query_id            string   length 32-32
query               string   length 6-5.38k
positive_passages   list     1-22 items
negative_passages   list     9-100 items
subset              string   7 classes
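The column summary above describes a reranking-style layout: each row pairs a query with a list of positive passages and a list of negative passages (each passage carries "docid", "text", and "title" fields), plus a subset label. The sketch below is an illustration only; the dataset identifier and split name are placeholders rather than values taken from this page. With those assumptions, rows following this schema could be loaded and iterated with the Hugging Face `datasets` library roughly like this:

```python
# Minimal sketch, assuming the rows follow the schema summarized above.
# The dataset ID and split are placeholders; substitute the real ones.
from datasets import load_dataset

ds = load_dataset("example-org/example-reranking-data", split="test")  # hypothetical ID

for row in ds:
    query_id = row["query_id"]            # 32-character hex string
    query = row["query"]                  # query text, 6 chars up to ~5.38k
    positives = row["positive_passages"]  # list of {"docid", "text", "title"}, 1-22 items
    negatives = row["negative_passages"]  # list of {"docid", "text", "title"}, 9-100 items
    subset = row["subset"]                # one of 7 subset names, e.g. "scidocsrr"
```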
efbe7c744693e9aac16e66d9aee8b2ef
Distance and similarity measures for hesitant fuzzy sets
[ { "docid": "82592f60e0039089e3c16d9534780ad5", "text": "A model for grey-tone image enhancement using the concept of fuzzy sets is suggested. It involves primary enhancement, smoothing, and then final enhancement. The algorithm for both the primary and final enhancements includes the extraction of fuzzy properties corresponding to pixels and then successive applications of the fuzzy operator \"contrast intensifier\" on the property plane. The three different smoothing techniques considered in the experiment are defocussing, averaging, and max-min rule over the neighbors of a pixel. The reduction of the \"index of fuzziness\" and \"entropy\" for different enhanced outputs (corresponding to different values of fuzzifiers) is demonstrated for an English script input. Enhanced output as obtained by histogram modification technique is also presented for comparison.", "title": "" } ]
[ { "docid": "85d7ff422f9753543494f6a1c4bdf21c", "text": "Early in the last century, 3 events put Colorado in the orthodontic spotlight: the discovery-by an orthodontist-of the caries-preventive powers of fluoridated water, the formation of dentistry's first specialty board, and the founding of a supply company by and for orthodontists. Meanwhile, inventive practitioners were giving the profession more choices of treatment modalities, and stainless steel was making its feeble debut.", "title": "" }, { "docid": "8824f01def7d13db2e436c074d459676", "text": "In this paper, numerical treatment is presented for the solution of boundary value problems of one-dimensional Bratu-type equations using artificial neural networks. Three types of transfer functions including Log-sigmoid, radial basis, and tan-sigmoid are used in the neural networks’ modeling. The optimum weights for all the three networks are searched with the interior point method. Various test cases of Bratu-type equations have been simulated using the developed models. The accuracy, convergence, and effectiveness of the methods are substantiated by a large number of simulation data for each model by taking enough independent runs.", "title": "" }, { "docid": "138d45574cee04ff8fa3020f5fe85a21", "text": "Physical contact between melanocytes and keratinocytes is a prerequisite for melanosome transfer to occur, but cellular signals induced during or after contact are not fully understood. Herein, it is shown that interactions between melanocyte and keratinocyte plasma membranes induced a transient intracellular calcium signal in keratinocytes that was required for pigment transfer. This intracellular calcium signal occurred due to release of calcium from intracellular stores. Pigment transfer observed in melanocyte-keratinocyte co-cultures was inhibited when intracellular calcium in keratinocytes was chelated. We propose that a 'ligand-receptor' type interaction exists between melanocytes and keratinocytes that triggers intracellular calcium signalling in keratinocytes and mediates melanin transfer.", "title": "" }, { "docid": "196ddcefb2c3fcb6edd5e8d108f7e219", "text": "This paper may be considered as a practical reference for those who wish to add (now sufficiently matured) Agent Based modeling to their analysis toolkit and may or may not have some System Dynamics or Discrete Event modeling background. We focus on systems that contain large numbers of active objects (people, business units, animals, vehicles, or even things like projects, stocks, products, etc. that have timing, event ordering or other kind of individual behavior associated with them). We compare the three major paradigms in simulation modeling: System Dynamics, Discrete Event and Agent Based Modeling with respect to how they approach such systems. We show in detail how an Agent Based model can be built from an existing System Dynamics or a Discrete Event model and then show how easily it can be further enhanced to capture much more complicated behavior, dependencies and interactions thus providing for deeper insight in the system being modeled. Commonly understood examples are used throughout the paper; all models are specified in the visual language supported by AnyLogic tool. We view and present Agent Based modeling not as a substitution to older modeling paradigms but as a useful add-on that can be efficiently combined with System Dynamics and Discrete Event modeling. 
Several multi-paradigm model architectures are suggested.", "title": "" }, { "docid": "1f28ca58aabd0e2523492308c4da3929", "text": "Sepsis is a leading cause of in-hospital death over the world and septic shock, the most severe complication of sepsis, reaches a mortality rate as high as 50%. Early diagnosis and treatment can prevent most morbidity and mortality. In this work, Recent Temporal Patterns (RTPs) are used in conjunction with SVM classifier to build a robust yet interpretable model for early diagnosis of septic shock. This model is applied to two different prediction tasks: visit-level early diagnosis and event-level early prediction. For each setting, this model is compared against several strong baselines including atemporal method called Last-Value, six classic machine learning algorithms, and lastly, a state-of-the-art deep learning model: Long Short-Term Memory (LSTM). Our results suggest that RTP-based model can outperform all aforementioned baseline models for both diagnosis tasks. More importantly, the extracted interpretative RTPs can shed lights for the clinicians to discover progression behavior and latent patterns among septic shock patients.", "title": "" }, { "docid": "3cc0707cec7af22db42e530399e762a8", "text": "While watching television, people increasingly consume additional content related to what they are watching. We consider the task of finding video content related to a live television broadcast for which we leverage the textual stream of subtitles associated with the broadcast. We model this task as a Markov decision process and propose a method that uses reinforcement learning to directly optimize the retrieval effectiveness of queries generated from the stream of subtitles. Our dynamic query modeling approach significantly outperforms state-of-the-art baselines for stationary query modeling and for text-based retrieval in a television setting. In particular we find that carefully weighting terms and decaying these weights based on recency significantly improves effectiveness. Moreover, our method is highly efficient and can be used in a live television setting, i.e., in near real time.", "title": "" }, { "docid": "4ac15541b7d1f77f55da749e3871efea", "text": "Acidovorax avenae subsp. citrulli is the causal agent of bacterial fruit blotch (BFB), a threatening disease of watermelon, melon, and other cucurbits. Despite the economic importance of BFB, relatively little is known about basic aspects of the pathogen's biology and the molecular basis of its interaction with host plants. To identify A. avenae subsp. citrulli genes associated with pathogenicity, we generated a transposon (Tn5) mutant library on the background of strain M6, a group I strain of A. avenae subsp. citrulli, and screened it for reduced virulence by seed-transmission assays with melon. Here, we report the identification of a Tn5 mutant with reduced virulence that is impaired in pilM, which encodes a protein involved in assembly of type IV pili (TFP). Further characterization of this mutant revealed that A. avenae subsp. citrulli requires TFP for twitching motility and wild-type levels of biofilm formation. Significant reductions in virulence and biofilm formation as well as abolishment of twitching were also observed in insertional mutants affected in other TFP genes. We also provide the first evidence that group I strains of A. avenae subsp. 
citrulli can colonize and move through host xylem vessels.", "title": "" }, { "docid": "b214270aacf9c9672af06e58ff26aa5a", "text": "Traditional techniques for measuring similarities between time series are based on handcrafted similarity measures, whereas more recent learning-based approaches cannot exploit external supervision. We combine ideas from timeseries modeling and metric learning, and study siamese recurrent networks (SRNs) that minimize a classification loss to learn a good similarity measure between time series. Specifically, our approach learns a vectorial representation for each time series in such a way that similar time series are modeled by similar representations, and dissimilar time series by dissimilar representations. Because it is a similarity prediction models, SRNs are particularly well-suited to challenging scenarios such as signature recognition, in which each person is a separate class and very few examples per class are available. We demonstrate the potential merits of SRNs in withindomain and out-of-domain classification experiments and in one-shot learning experiments on tasks such as signature, voice, and sign language recognition.", "title": "" }, { "docid": "ae4ffd43ea098581aa1d1980e61ebe6c", "text": "In response to the new challenges in the design and operation of communication networks, and taking inspiration from how living beings deal with complexity and scalability, in this position paper we introduce an innovative system concept called COgnition-BAsed NETworkS (COBANETS). The proposed approach develops around the systematic application of advanced machine learning techniques and, in particular, unsupervised deep learning and probabilistic generative models for system-wide learning, modeling, optimization, and data representation. Moreover, in COBANETS we propose to combine the learning architecture with the emerging network virtualization paradigms, which make it possible to actuate automatic optimization and reconfiguration strategies at the system level, thus fully unleashing the potential of the learning approach. Compared to past and current research efforts in this area, the technical approach depicted in this paper is deeply interdisciplinary and more comprehensive, calling for the synergic combination of expertise of computer scientists, communications and networking engineers, and cognitive scientists, with the ultimate aim of breaking new ground through a profound rethinking of how the modern understanding of cognition can be used in the management and optimization of telecommunication networks.", "title": "" }, { "docid": "6a143e9aab34836fc34ffcd6cc9d1096", "text": "MOTIVATION\nDNA microarrays are now capable of providing genome-wide patterns of gene expression across many different conditions. The first level of analysis of these patterns requires determining whether observed differences in expression are significant or not. Current methods are unsatisfactory due to the lack of a systematic framework that can accommodate noise, variability, and low replication often typical of microarray data.\n\n\nRESULTS\nWe develop a Bayesian probabilistic framework for microarray data analysis. At the simplest level, we model log-expression values by independent normal distributions, parameterized by corresponding means and variances with hierarchical prior distributions. 
We derive point estimates for both parameters and hyperparameters, and regularized expressions for the variance of each gene by combining the empirical variance with a local background variance associated with neighboring genes. An additional hyperparameter, inversely related to the number of empirical observations, determines the strength of the background variance. Simulations show that these point estimates, combined with a t -test, provide a systematic inference approach that compares favorably with simple t -test or fold methods, and partly compensate for the lack of replication.", "title": "" }, { "docid": "ffc2db2f3762b77af679f2a757bbc745", "text": "We study, for the first time, automated inference on criminality based solely on still face images. Via supervised machine learning, we build four classifiers (logistic regression, KNN, SVM, CNN) using facial images of 1856 real persons controlled for race, gender, age and facial expressions, nearly half of whom were convicted criminals, for discriminating between criminals and non-criminals. All four classifiers perform consistently well and produce evidence for the validity of automated face-induced inference on criminality, despite the historical controversy surrounding the topic. Also, we find some discriminating structural features for predicting criminality, such as lip curvature, eye inner corner distance, and the so-called nose-mouth angle. Above all, the most important discovery of this research is that criminal and non-criminal face images populate two quite distinctive manifolds. The variation among criminal faces is significantly greater than that of the non-criminal faces. The two manifolds consisting of criminal and non-criminal faces appear to be concentric, with the non-criminal manifold lying in the kernel with a smaller span, exhibiting a law of normality for faces of non-criminals. In other words, the faces of general law-biding public have a greater degree of resemblance compared with the faces of criminals, or criminals have a higher degree of dissimilarity in facial appearance than normal people.", "title": "" }, { "docid": "d15dc60ef2fb1e6096a3aba372698fd9", "text": "One of the most interesting applications of Industry 4.0 paradigm is enhanced process control. Traditionally, process control solutions based on Cyber-Physical Systems (CPS) consider a top-down view where processes are represented as executable high-level descriptions. However, most times industrial processes follow a bottom-up model where processes are executed by low-level devices which are hard-programmed with the process to be executed. Thus, high-level components only may supervise the process execution as devices cannot modify dynamically their behavior. Therefore, in this paper we propose a vertical CPS-based solution (including a reference and a functional architecture) adequate to perform enhanced process control in Industry 4.0 scenarios with a bottom-up view. The proposed solution employs an event-driven service-based architecture where control is performed by means of finite state machines. Furthermore, an experimental validation is provided proving that in more than 97% of cases the proposed solution allows a stable and effective control.", "title": "" }, { "docid": "8bc615dfa51a9c5835660c1b0eb58209", "text": "Large scale grid connected photovoltaic (PV) energy conversion systems have reached the megawatt level. This imposes new challenges on existing grid interface converter topologies and opens new opportunities to be explored. 
In this paper a new medium voltage multilevel-multistring configuration is introduced based on a three-phase cascaded H-bridge (CHB) converter and multiple string dc-dc converters. The proposed configuration enables a large increase of the total capacity of the PV system, while improving power quality and efficiency. The converter structure is very flexible and modular since it decouples the grid converter from the PV string converter, which allows to accomplish independent control goals. The main challenge of the proposed configuration is to handle the inherent power imbalances that occur not only between the different cells of one phase of the converter but also between the three phases. The control strategy to deal with these imbalances is also introduced in this paper. Simulation results of a 7-level CHB for a multistring PV system are presented to validate the proposed topology and control method.", "title": "" }, { "docid": "e870f2fe9a26b241bdeca882b6186169", "text": "Some people may be laughing when looking at you reading in your spare time. Some may be admired of you. And some may want be like you who have reading hobby. What about your own feel? Have you felt right? Reading is a need and a hobby at once. This condition is the on that will make you feel that you must read. If you know are looking for the book enPDFd recommender systems handbook as the choice of reading, you can find here.", "title": "" }, { "docid": "d657085072f829db812a2735d0e7f41c", "text": "Recently, increasing attention has been drawn to training semantic segmentation models using synthetic data and computer-generated annotation. However, domain gap remains a major barrier and prevents models learned from synthetic data from generalizing well to real-world applications. In this work, we take the advantage of additional geometric information from synthetic data, a powerful yet largely neglected cue, to bridge the domain gap. Such geometric information can be generated easily from synthetic data, and is proven to be closely coupled with semantic information. With the geometric information, we propose a model to reduce domain shift on two levels: on the input level, we augment the traditional image translation network with the additional geometric information to translate synthetic images into realistic styles; on the output level, we build a task network which simultaneously performs depth estimation and semantic segmentation on the synthetic data. Meanwhile, we encourage the network to preserve the correlation between depth and semantics by adversarial training on the output space. We then validate our method on two pairs of synthetic to real dataset: Virtual KITTI→KITTI, and SYNTHIA→Cityscapes, where we achieve a significant performance gain compared to the non-adaptive baseline and methods without using geometric information. This demonstrates the usefulness of geometric information from synthetic data for cross-domain semantic segmentation.", "title": "" }, { "docid": "310aa0a02f8fc8b7b6d31c987a12a576", "text": "We describe a novel markerless camera tracking approach and user interaction methodology for augmented reality (AR) on unprepared tabletop environments. We propose a real-time system architecture that combines two types of feature tracking. Distinctive image features of the scene are detected and tracked frame-to-frame by computing optical flow. 
In order to achieve real-time performance, multiple operations are processed in a synchronized multi-threaded manner: capturing a video frame, tracking features using optical flow, detecting distinctive invariant features, and rendering an output frame. We also introduce user interaction methodology for establishing a global coordinate system and for placing virtual objects in the AR environment by tracking a user's outstretched hand and estimating a camera pose relative to it. We evaluate the speed and accuracy of our hybrid feature tracking approach, and demonstrate a proof-of-concept application for enabling AR in unprepared tabletop environments, using bare hands for interaction.", "title": "" }, { "docid": "107bb53e3ceda3ee29fc348febe87f11", "text": "The objective here is to develop a flat surface area measuring system which is used to calculate the surface area of any irregular sheet. The irregular leather sheet is used in this work. The system is self protected by user name and password set through software for security purpose. Only authorize user can enter into the system by entering the valid pin code. After entering into the system, the user can measure the area of any irregular sheet, monitor and control the system. The heart of the system is Programmable Logic Controller (Master K80S) which controls the complete working of the system. The controlling instructions for the system are given through the designed Human to Machine Interface (HMI). For communication purpose the GSM modem is also interfaced with the Programmable Logic Controller (PLC). The remote user can also monitor the current status of the devices by sending SMS message to the GSM modem.", "title": "" }, { "docid": "a8534157b31e858b5825acd8f4fff269", "text": "In recent years, the Smart City concept is emerging as a way to increase efficiency, reduce costs, and improve the overall quality of citizen life. The rise of Smart City solutions is encouraged by the increasing availability of Internet of Things (IoT) devices and crowd sensing technologies. This paper presents an IoT Crowd Sensing platform that offers a set of services to citizens by exploiting a network of bicycles as IoT probes. Based on a survey conducted to identify the most interesting bike-enabled services, the SmartBike platform provides: real time remote geo-location of users’ bikes, anti-theft service, information about traveled route, and air pollution monitoring. The proposed SmartBike platform is composed of three main components: the SmartBike mobile sensors for data collection installed on the bicycle; the end-user devices implementing the user interface for geo-location and anti-theft; and the SmartBike central servers for storing and processing detected data and providing a web interface for data visualization. The suitability of the platform was evaluated through the implementation of an initial prototype. Results demonstrate that the proposed SmartBike platform is able to provide the stated services, and, in addition, that the accuracy of the acquired air quality measurements is compatible with the one provided by the official environmental monitoring system of the city of Turin. 
The described platform will be adopted within a project promoted by the city of Turin, that aims at helping people making their mobility behavior more sustainable.", "title": "" }, { "docid": "d911ccb1bbb761cbfee3e961b8732534", "text": "This paper presents a study on SIFT (Scale Invariant Feature transform) which is a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scaling, translation, and rotation, and partially invariant to illumination changes and affine or 3D projection. There are various applications of SIFT that includes object recognition, robotic mapping and navigation, image stitching, 3D modeling, gesture recognition, video tracking, individual identification of wildlife and match moving.", "title": "" }, { "docid": "8cd62b12b4406db29b289a3e1bd5d05a", "text": "Humor generation is a very hard problem in the area of computational humor. In this paper, we present a joke generation model based on neural networks. The model can generate a short joke relevant to the topic that the user specifies. Inspired by the architecture of neural machine translation and neural image captioning, we use an encoder for representing user-provided topic information and an RNN decoder for joke generation. We trained the model by short jokes of Conan O’Brien with the help of POS Tagger. We evaluate the performance of our model by human ratings from five English speakers. In terms of the average score, our model outperforms a probabilistic model that puts words into slots in a fixed-structure sentence.", "title": "" } ]
scidocsrr
e4704bc34bfb4b243ee5295fcb57ece2
Universal Dependencies : A cross-linguistic typology
[ { "docid": "f52dca1ec4b77059639f6faf7c79746a", "text": "We present an automatic approach to tree annotation in which basic nonterminal symbols are alternately split and merged to maximize the likelihood of a training treebank. Starting with a simple Xbar grammar, we learn a new grammar whose nonterminals are subsymbols of the original nonterminals. In contrast with previous work, we are able to split various terminals to different degrees, as appropriate to the actual complexity in the data. Our grammars automatically learn the kinds of linguistic distinctions exhibited in previous work on manual tree annotation. On the other hand, our grammars are much more compact and substantially more accurate than previous work on automatic annotation. Despite its simplicity, our best grammar achieves an F1 of 90.2% on the Penn Treebank, higher than fully lexicalized systems.", "title": "" }, { "docid": "3a0108dab06fdb6c2764665057ce1564", "text": "Stanford Dependencies (SD) provide a functional characterization of the grammatical relations in syntactic parse-trees. The SD representation is useful for parser evaluation, for downstream applications, and, ultimately, for natural language understanding, however, the design of SD focuses on structurally-marked relations and under-represents morphosyntactic realization patterns observed in Morphologically Rich Languages (MRLs). We present a novel extension of SD, called Unified-SD (U-SD), which unifies the annotation of structurallyand morphologically-marked relations via an inheritance hierarchy. We create a new resource composed of U-SDannotated constituency and dependency treebanks for the MRL Modern Hebrew, and present two systems that can automatically predict U-SD annotations, for gold segmented input as well as raw texts, with high baseline accuracy.", "title": "" }, { "docid": "4cdef79370abcd380357c8be92253fa5", "text": "In order to realize the full potential of dependency-based syntactic parsing, it is desirable to allow non-projective dependency structures. We show how a datadriven deterministic dependency parser, in itself restricted to projective structures, can be combined with graph transformation techniques to produce non-projective structures. Experiments using data from the Prague Dependency Treebank show that the combined system can handle nonprojective constructions with a precision sufficient to yield a significant improvement in overall parsing accuracy. This leads to the best reported performance for robust non-projective parsing of Czech.", "title": "" } ]
[ { "docid": "21cde70c4255e706cb05ff38aec99406", "text": "In this paper, a multiple classifier machine learning (ML) methodology for predictive maintenance (PdM) is presented. PdM is a prominent strategy for dealing with maintenance issues given the increasing need to minimize downtime and associated costs. One of the challenges with PdM is generating the so-called “health factors,” or quantitative indicators, of the status of a system associated with a given maintenance issue, and determining their relationship to operating costs and failure risk. The proposed PdM methodology allows dynamical decision rules to be adopted for maintenance management, and can be used with high-dimensional and censored data problems. This is achieved by training multiple classification modules with different prediction horizons to provide different performance tradeoffs in terms of frequency of unexpected breaks and unexploited lifetime, and then employing this information in an operating cost-based maintenance decision system to minimize expected costs. The effectiveness of the methodology is demonstrated using a simulated example and a benchmark semiconductor manufacturing maintenance problem.", "title": "" }, { "docid": "559637a4f8f5b99bb3210c5c7d03d2e0", "text": "Third-generation personal navigation assistants (PNAs) (i.e., those that provide a map, the user's current location, and directions) must be able to reconcile the user's location with the underlying map. This process is known as map matching. Most existing research has focused on map matching when both the user's location and the map are known with a high degree of accuracy. However, there are many situations in which this is unlikely to be the case. Hence, this paper considers map matching algorithms that can be used to reconcile inaccurate locational data with an inaccurate map/network. Ó 2000 Published by Elsevier Science Ltd.", "title": "" }, { "docid": "89e25ae1d0f5dbe3185a538c2318b447", "text": "This paper presents a fully-integrated 3D image radar engine utilizing beamforming for electrical scanning and precise ranging technique for distance measurement. Four transmitters and four receivers form a sensor frontend with phase shifters and power combiners adjusting the beam direction. A built-in 31.3 GHz clock source and a frequency tripler provide both RF carrier and counting clocks for the distance measurement. Flip-chip technique with low-temperature co-fired ceramic (LTCC) antenna design creates a miniature module as small as 6.5 × 4.4 × 0.8 cm3. Designed and fabricated in 65 nm CMOS technology, the transceiver array chip dissipates 960 mW from a 1.2-V supply and occupies chip area of 3.6 × 2.1 mm 2. This prototype achieves ±28° scanning range, 2-m maximum distance, and 1 mm depth resolution.", "title": "" }, { "docid": "7dc5e63ddbb8ec509101299924093c8b", "text": "The task of aspect and opinion terms co-extraction aims to explicitly extract aspect terms describing features of an entity and opinion terms expressing emotions from user-generated texts. To achieve this task, one effective approach is to exploit relations between aspect terms and opinion terms by parsing syntactic structure for each sentence. However, this approach requires expensive effort for parsing and highly depends on the quality of the parsing results. In this paper, we offer a novel deep learning model, named coupled multi-layer attentions. 
The proposed model provides an end-to-end solution and does not require any parsers or other linguistic resources for preprocessing. Specifically, the proposed model is a multilayer attention network, where each layer consists of a couple of attentions with tensor operators. One attention is for extracting aspect terms, while the other is for extracting opinion terms. They are learned interactively to dually propagate information between aspect terms and opinion terms. Through multiple layers, the model can further exploit indirect relations between terms for more precise information extraction. Experimental results on three benchmark datasets in SemEval Challenge 2014 and 2015 show that our model achieves stateof-the-art performances compared with several baselines.", "title": "" }, { "docid": "b280d6115add9407a08de94d34fe47d2", "text": "Terabytes of data are generated day-to-day from modern information systems, cloud computing and digital technologies, as the increasing number of Internet connected devices grows. However, the analysis of these massive data requires many efforts at multiple levels for knowledge extraction and decision making. Therefore, Big Data Analytics is a current area of research and development that has become increasingly important. This article investigates cutting-edge research efforts aimed at analyzing Internet of Things (IoT) data. The basic objective of this article is to explore the potential impact of large data challenges, research efforts directed towards the analysis of IoT data and various tools associated with its analysis. As a result, this article suggests the use of platforms to explore big data in numerous stages and better understand the knowledge we can draw from the data, which opens a new horizon for researchers to develop solutions based on open research challenges and topics.", "title": "" }, { "docid": "6c2a033b374b4318cd94f0a617ec705a", "text": "In this paper, we propose to use Deep Neural Net (DNN), which has been recently shown to reduce speech recognition errors significantly, in Computer-Aided Language Learning (CALL) to evaluate English learners’ pronunciations. Multi-layer, stacked Restricted Boltzman Machines (RBMs), are first trained as nonlinear basis functions to represent speech signals succinctly, and the output layer is discriminatively trained to optimize the posterior probabilities of correct, sub-phonemic “senone” states. Three Goodness of Pronunciation (GOP) scores, including: the likelihood-based posterior probability, averaged framelevel posteriors of the DNN output layer “senone” nodes, and log likelihood ratio of correct and competing models, are tested with recordings of both native and non-native speakers, along with manual grading of pronunciation quality. The experimental results show that the GOP estimated by averaged frame-level posteriors of “senones” correlate with human scores the best. Comparing with GOPs estimated with non-DNN, i.e. GMMHMM, based models, the new approach can improve the correlations relatively by 22.0% or 15.6%, at word or sentence levels, respectively. In addition, the frame-level posteriors, which doesn’t need a decoding lattice and its corresponding forwardbackward computations, is suitable for supporting fast, on-line, multi-channel applications.", "title": "" }, { "docid": "8ed122ede076474bdad5c8fa2c8fd290", "text": "Faced with changing markets and tougher competition, more and more companies realize that to compete effectively they must transform how they function. 
But while senior managers understand the necessity of change, they often misunderstand what it takes to bring it about. They assume that corporate renewal is the product of company-wide change programs and that in order to transform employee behavior, they must alter a company's formal structure and systems. Both these assumptions are wrong, say these authors. Using examples drawn from their four-year study of organizational change at six large corporations, they argue that change programs are, in fact, the greatest obstacle to successful revitalization and that formal structures and systems are the last thing a company should change, not the first. The most successful change efforts begin at the periphery of a corporation, in a single plant or division. Such efforts are led by general managers, not the CEO or corporate staff people. And these general managers concentrate not on changing formal structures and systems but on creating ad hoc organizational arrangements to solve concrete business problems. This focuses energy for change on the work itself, not on abstractions such as \"participation\" or \"culture.\" Once general managers understand the importance of this grass-roots approach to change, they don't have to wait for senior management to start a process of corporate renewal. The authors describe a six-step change process they call the \"critical path.\"", "title": "" }, { "docid": "5e7976392b26e7c2172d2e5c02d85c57", "text": "A multiprocessor virtual machine benefits its guest operating system in supporting scalable job throughput and request latency—useful properties in server consolidation where servers require several of the system processors for steady state or to handle load bursts. Typical operating systems, optimized for multiprocessor systems in their use of spin-locks for critical sections, can defeat flexible virtual machine scheduling due to lock-holder preemption and misbalanced load. The virtual machine must assist the guest operating system to avoid lock-holder preemption and to schedule jobs with knowledge of asymmetric processor allocation. We want to support a virtual machine environment with flexible scheduling policies, while maximizing guest performance. This paper presents solutions to avoid lock-holder preemption for both fully virtualized and paravirtualized environments. Experiments show that we can nearly eliminate the effects of lock-holder preemption. 
Furthermore, the paper presents a scheduler feedback mechanism that despite the presence of asymmetric processor allocation achieves optimal and fair load balancing in the guest operating system.", "title": "" }, { "docid": "51a67685249e0108c337d53b5b1c7c92", "text": "CONTEXT\nEvidence suggests that early adverse experiences play a preeminent role in development of mood and anxiety disorders and that corticotropin-releasing factor (CRF) systems may mediate this association.\n\n\nOBJECTIVE\nTo determine whether early-life stress results in a persistent sensitization of the hypothalamic-pituitary-adrenal axis to mild stress in adulthood, thereby contributing to vulnerability to psychopathological conditions.\n\n\nDESIGN AND SETTING\nProspective controlled study conducted from May 1997 to July 1999 at the General Clinical Research Center of Emory University Hospital, Atlanta, Ga.\n\n\nPARTICIPANTS\nForty-nine healthy women aged 18 to 45 years with regular menses, with no history of mania or psychosis, with no active substance abuse or eating disorder within 6 months, and who were free of hormonal and psychotropic medications were recruited into 4 study groups (n = 12 with no history of childhood abuse or psychiatric disorder [controls]; n = 13 with diagnosis of current major depression who were sexually or physically abused as children; n = 14 without current major depression who were sexually or physically abused as children; and n = 10 with diagnosis of current major depression and no history of childhood abuse).\n\n\nMAIN OUTCOME MEASURES\nAdrenocorticotropic hormone (ACTH) and cortisol levels and heart rate responses to a standardized psychosocial laboratory stressor compared among the 4 study groups.\n\n\nRESULTS\nWomen with a history of childhood abuse exhibited increased pituitary-adrenal and autonomic responses to stress compared with controls. This effect was particularly robust in women with current symptoms of depression and anxiety. Women with a history of childhood abuse and a current major depression diagnosis exhibited a more than 6-fold greater ACTH response to stress than age-matched controls (net peak of 9.0 pmol/L [41.0 pg/mL]; 95% confidence interval [CI], 4.7-13.3 pmol/L [21.6-60. 4 pg/mL]; vs net peak of 1.4 pmol/L [6.19 pg/mL]; 95% CI, 0.2-2.5 pmol/L [1.0-11.4 pg/mL]; difference, 8.6 pmol/L [38.9 pg/mL]; 95% CI, 4.6-12.6 pmol/L [20.8-57.1 pg/mL]; P<.001).\n\n\nCONCLUSIONS\nOur findings suggest that hypothalamic-pituitary-adrenal axis and autonomic nervous system hyperreactivity, presumably due to CRF hypersecretion, is a persistent consequence of childhood abuse that may contribute to the diathesis for adulthood psychopathological conditions. Furthermore, these results imply a role for CRF receptor antagonists in the prevention and treatment of psychopathological conditions related to early-life stress. JAMA. 2000;284:592-597", "title": "" }, { "docid": "686593aca763bf003219dc1faf05cd36", "text": "This chapter examines positive teacher-student relationships, as seen through a variety of psychological models and provides recommendations for schools and teachers.", "title": "" }, { "docid": "4b570eb16d263b2df0a8703e9135f49c", "text": "ions. They also presume that consumers carefully calculate the give and get components of value, an assumption that did not hold true for most consumers in the exploratory study. Price as a Quality Indicator Most experimental studies related to quality have focused on price as the key extrinsic quality signal. 
As suggested in the propositions, price is but one of several potentially useful extrinsic cues; brand name or package may be equally or more important, especially in packaged goods. Further, evidence of a generalized price-perceived quality relationship is inconclusive. Quality research may benefit from a de-emphasis on price as the main extrinsic quality indicator. Inclusion of other important indicators, as well as identification of situations in which each of those indicators is important, may provide more interesting and useful answers about the extrinsic signals consumers use. Management Implications An understanding of what quality and value mean to consumers offers the promise of improving brand positions through more precise market analysis and segmentation, product planning, promotion, and pricing strategy. The model presented here suggests the following strategies that can be implemented to understand and capitalize on brand quality and value. Close the Quality Perception Gap Though managers increasingly acknowledge the importance of quality, many continue to define and measure it from the company's perspective. Closing the gap between objective and perceived quality requires that the company view quality the way the consumer does. Research that investigates which cues are important and how consumers form impressions of qualConsumer Perceptions of Price, Quality, and Value / 17 ity based on those technical, objective cues is necessary. Companies also may benefit from research that identifies the abstract dimensions of quality desired by consumers in a product class. Identify Key Intrinsic and Extrinsic Attribute", "title": "" }, { "docid": "3aa58539c69d6706bc0a9ca0256cdf80", "text": "BACKGROUND\nAcne vulgaris is a prevalent skin disorder impairing both physical and psychosocial health. This study was designed to investigate the effectiveness of photodynamic therapy (PDT) combined with minocycline in moderate to severe facial acne and influence on quality of life (QOL).\n\n\nMETHODS\nNinety-five patients with moderate to severe facial acne (Investigator Global Assessment [IGA] score 3-4) were randomly treated with PDT and minocycline (n = 48) or minocycline alone (n = 47). All patients took minocycline hydrochloride 100 mg/d for 4 weeks, whereas patients in the minocycline plus PDT group also received 4 times PDT treatment 1 week apart. IGA score, lesion counts, Dermatology Life Quality Index (DLQI), and safety evaluation were performed before treatment and at 2, 4, 6, and 8 weeks after enrolment.\n\n\nRESULTS\nThere were no statistically significant differences in characteristics between 2 treatment groups at baseline. Minocycline plus PDT treatment led to a greater mean percentage reduction from baseline in lesion counts versus minocycline alone at 8 weeks for both inflammatory (-74.4% vs -53.3%; P < .001) and noninflammatory lesions (-61.7% vs -42.4%; P < .001). More patients treated with minocycline plus PDT achieved IGA score <2 at study end (week 8: 30/48 vs 20/47; P < .05). Patients treated with minocycline plus PDT got significant lower DLQI at 8 weeks (4.4 vs 6.3; P < .001). 
Adverse events were mild and manageable.\n\n\nCONCLUSIONS\nCompared with minocycline alone, the combination of PDT with minocycline significantly improved clinical efficacy and QOL in moderate to severe facial acne patients.", "title": "" }, { "docid": "8a24f9d284507765e0026ae8a70fc482", "text": "The diagnosis of pulmonary tuberculosis in patients with Human Immunodeficiency Virus (HIV) is complicated by the increased presence of sputum smear negative tuberculosis. Diagnosis of smear negative pulmonary tuberculosis is made by an algorithm recommended by the National Tuberculosis and Leprosy Programme that uses symptoms, signs and laboratory results. The objective of this study is to determine the sensitivity and specificity of the tuberculosis treatment algorithm used for the diagnosis of sputum smear negative pulmonary tuberculosis. A cross-section study with prospective enrollment of patients was conducted in Dar-es-Salaam Tanzania. For patients with sputum smear negative, sputum was sent for culture. All consenting recruited patients were counseled and tested for HIV. Patients were evaluated using the National Tuberculosis and Leprosy Programme guidelines and those fulfilling the criteria of having active pulmonary tuberculosis were started on anti tuberculosis therapy. Remaining patients were provided appropriate therapy. A chest X-ray, mantoux test, and Full Blood Picture were done for each patient. The sensitivity and specificity of the recommended algorithm was calculated. Predictors of sputum culture positive were determined using multivariate analysis. During the study, 467 subjects were enrolled. Of those, 318 (68.1%) were HIV positive, 127 (27.2%) had sputum culture positive for Mycobacteria Tuberculosis, of whom 66 (51.9%) were correctly treated with anti-Tuberculosis drugs and 61 (48.1%) were missed and did not get anti-Tuberculosis drugs. Of the 286 subjects with sputum culture negative, 107 (37.4%) were incorrectly treated with anti-Tuberculosis drugs. The diagnostic algorithm for smear negative pulmonary tuberculosis had a sensitivity and specificity of 38.1% and 74.5% respectively. The presence of a dry cough, a high respiratory rate, a low eosinophil count, a mixed type of anaemia and presence of a cavity were found to be predictive of smear negative but culture positive pulmonary tuberculosis. The current practices of establishing pulmonary tuberculosis diagnosis are not sensitive and specific enough to establish the diagnosis of Acid Fast Bacilli smear negative pulmonary tuberculosis and over treat people with no pulmonary tuberculosis.", "title": "" }, { "docid": "b9022ac8992c0a59fefb7de43aa54eca", "text": "Although scholars have repeatedly linked video games to aggression, little research has investigated how specific game characteristics might generate such effects. In this study, we consider how game mode—cooperative, competitive, or solo—shapes aggressive cognition. Using experimental data, we find partial support for the idea that cooperative play modes prompt less aggressive cognition. Further analysis of potential mediating variables along with the influence of gender suggests the effect is primarily explained by social learning rather than frustration.", "title": "" }, { "docid": "244b82dd22612ac7d964dfae903022d4", "text": "Vector-space models, from word embeddings to neural network parsers, have many advantages for NLP. But how to generalise from fixed-length word vectors to a vector space for arbitrary linguistic structures is still unclear. 
In this paper we propose bag-of-vector embeddings of arbitrary linguistic graphs. A bag-of-vector space is the minimal nonparametric extension of a vector space, allowing the representation to grow with the size of the graph, but not tying the representation to any specific tree or graph structure. We propose efficient training and inference algorithms based on tensor factorisation for embedding arbitrary graphs in a bag-ofvector space. We demonstrate the usefulness of this representation by training bag-of-vector embeddings of dependency graphs and evaluating them on unsupervised semantic induction for the Semantic Textual Similarity and Natural Language Inference tasks.", "title": "" }, { "docid": "c22b598200cf68ab26c0c92cbb182b4a", "text": "With the rise of Web-based applications, it is both important and feasible for human-computer interaction practitioners to measure a product’s user experience. While quantifying user attitudes at a small scale has been heavily studied, in this industry case study, we detail best Happiness Tracking Surveys (HaTS) for collecting attitudinal data at a large scale directly in the product and over time. This method was developed at Google to track attitudes and open-ended feedback over time, and to characterize products’ user bases. This case study of HaTS goes beyond the design of the questionnaire to also suggest best practices for appropriate sampling, invitation techniques, and its data analysis. HaTS has been deployed successfully across dozens of Google’s products to measure progress towards product goals and to inform product decisions; its sensitivity to product changes has been demonstrated widely. We are confident that teams in other organizations will be able to embrace HaTS as well, and, if necessary, adapt it for their unique needs.", "title": "" }, { "docid": "f0659349cab12decbc4d07eb74361b79", "text": "This article suggests that the context and process of resource selection have an important influence on firm heterogeneity and sustainable competitive advantage. It is argued that a firm’s sustainable advantage depends on its ability to manage the institutional context of its resource decisions. A firm’s institutional context includes its internal culture as well as broader influences from the state, society, and interfirm relations that define socially acceptable economic behavior. A process model of firm heterogeneity is proposed that combines the insights of a resourcebased view with the institutional perspective from organization theory. Normative rationality, institutional isolating mechanisms, and institutional sources of firm homogeneity are proposed as determinants of rent potential that complement and extend resource-based explanations of firm variation and sustainable competitive advantage. The article suggests that both resource capital and institutional capital are indispensable to sustainable competitive advantage.  1997 by John Wiley & Sons, Ltd.", "title": "" }, { "docid": "ce03a26947b37829043406fe671869c5", "text": "Diagnosing students' knowledge proficiency, i.e., the mastery degrees of a particular knowledge point in exercises, is a crucial issue for numerous educational applications, e.g., targeted knowledge training and exercise recommendation. Educational theories have converged that students learn and forget knowledge from time to time. Thus, it is necessary to track their mastery of knowledge over time. 
However, traditional methods in this area either ignored the explanatory power of the diagnosis results on knowledge points or relied on a static assumption. To this end, in this paper, we devise an explanatory probabilistic approach to track the knowledge proficiency of students over time by leveraging educational priors. Specifically, we first associate each exercise with a knowledge vector in which each element represents an explicit knowledge point by leveraging educational priors (i.e., Q-matrix ). Correspondingly, each student is represented as a knowledge vector at each time in a same knowledge space. Second, given the student knowledge vector over time, we borrow two classical educational theories (i.e., Learning curve and Forgetting curve ) as priors to capture the change of each student's proficiency over time. After that, we design a probabilistic matrix factorization framework by combining student and exercise priors for tracking student knowledge proficiency. Extensive experiments on three real-world datasets demonstrate both the effectiveness and explanatory power of our proposed model.", "title": "" }, { "docid": "a9a22c9c57e9ba8c3deefbea689258d5", "text": "Functional neuroimaging studies have shown that romantic love and maternal love are mediated by regions specific to each, as well as overlapping regions in the brain's reward system. Nothing is known yet regarding the neural underpinnings of unconditional love. The main goal of this functional magnetic resonance imaging study was to identify the brain regions supporting this form of love. Participants were scanned during a control condition and an experimental condition. In the control condition, participants were instructed to simply look at a series of pictures depicting individuals with intellectual disabilities. In the experimental condition, participants were instructed to feel unconditional love towards the individuals depicted in a series of similar pictures. Significant loci of activation were found, in the experimental condition compared with the control condition, in the middle insula, superior parietal lobule, right periaqueductal gray, right globus pallidus (medial), right caudate nucleus (dorsal head), left ventral tegmental area and left rostro-dorsal anterior cingulate cortex. These results suggest that unconditional love is mediated by a distinct neural network relative to that mediating other emotions. This network contains cerebral structures known to be involved in romantic love or maternal love. Some of these structures represent key components of the brain's reward system.", "title": "" } ]
scidocsrr
821d92bbd18d5d92ee6719432bf33d69
Accelerated Mini-batch Randomized Block Coordinate Descent Method
[ { "docid": "e2a9bb49fd88071631986874ea197bc1", "text": "We consider the class of iterative shrinkage-thresholding algorithms (ISTA) for solving linear inverse problems arising in signal/image processing. This class of methods, which can be viewed as an extension of the classical gradient algorithm, is attractive due to its simplicity and thus is adequate for solving large-scale problems even with dense matrix data. However, such methods are also known to converge quite slowly. In this paper we present a new fast iterative shrinkage-thresholding algorithm (FISTA) which preserves the computational simplicity of ISTA but with a global rate of convergence which is proven to be significantly better, both theoretically and practically. Initial promising numerical results for wavelet-based image deblurring demonstrate the capabilities of FISTA which is shown to be faster than ISTA by several orders of magnitude.", "title": "" } ]
[ { "docid": "21ffd3ae843e694a052ed14edb5ec149", "text": "This article discusses the need for more satisfactory implicit measures in consumer psychology and assesses the theoretical foundations, validity, and value of the Implicit Association Test (IAT) as a measure of implicit consumer social cognition. Study 1 demonstrates the IAT’s sen­ sitivity to explicit individual differences in brand attitudes, ownership, and usage frequency, and shows their correlations with IAT-based measures of implicit brand attitudes and brand re­ lationship strength. In Study 2, the contrast between explicit and implicit measures of attitude toward the ad for sportswear advertisements portraying African American (Black) and Euro­ pean American (White) athlete–spokespersons revealed different patterns of responses to ex­ plicit and implicit measures in Black and White respondents. These were explained in terms of self-presentation biases and system justification theory. Overall, the results demonstrate that the IAT enhances our understanding of consumer responses, particularly when consumers are either unable or unwilling to identify the sources of influence on their behaviors or opinions.", "title": "" }, { "docid": "e2a1ff393ad57ebaa9f3631e7910bab6", "text": "We apply principles and techniques of recommendation systems to develop a predictive model of customers’ restaurant ratings. Using Yelp’s dataset, we extract collaborative and content based features to identify customer and restaurant profiles. In particular, we implement singular value decomposition, hybrid cascade of K-nearest neighbor clustering, weighted bi-partite graph projection, and several other learning algorithms. Using Root metrics Mean Squared Error and Mean Absolute Error, we then evaluate and compare the algorithms’ performances.", "title": "" }, { "docid": "b94e096ea1bc990bd7c72aab988dd5ff", "text": "The paper describes the design and implementation of an independent, third party contract monitoring service called Contract Compliance Checker (CCC). The CCC is provided with the specification of the contract in force, and is capable of observing and logging the relevant business-to-business (B2B) interaction events, in order to determine whether the actions of the business partners are consistent with the contract. A contract specification language called EROP (for Events, Rights, Obligations and Prohibitions) for the CCC has been developed based on business rules, that provides constructs to specify what rights, obligation and prohibitions become active and inactive after the occurrence of events related to the execution of business operations. The system has been designed to work with B2B industry standards such as ebXML and RosettaNet.", "title": "" }, { "docid": "8834a4238d05953173cc544d7fa69103", "text": "Providing an accurate, low cost estimate of sub-room indoor positioning remains an active area of research with applications including reactive indoor spaces, dynamic temperature control, wireless health, and augmented reality, to name a few. Recently proposed indoor localization solutions have required anywhere from zero additional infrastructure to customized RF hardware and have provided room-level to centimeter-level accuracy, typically in that respective order. One emerging technology that is proving a pragmatic solution for scalable, accurate localization is that of Bluetooth Low Energy beaconing, spearheaded by Apple's recently introduced iBeacon protocol. 
In this demo, we present a suite of localization tools developed around the iBeacon protocol, providing an in-depth look at Bluetooth Low Energy's viability as an indoor positioning technology. Our system shows an average position estimation error of 0.53 meters.", "title": "" }, { "docid": "c075c26fcfad81865c58a284013c0d33", "text": "A novel pulse compression technique is developed that improves the axial resolution of an ultrasonic imaging system and provides a boost in the echo signal-to-noise ratio (eSNR). The new technique, called the resolution enhancement compression (REC) technique, was validated with simulations and experimental measurements. Image quality was examined in terms of three metrics: the cSNR, the bandwidth, and the axial resolution through the modulation transfer function (MTF). Simulations were conducted with a weakly-focused, single-element ultrasound source with a center frequency of 2.25 MHz. Experimental measurements were carried out with a single-element transducer (f/3) with a center frequency of 2.25 MHz from a planar reflector and wire targets. In simulations, axial resolution of the ultrasonic imaging system was almost doubled using the REC technique (0.29 mm) versus conventional pulsing techniques (0.60 mm). The -3 dB pulse/echo bandwidth was more than doubled from 48% to 97%, and maximum range sidelobes were -40 dB. Experimental measurements revealed an improvement in axial resolution using the REC technique (0.31 mm) versus conventional pulsing (0.44 mm). The -3 dB pulse/echo bandwidth was doubled from 56% to 113%, and maximum range sidelobes were observed at -45 dB. In addition, a significant gain in eSNR (9 to 16.2 dB) was achieved", "title": "" }, { "docid": "9e04e2d09e0b57a6af76ed522ede1154", "text": "The field of surveillance and forensics research is currently shifting focus and is now showing an ever increasing interest in the task of people reidentification. This is the task of assigning the same identifier to all instances of a particular individual captured in a series of images or videos, even after the occurrence of significant gaps over time or space. People reidentification can be a useful tool for people analysis in security as a data association method for long-term tracking in surveillance. However, current identification techniques being utilized present many difficulties and shortcomings. For instance, they rely solely on the exploitation of visual cues such as color, texture, and the object’s shape. Despite the many advances in this field, reidentification is still an open problem. This survey aims to tackle all the issues and challenging aspects of people reidentification while simultaneously describing the previously proposed solutions for the encountered problems. This begins with the first attempts of holistic descriptors and progresses to the more recently adopted 2D and 3D model-based approaches. The survey also includes an exhaustive treatise of all the aspects of people reidentification, including available datasets, evaluation metrics, and benchmarking.", "title": "" }, { "docid": "6b8329ef59c6811705688e48bf6c0c08", "text": "Since the invention of word2vec, the skip-gram model has significantly advanced the research of network embedding, such as the recent emergence of the DeepWalk, LINE, PTE, and node2vec approaches. In this work, we show that all of the aforementioned models with negative sampling can be unified into the matrix factorization framework with closed forms. 
Our analysis and proofs reveal that: (1) DeepWalk empirically produces a low-rank transformation of a network's normalized Laplacian matrix; (2) LINE, in theory, is a special case of DeepWalk when the size of vertices' context is set to one; (3) As an extension of LINE, PTE can be viewed as the joint factorization of multiple networks» Laplacians; (4) node2vec is factorizing a matrix related to the stationary distribution and transition probability tensor of a 2nd-order random walk. We further provide the theoretical connections between skip-gram based network embedding algorithms and the theory of graph Laplacian. Finally, we present the NetMF method as well as its approximation algorithm for computing network embedding. Our method offers significant improvements over DeepWalk and LINE for conventional network mining tasks. This work lays the theoretical foundation for skip-gram based network embedding methods, leading to a better understanding of latent network representation learning.", "title": "" }, { "docid": "c5441c3010dd0169f0b20e383c05e0c9", "text": "The purpose of the present study was to elucidate how plyometric training improves stretch-shortening cycle (SSC) exercise performance in terms of muscle strength, tendon stiffness, and muscle-tendon behavior during SSC exercise. Eleven men were assigned to a training group and ten to a control group. Subjects in the training group performed depth jumps (DJ) using only the ankle joint for 12 weeks. Before and after the period, we observed reaction forces at foot, muscle-tendon behavior of the gastrocnemius, and electromyographic activities of the triceps surae and tibialis anterior during DJ. Maximal static plantar flexion strength and Achilles tendon stiffness were also determined. In the training group, maximal strength remained unchanged while tendon stiffness increased. The force impulse of DJ increased, with a shorter contact time and larger reaction force over the latter half of braking and initial half of propulsion phases. In the latter half of braking phase, the average electromyographic activity (mEMG) increased in the triceps surae and decreased in tibialis anterior, while fascicle behavior of the gastrocnemius remained unchanged. In the initial half of propulsion, mEMG of triceps surae and shortening velocity of gastrocnemius fascicle decreased, while shortening velocity of the tendon increased. These results suggest that the following mechanisms play an important role in improving SSC exercise performance through plyometric training: (1) optimization of muscle-tendon behavior of the agonists, associated with alteration in the neuromuscular activity during SSC exercise and increase in tendon stiffness and (2) decrease in the neuromuscular activity of antagonists during a counter movement.", "title": "" }, { "docid": "6d777bd24d9e869189c388af94384fa1", "text": "OBJECTIVE\nThe aim of this study was to explore the efficacy of insulin-loaded trimethylchitosan nanoparticles on certain destructive effects of diabetes type one.\n\n\nMATERIALS AND METHODS\nTwenty-five male Wistar rats were randomly divided into three control groups (n=5) and two treatment groups (n=5). The control groups included normal diabetic rats without treatment and diabetic rats treated with the nanoparticles. The treatment groups included diabetic rats treated with the insulin-loaded trimethylchitosan nanoparticles and the diabetic rats treated with trade insulin. 
The experiment period was eight weeks and the rats were treated for the last two weeks.\n\n\nRESULT\nThe livers of the rats receiving both forms of insulin showed less severe microvascular steatosis and fatty degeneration, and ameliorated blood glucose, serum biomarkers, and oxidant/antioxidant parameters with no significant differences. The gene expression of pyruvate kinase could be compensated by both the treatment protocols, and the new coated form of insulin could not significantly influence the gene expression of glucokinase (p<0.05). The result of the present study showed the potency of the nanoparticle form of insulin to attenuate hyperglycemia, oxidative stress, and inflammation in diabetes, which indicates the bioavailability of insulin-encapsulated trimethylchitosan nanoparticles.", "title": "" }, { "docid": "c091e5b24dc252949b3df837969e263a", "text": "The emergence of powerful portable computers, along with advances in wireless communication technologies, has made mobile computing a reality. Among the applications that are finding their way to the market of mobile computing, those that involve data management hold a prominent position. In the past few years, there has been a tremendous surge of research in the area of data management in mobile computing. This research has produced interesting results in areas such as data dissemination over limited bandwidth channels, location-dependent querying of data, and advanced interfaces for mobile computers. This paper is an effort to survey these techniques and to classify this research in a few broad areas.", "title": "" }, { "docid": "e86f1f37eac7c2182c5f77c527d8fac6", "text": "Eating members of one's own species is one of the few remaining taboos in modern human societies. In humans, aggression cannibalism has been associated with mental illness. The objective of this report is to examine the unique set of circumstances and characteristics revealing the underlying etiology leading to such an act and the type of psychological effect it has on the perpetrator. A case report of a patient with paranoid schizophrenia who committed patricide and cannibalism is presented. The psychosocial implications of anthropophagy on the particular patient's management are outlined.", "title": "" }, { "docid": "d3e51c3f9ece671cf5e8e1f630c83a8c", "text": "Bayesian (machine) learning has been playing a significant role in machine learning for a long time due to its particular ability to embrace uncertainty, encode prior knowledge, and endow interpretability. On the back of Bayesian learning's great success, Bayesian nonparametric learning (BNL) has emerged as a force for further advances in this field due to its greater modelling flexibility and representation power. Instead of playing with the fixed-dimensional probabilistic distributions of Bayesian learning, BNL creates a new “game” with infinite-dimensional stochastic processes. BNL has long been recognised as a research subject in statistics, and, to date, several state-of-the-art pilot studies have demonstrated that BNL has a great deal of potential to solve real-world machine-learning tasks. However, despite these promising results, BNL has not created a huge wave in the machine-learning community. Esotericism may account for this. The books and surveys on BNL written by statisticians are overcomplicated and filled with tedious theories and proofs. Each is certainly meaningful but may scare away new researchers, especially those with computer science backgrounds.
Hence, the aim of this article is to provide a plain-spoken, yet comprehensive, theoretical survey of BNL in terms that researchers in the machine-learning community can understand. It is hoped this survey will serve as a starting point for understanding and exploiting the benefits of BNL in our current scholarly endeavours. To achieve this goal, we have collated the extant studies in this field and aligned them with the steps of a standard BNL procedure—from selecting the appropriate stochastic processes through manipulation to executing the model inference algorithms. At each step, past efforts have been thoroughly summarised and discussed. In addition, we have reviewed the common methods for implementing BNL in various machine-learning tasks along with its diverse applications in the real world as examples to motivate future studies.", "title": "" }, { "docid": "80f9f3f12e33807e63ee5ba58916d41c", "text": "Positivist and interpretivist researchers have different views on how their research outcomes may be evaluated. The issues of validity, reliability and generalisability, used in evaluating positivist studies, are regarded of relatively little significance by many qualitative researchers for judging the merits of their interpretive investigations. In confirming the research, those three canons need at least to be re-conceptualised in order to reflect the keys issues of concern for interpretivists. Some interpretivists address alternative issues such as credibility, dependability and transferability when determining the trustworthiness of their qualitative investigations. A strategy proposed by several authors for establishing the trustworthiness of the qualitative inquiry is the development of a research audit trail. The audit trail enables readers to trace through a researcher’s logic and determine whether the study’s findings may be relied upon as a platform for further enquiry. While recommended in theory, this strategy is rarely implemented in practice. This paper examines the role of the research audit trail in improving the trustworthiness of qualitative research. Further, it documents the development of an audit trail for an empirical qualitative research study that centred on an interpretive evaluation of a new Information and Communication Technology (ICT) student administrative system in the tertiary education sector in the Republic of Ireland. This research study examined the impact of system introduction across five Institutes of Technology (IoTs) through case study research that incorporated multiple evidence sources. The evidence collected was analysed using a grounded theory method, which was supported by qualitative data analysis software. The key concepts and categories that emerged from this process were synthesized into a cross case primary narrative; through reflection the primary narrative was reduced to a higher order narrative that presented the principle findings or key research themes. From this higher order narrative a theoretical conjecture was distilled. Both a physical and intellectual audit trail for this study are presented in this paper. The physical audit trail documents all keys stages of a research study and reflects the key research methodology decisions. The intellectual audit trail, on the other hand, outlines how a researcher’s thinking evolved throughout all phases of the study. Hence, these audit trails make transparent the key decisions taken throughout the research process. 
The paper concludes by discussing the value of this audit trail process in confirming a qualitative study's findings.", "title": "" }, { "docid": "77e61d56d297b62e1078542fd74ffe5e", "text": "This paper introduces a complete design method to construct an adaptive fuzzy logic controller (AFLC) for a DC–DC converter. In a conventional fuzzy logic controller (FLC), knowledge on the system supplied by an expert is required for developing membership functions (parameters) and control rules. The proposed AFLC, on the other hand, does not require an expert for making parameters and control rules. Instead, parameters and rules are generated using a model data file, which contains a summary of input–output pairs. The FLC uses Mamdani-type fuzzy logic controllers for the defuzzification strategy and inference operators. The proposed controller is designed and verified by digital computer simulation and then implemented for buck, boost and buck–boost converters by using an 8-bit microcontroller.", "title": "" }, { "docid": "7ac1249e901e558443bc8751b11c9427", "text": "Despite the growing popularity of leasing as an alternative to purchasing a vehicle, there is very little research on how consumers choose among various leasing and financing (namely buying) contracts and how this choice affects the brand they choose. In this paper, therefore, we develop a structural model of the consumer's choice of automobile brand and the related decision of whether to lease or buy it. We conceptualize the leasing and buying of the same vehicle as two different goods, each with its own costs and benefits. The differences between the two types of contracts are summarized along three dimensions: (i) the \"net price\" or financial cost of the contract, (ii) maintenance and repair costs and (iii) operating costs, which depend on the consumer's driving behavior. Based on consumer utility maximization, we derive a nested logit of brand and contract choice that captures the tradeoffs among all three costs. The model is estimated on a dataset of new car purchases from the near luxury segment of the automobile market. The optimal choice of brand and contract is determined by the consumer's implicit interest rate and the number of miles she expects to drive, both of which are estimated as parameters of the model. The empirical results yield several interesting findings. We find that (i) cars that deteriorate faster are more likely to be leased than bought, (ii) the estimated implicit interest rate is higher than the market rate, which implies that consumers do not make efficient tradeoffs between the net price and operating costs and may often incorrectly choose to lease and (iii) the estimate of the annual expected mileage indicates that most consumers would incur substantial penalties if they lease, which explains why buying or financing continues to be more popular than leasing. This research also provides several interesting managerial insights into the effectiveness of various promotional instruments. We examine this issue by looking at (i) sales response to a promotion, (ii) the ability of the promotion to draw sales from other brands and (iii) its overall profitability. We find, for example, that although the sales response to a cash rebate on a lease is greater than an equivalent increase in the residual value, under certain conditions and for certain brands, a residual value promotion yields higher profits.
These findings are of particular value to manufacturers in the prevailing competitive environment, which is marked by the extensive use of large rebates and 0% APR offers.", "title": "" }, { "docid": "187fcbf0a52de7dd7de30f8846b34e1e", "text": "Goal-oriented dialogue systems typically rely on components specifically developed for a single task or domain. This limits such systems in two different ways: If there is an update in the task domain, the dialogue system usually needs to be updated or completely re-trained. It is also harder to extend such dialogue systems to different and multiple domains. The dialogue state tracker in conventional dialogue systems is one such component — it is usually designed to fit a well-defined application domain. For example, it is common for a state variable to be a categorical distribution over a manually-predefined set of entities (Henderson et al., 2013), resulting in an inflexible and hard-to-extend dialogue system. In this paper, we propose a new approach for dialogue state tracking that can generalize well over multiple domains without incorporating any domain-specific knowledge. Under this framework, discrete dialogue state variables are learned independently and the information of a predefined set of possible values for dialogue state variables is not required. Furthermore, it enables adding arbitrary dialogue context as features and allows for multiple values to be associated with a single state variable. These characteristics make it much easier to expand the dialogue state space. We evaluate our framework using the widely used dialogue state tracking challenge data set (DSTC2) and show that our framework yields competitive results with other state-of-the-art results despite incorporating little domain knowledge. We also show that this framework can benefit from widely available external resources such as pre-trained word embeddings.", "title": "" }, { "docid": "0c529c9a9f552f89e0c0ad3e000cbd37", "text": "In this article, I introduce an emotion paradox: People believe that they know an emotion when they see it, and as a consequence assume that emotions are discrete events that can be recognized with some degree of accuracy, but scientists have yet to produce a set of clear and consistent criteria for indicating when an emotion is present and when it is not. I propose one solution to this paradox: People experience an emotion when they conceptualize an instance of affective feeling. In this view, the experience of emotion is an act of categorization, guided by embodied knowledge about emotion. The result is a model of emotion experience that has much in common with the social psychological literature on person perception and with literature on embodied conceptual knowledge as it has recently been applied to social psychology.", "title": "" }, { "docid": "9a10716e1d7e24b790fb5dd48ad254ab", "text": "Probabilistic models based on Bayes' rule are an increasingly popular approach to understanding human cognition. Bayesian models allow immense representational latitude and complexity. Because they use normative Bayesian mathematics to process those representations, they define optimal performance on a given task. This article focuses on key mechanisms of Bayesian information processing, and provides numerous examples illustrating Bayesian approaches to the study of human cognition. We start by providing an overview of Bayesian modeling and Bayesian networks.
We then describe three types of information processing operations-inference, parameter learning, and structure learning-in both Bayesian networks and human cognition. This is followed by a discussion of the important roles of prior knowledge and of active learning. We conclude by outlining some challenges for Bayesian models of human cognition that will need to be addressed by future research. WIREs Cogn Sci 2011 2 8-21 DOI: 10.1002/wcs.80 For further resources related to this article, please visit the WIREs website.", "title": "" }, { "docid": "8e80d8be3b8ccbc4b8b6b6a0dde4136f", "text": "When an event occurs, it attracts attention of information sources to publish related documents along its lifespan. The task of event detection is to automatically identify events and their related documents from a document stream, which is a set of chronologically ordered documents collected from various information sources. Generally, each event has a distinct activeness development so that its status changes continuously during its lifespan. When an event is active, there are a lot of related documents from various information sources. In contrast when it is inactive, there are very few documents, but they are focused. Previous works on event detection did not consider the characteristics of the event's activeness, and used rigid thresholds for event detection. We propose a concept called life profile, modeled by a hidden Markov model, to model the activeness trends of events. In addition, a general event detection framework, LIPED, which utilizes the learned life profiles and the burst-and-diverse characteristic to adjust the event detection thresholds adaptively, can be incorporated into existing event detection methods. Based on the official TDT corpus and contest rules, the evaluation results show that existing detection methods that incorporate LIPED achieve better performance in the cost and F1 metrics, than without.", "title": "" }, { "docid": "670028909831162ad026f24df5a32d1f", "text": "In case of volcanic eruption, a robotic volcano exploration for observing restricted areas is expected to judge the evacuation call for inhabitants. An unmanned ground vehicle (UGV) is one possibility to apply to such exploration missions. When a UGV traverses on volcanic fields, a slippage between the vehicle and the terrain occurs. This is because the volcanic environment is covered with loose soil and rocks, and there are many slopes. The slippage causes several problems for UGVs, particularly localization and terrainability. Therefore, in this research, we propose a slip estimation method based on a slip model to apply to slip-compensated odometry for tracked vehicles. First, we propose a slip model for tracked vehicles based on the force acting on a robot on a slope. The proposed slip model has two parameters: a pitch angle dependence and a constant component, and these parameters were identified by indoor slope-traveling experiments. Next, we propose a slip parameter estimation method using a particle filter technique with a velocity measurement sensor, and report on the effectiveness of our method by slope-traveling experiments. The experimental result shows that the accuracy of our position estimation method based on the slip-compensated odometry is improved in comparison with conventional methods by using the slip parameters.", "title": "" } ]
scidocsrr
0ce205f7cf837636edeb38b59a4c0ab4
Learning Deep Representations of Fine-Grained Visual Descriptions
[ { "docid": "a2f91e55b5096b86f6fa92e701c62898", "text": "The main question we address in this paper is how to use purely textual description of categories with no training images to learn visual classifiers for these categories. We propose an approach for zero-shot learning of object categories where the description of unseen categories comes in the form of typical text such as an encyclopedia entry, without the need to explicitly defined attributes. We propose and investigate two baseline formulations, based on regression and domain adaptation. Then, we propose a new constrained optimization formulation that combines a regression function and a knowledge transfer function with additional constraints to predict the classifier parameters for new classes. We applied the proposed approach on two fine-grained categorization datasets, and the results indicate successful classifier prediction.", "title": "" } ]
[ { "docid": "930787311add9bede553c6f52f420fb9", "text": "Prior research has demonstrated that interpersonal trust is critical to knowledge transfer in organizational settings. Yet there has been only limited systematic empirical work examining factors that promote a knowledge seeker’s trust in a knowledge source. We propose three categories of variables that affect interpersonal trust in this context: attributes of the relationship between the knowledge seeker and source; attributes of the knowledge source; and attributes of the knowledge seeker. We analyzed these multilevel data simultaneously with hierarchical linear modeling (HLM) using survey data from three companies in different industries and countries. We found that (1) variables in all three categories were statistically significant, with the biggest effect coming from more malleable features such as the cognitive dimension of social capital (i.e., shared vision and shared language), and little or no effect from more stable and visible features such as formal structure and demographic similarity; (2) benevolence-based trust was easier to predict than competence-based trust, both in terms of the number of significant predictors and the variance accounted for; and (3) knowledge seekers’ reliance on knowledgesource behaviors in determining how much to trust a source’s competence—the so-called “clues for competence”—were relied on even more heavily by knowledge seekers with more division tenure, suggesting that certain attitudes in the trust realm may solidify over time.", "title": "" }, { "docid": "aa88b71c68ed757faf9eb896a81003f5", "text": "Purpose The present study evaluated the platelet distribution pattern and growth factor release (VEGF, TGF-β1 and EGF) within three PRF (platelet-rich-fibrin) matrices (PRF, A-PRF and A-PRF+) that were prepared using different relative centrifugation forces (RCF) and centrifugation times. Materials and methods immunohistochemistry was conducted to assess the platelet distribution pattern within three PRF matrices. The growth factor release was measured over 10 days using ELISA. Results The VEGF protein content showed the highest release on day 7; A-PRF+ showed a significantly higher rate than A-PRF and PRF. The accumulated release on day 10 was significantly higher in A-PRF+ compared with A-PRF and PRF. TGF-β1 release in A-PRF and A-PRF+ showed significantly higher values on days 7 and 10 compared with PRF. EGF release revealed a maximum at 24 h in all groups. Toward the end of the study, A-PRF+ demonstrated significantly higher EGF release than PRF. The accumulated growth factor releases of TGF-β1 and EGF on day 10 were significantly higher in A-PRF+ and A-PRF than in PRF. Moreover, platelets were located homogenously throughout the matrix in the A-PRF and A-PRF+ groups, whereas platelets in PRF were primarily observed within the lower portion. ​Discussion the present results show an increase growthfactor release by decreased RCF. However, further studies must be conducted to examine the extent to which enhancing the amount and the rate of released growth factors influence wound healing and biomaterial-based tissue regeneration. 
​Conclusion These outcomes accentuate the fact that with a reduction of RCF according to the previously LSCC (described low speed centrifugation concept), growth factor release can be increased in leukocytes and platelets within the solid PRF matrices.", "title": "" }, { "docid": "35a0a4cdba6fbab9f02bf4e50aace306", "text": "This paper analyzes task assignment for heterogeneous air vehicles using a guaranteed conflict-free assignment algorithm, the Consensus Based Bundle Algorithm (CBBA). We extend this recently proposed algorithm to handle two realistic multiUAV operational complications. Our first extension accounts for obstacle regions in order to generate collision free paths for UAVs. Our second extension reduces task planner sensitivity to sensor measurement noise, and thereby minimizes churning behavior in flight paths. After integrating our enhanced CBBA module with a 3D visualization and interaction software tool, we simulate multiple aircraft servicing stationary and moving ground targets. Preliminary simulation results establish that consistent, conflict-free multi-UAV path assignments can be calculated on the order of a few seconds. The enhanced CBBA consequently demonstrates significant potential for real-time performance in stressing environments.", "title": "" }, { "docid": "443191f41aba37614c895ba3533f80ed", "text": "De novo engineering of gene circuits inside cells is extremely difficult, and efforts to realize predictable and robust performance must deal with noise in gene expression and variation in phenotypes between cells. Here we demonstrate that by coupling gene expression to cell survival and death using cell–cell communication, we can programme the dynamics of a population despite variability in the behaviour of individual cells. Specifically, we have built and characterized a ‘population control’ circuit that autonomously regulates the density of an Escherichia coli population. The cell density is broadcasted and detected by elements from a bacterial quorum-sensing system, which in turn regulate the death rate. As predicted by a simple mathematical model, the circuit can set a stable steady state in terms of cell density and gene expression that is easily tunable by varying the stability of the cell–cell communication signal. This circuit incorporates a mechanism for programmed death in response to changes in the environment, and allows us to probe the design principles of its more complex natural counterparts.", "title": "" }, { "docid": "8c636402670a00e993efc66f419540f6", "text": "Valiant (1984) and others have studied the problem of learning various classes of Boolean functions from examples. Here we discuss incremental learning of these functions. We consider a setting in which the learner responds to each example according to a current hypothesis. Then the learner updates the hypothesis, if necessary, based on the correct classification of the example. One natural measure of the quality of learning in this setting is the number of mistakes the learner makes. For suitable classes of functions, learning algorithms are available that make a bounded number of mistakes, with the bound independent of the number of examples seen by the learner. We present one such algorithm that learns disjunctive Boolean functions, along with variants for learning other classes of Boolean functions. The basic method can be expressed as a linear-threshold algorithm. 
A primary advantage of this algorithm is that the number of mistakes grows only logarithmically with the number of irrelevant attributes in the examples. At the same time, the algorithm is computationally efficient in both time and space.", "title": "" }, { "docid": "4adfa3026fbfceca68a02ee811d8a302", "text": "Designing a new domain specific language is as any other complex task sometimes error-prone and usually time consuming, especially if the language shall be of high-quality and comfortably usable. Existing tool support focuses on the simplification of technical aspects but lacks support for an enforcement of principles for a good language design. In this paper we investigate guidelines that are useful for designing domain specific languages, largely based on our experience in developing languages as well as relying on existing guidelines on general purpose (GPLs) and modeling languages. We defined guidelines to support a DSL developer to achieve better quality of the language design and a better acceptance among its users.", "title": "" }, { "docid": "a8fb6ca739d0d1e75b8b94302f2139a2", "text": "OBJECTIVE\nTo assess the conditions under which employing an overview of systematic reviews is likely to lead to a high risk of bias.\n\n\nSTUDY DESIGN\nTo synthesise existing guidance concerning overview practice, a scoping review was conducted. Four electronic databases were searched with a pre-specified strategy (PROSPERO 2015:CRD42015027592) ending October 2015. Included studies needed to describe or develop overview methodology. Data were narratively synthesised to delineate areas highlighted as outstanding challenges or where methodological recommendations conflict.\n\n\nRESULTS\nTwenty-four papers met the inclusion criteria. There is emerging debate regarding overlapping systematic reviews; systematic review scope; quality of included research; updating; and synthesizing and reporting results. While three functions for overviews have been proposed-identify gaps, explore heterogeneity, summarize evidence-overviews cannot perform the first; are unlikely to achieve the second and third simultaneously; and can only perform the third under specific circumstances. Namely, when identified systematic reviews meet the following four conditions: (1) include primary trials that do not substantially overlap, (2) match overview scope, (3) are of high methodological quality, and (4) are up-to-date.\n\n\nCONCLUSION\nConsidering the intended function of proposed overviews with the corresponding methodological conditions may improve the quality of this burgeoning publication type. Copyright © 2017 John Wiley & Sons, Ltd.", "title": "" }, { "docid": "4219836dc38e96a142e3b73cdf87e234", "text": "BACKGROUND\nNIATx200, a quality improvement collaborative, involved 201 substance abuse clinics. Each clinic was randomized to one of four implementation strategies: (a) interest circle calls, (b) learning sessions, (c) coach only or (d) a combination of all three. Each strategy was led by NIATx200 coaches who provided direct coaching or facilitated the interest circle and learning session interventions.\n\n\nMETHODS\nEligibility was limited to NIATx200 coaches (N = 18), and the executive sponsor/change leader of participating clinics (N = 389). Participants were invited to complete a modified Grasha Riechmann Student Learning Style Survey and Teaching Style Inventory. 
Principal components analysis determined participants' preferred learning and teaching styles.\n\n\nRESULTS\nResponses were received from 17 (94.4 %) of the coaches. Seventy-two individuals were excluded from the initial sample of change leaders and executive sponsors (N = 389). Responses were received from 80 persons (25.2 %) of the contactable individuals. Six learning profiles for the executive sponsors and change leaders were identified: Collaborative/Competitive (N = 28, 36.4 %); Collaborative/Participatory (N = 19, 24.7 %); Collaborative only (N = 17, 22.1 %); Collaborative/Dependent (N = 6, 7.8 %); Independent (N = 3, 5.2 %); and Avoidant/Dependent (N = 3, 3.9 %). NIATx200 coaches relied primarily on one of four coaching profiles: Facilitator (N = 7, 41.2 %), Facilitator/Delegator (N = 6, 35.3 %), Facilitator/Personal Model (N = 3, 17.6 %) and Delegator (N = 1, 5.9 %). Coaches also supported their primary coaching profiles with one of eight different secondary coaching profiles.\n\n\nCONCLUSIONS\nThe study is one of the first to assess teaching and learning styles within a QIC. Results indicate that individual learners (change leaders and executive sponsors) and coaches utilize multiple approaches in the teaching and practice-based learning of quality improvement (QI) processes. Identification teaching profiles could be used to tailor the collaborative structure and content delivery. Efforts to accommodate learning styles would facilitate knowledge acquisition enhancing the effectiveness of a QI collaborative to improve organizational processes and outcomes.\n\n\nTRIAL REGISTRATION\nClinicalTrials.gov Identifier: NCT00934141 Registered July 6, 2009. Retrospectively registered.", "title": "" }, { "docid": "de024671f84d853ac3bb7735a4497f1f", "text": "Neural networks for natural language reasoning have largely focused on extractive, fact-based question-answering (QA) and common-sense inference. However, it is also crucial to understand the extent to which neural networks can perform relational reasoning and combinatorial generalization from natural language—abilities that are often obscured by annotation artifacts and the dominance of language modeling in standard QA benchmarks. In this work, we present a novel benchmark dataset for language understanding that isolates performance on relational reasoning. We also present a neural message-passing baseline and show that this model, which incorporates a relational inductive bias, is superior at combinatorial generalization compared to a traditional recurrent neural network approach.", "title": "" }, { "docid": "6537921976c2779d1e7d921c939ec64d", "text": "Stencil computation sweeps over a spatial grid over multiple time steps to perform nearest-neighbor computations. The bandwidth-to-compute requirement for a large class of stencil kernels is very high, and their performance is bound by the available memory bandwidth. Since memory bandwidth grows slower than compute, the performance of stencil kernels will not scale with increasing compute density. We present a novel 3.5D-blocking algorithm that performs 2.5D-spatial and temporal blocking of the input grid into on-chip memory for both CPUs and GPUs. The resultant algorithm is amenable to both thread- level and data-level parallelism, and scales near-linearly with the SIMD width and multiple-cores. Our performance numbers are faster or comparable to state-of-the-art-stencil implementations on CPUs and GPUs. 
Our implementation of 7-point-stencil is 1.5X-faster on CPUs, and 1.8X faster on GPUs for single- precision floating point inputs than previously reported numbers. For Lattice Boltzmann methods, the corresponding speedup number on CPUs is 2.1X.", "title": "" }, { "docid": "d9605c1cde4c40d69c2faaea15eb466c", "text": "A magnetically tunable ferrite-loaded substrate integrated waveguide (SIW) cavity resonator is presented and demonstrated. X-band cavity resonator is operated in the dominant mode and the ferrite slabs are loaded onto the side walls of the cavity where the value of magnetic field is highest. Measured results for single and double ferrite-loaded SIW cavity resonators are presented. Frequency tuning range of more than 6% and 10% for single and double ferrite slabs are obtained. Unloaded Q -factor of more than 200 is achieved.", "title": "" }, { "docid": "5f4b0c833e7a542eb294fa2d7a305a16", "text": "Security awareness is an often-overlooked factor in an information security program. While organizations expand their use of advanced security technology and continuously train their security professionals, very little is used to increase the security awareness among the normal users, making them the weakest link in any organization. As a result, today, organized cyber criminals are putting significant efforts to research and develop advanced hacking methods that can be used to steal money and information from the general public. Furthermore, the high internet penetration growth rate in the Middle East and the limited security awareness among users is making it an attractive target for cyber criminals. In this paper, we will show the need for security awareness programs in schools, universities, governments, and private organizations in the Middle East by presenting results of several security awareness studies conducted among students and professionals in UAE in 2010. This includes a comprehensive wireless security survey in which thousands of access points were detected in Dubai and Sharjah most of which are either unprotected or employ weak types of protection. Another study focuses on evaluating the chances of general users to fall victims to phishing attacks which can be used to steal bank and personal information. Furthermore, a study of the user’s awareness of privacy issues when using RFID technology is presented. Finally, we discuss several key factors that are necessary to develop a successful information security awareness program.", "title": "" }, { "docid": "565a6f620f9ccd33b6faa5a7f37df188", "text": "Fog computing (FC) and Internet of Everything (IoE) are two emerging technological paradigms that, to date, have been considered standing-alone. However, because of their complementary features, we expect that their integration can foster a number of computing and network-intensive pervasive applications under the incoming realm of the future Internet. Motivated by this consideration, the goal of this position paper is fivefold. First, we review the technological attributes and platforms proposed in the current literature for the standing-alone FC and IoE paradigms. Second, by leveraging some use cases as illustrative examples, we point out that the integration of the FC and IoE paradigms may give rise to opportunities for new applications in the realms of the IoE, Smart City, Industry 4.0, and Big Data Streaming, while introducing new open issues. 
Third, we propose a novel technological paradigm, the Fog of Everything (FoE) paradigm, that integrates FC and IoE and then we detail the main building blocks and services of the corresponding technological platform and protocol stack. Fourth, as a proof-of-concept, we present the simulated energy-delay performance of a small-scale FoE prototype, namely, the V-FoE prototype. Afterward, we compare the obtained performance with the corresponding one of a benchmark technological platform, e.g., the V-D2D one. It exploits only device-to-device links to establish inter-thing “ad hoc” communication. Last, we point out the position of the proposed FoE paradigm over a spectrum of seemingly related recent research projects.", "title": "" }, { "docid": "84ae9f9f1dd10a8910ff99d1dd4ec227", "text": "With the advent of powerful ranging and visual sensors, nowadays, it is convenient to collect sparse 3-D point clouds and aligned high-resolution images. Benefitted from such convenience, this letter proposes a joint method to perform both depth assisted object-level image segmentation and image guided depth upsampling. To this end, we formulate these two tasks together as a bi-task labeling problem, defined in a Markov random field. An alternating direction method (ADM) is adopted for the joint inference, solving each sub-problem alternatively. More specifically, the sub-problem of image segmentation is solved by Graph Cuts, which attains discrete object labels efficiently. Depth upsampling is addressed via solving a linear system that recovers continuous depth values. By this joint scheme, robust object segmentation results and high-quality dense depth maps are achieved. The proposed method is applied to the challenging KITTI vision benchmark suite, as well as the Leuven dataset for validation. Comparative experiments show that our method outperforms stand-alone approaches.", "title": "" }, { "docid": "0e98010ded0712ab0e2f78af0a476c86", "text": "This paper presents a system that uses symbolic representations of audio concepts as words for the descriptions of audio tracks, that enable it to go beyond the state of the art, which is audio event classification of a small number of audio classes in constrained settings, to large-scale classification in the wild. These audio words might be less meaningful for an annotator but they are descriptive for computer algorithms. We devise a random-forest vocabulary learning method with an audio word weighting scheme based on TF-IDF and TD-IDD, so as to combine the computational simplicity and accurate multi-class classification of the random forest with the data-driven discriminative power of the TF-IDF/TD-IDD methods. The proposed random forest clustering with text-retrieval methods significantly outperforms two state-of-the-art methods on the dry-run set and the full set of the TRECVID MED 2010 dataset.", "title": "" }, { "docid": "4ed47f48df37717148d985ad927b813f", "text": "Given an incorrect value produced during a failed program run (e.g., a wrong output value or a value that causes the program to crash), the backward dynamic slice of the value very frequently captures the faulty code responsible for producing the incorrect value. 
Although the dynamic slice often contains only a small percentage of the statements executed during the failed program run, the dynamic slice can still be large and thus considerable effort may be required by the programmer to locate the faulty code.In this paper we develop a strategy for pruning the dynamic slice to identify a subset of statements in the dynamic slice that are likely responsible for producing the incorrect value. We observe that some of the statements used in computing the incorrect value may also have been involved in computing correct values (e.g., a value produced by a statement in the dynamic slice of the incorrect value may also have been used in computing a correct output value prior to the incorrect value). For each such executed statement in the dynamic slice, using the value profiles of the executed statements, we compute a confidence value ranging from 0 to 1 - a higher confidence value corresponds to greater likelihood that the execution of the statement produced a correct value. Given a failed run involving execution of a single error, we demonstrate that the pruning of a dynamic slice by excluding only the statements with the confidence value of 1 is highly effective in reducing the size of the dynamic slice while retaining the faulty code in the slice. Our experiments show that the number of distinct statements in a pruned dynamic slice are 1.79 to 190.57 times less than the full dynamic slice. Confidence values also prioritize the statements in the dynamic slice according to the likelihood of them being faulty. We show that examining the statements in the order of increasing confidence values is an effective strategy for reducing the effort of fault location.", "title": "" }, { "docid": "97fa48d92c4a1b9d2bab250d5383173c", "text": "This paper presents a new type of axial flux motor, the yokeless and segmented armature (YASA) topology. The YASA motor has no stator yoke, a high fill factor and short end windings which all increase torque density and efficiency of the machine. Thus, the topology is highly suited for high performance applications. The LIFEcar project is aimed at producing the world's first hydrogen sports car, and the first YASA motors have been developed specifically for the vehicle. The stator segments have been made using powdered iron material which enables the machine to be run up to 300 Hz. The iron in the stator of the YASA motor is dramatically reduced when compared to other axial flux motors, typically by 50%, causing an overall increase in torque density of around 20%. A detailed Finite Element analysis (FEA) analysis of the YASA machine is presented and it is shown that the motor has a peak efficiency of over 95%.", "title": "" }, { "docid": "7c75c77802045cfd8d89c73ca8a68ce6", "text": "The results of the 2016 Brexit referendum in the U.K. and presidential election in the U.S. surprised pollsters and traditional media alike, and social media is now being blamed in part for creating echo chambers that encouraged the spread of fake news that influenced voters.", "title": "" }, { "docid": "9081cb169f74b90672f84afa526f40b3", "text": "The paper presents an analysis of the main mechanisms of decryption of SSL/TLS traffic. Methods and technologies for detecting malicious activity in encrypted traffic that are used by leading companies are also considered. Also, the approach for intercepting and decrypting traffic transmitted over SSL/TLS is developed, tested and proposed. 
The developed approach has been automated and can be used for remote listening of the network, which allows transmitted data to be decrypted in a mode close to real time.", "title": "" }, { "docid": "6f8e441738a0c045a83f0e1efd4e0bbd", "text": "Irony and humour are just two of many forms of figurative language. Approaches to identifying humorous or ironic statements in vast volumes of data, such as the internet, are important not only from a theoretical viewpoint but also for their potential applicability in social networks or human-computer interactive systems. In this study we investigate the automatic detection of irony and humour in social networks such as Twitter, casting it as a classification problem. We propose a rich set of features for text interpretation and representation to train classification procedures. In cross-domain classification experiments our model achieves and improves on the state of the art", "title": "" } ]
scidocsrr
a51b226da1008a52c9ad1870f0497e60
UiLog: Improving Log-Based Fault Diagnosis by Log Analysis
[ { "docid": "4dc9360837b5793a7c322f5b549fdeb1", "text": "Today, event logs contain vast amounts of data that can easily overwhelm a human. Therefore, mining patterns from event logs is an important system management task. This paper presents a novel clustering algorithm for log file data sets which helps one to detect frequent patterns from log files, to build log file profiles, and to identify anomalous log file lines. Keywords—system monitoring, data mining, data clustering", "title": "" } ]
[ { "docid": "f333bc03686cf85aee0a65d4a81e8b34", "text": "A large portion of data mining and analytic services use modern machine learning techniques, such as deep learning. The state-of-the-art results by deep learning come at the price of an intensive use of computing resources. The leading frameworks (e.g., TensorFlow) are executed on GPUs or on high-end servers in datacenters. On the other end, there is a proliferation of personal devices with possibly free CPU cycles; this can enable services to run in users' homes, embedding machine learning operations. In this paper, we ask the following question: Is distributed deep learning computation on WAN connected devices feasible, in spite of the traffic caused by learning tasks? We show that such a setup rises some important challenges, most notably the ingress traffic that the servers hosting the up-to-date model have to sustain. In order to reduce this stress, we propose AdaComp, a novel algorithm for compressing worker updates to the model on the server. Applicable to stochastic gradient descent based approaches, it combines efficient gradient selection and learning rate modulation. We then experiment and measure the impact of compression, device heterogeneity and reliability on the accuracy of learned models, with an emulator platform that embeds TensorFlow into Linux containers. We report a reduction of the total amount of data sent by workers to the server by two order of magnitude (e.g., 191-fold reduction for a convolutional network on the MNIST dataset), when compared to a standard asynchronous stochastic gradient descent, while preserving model accuracy.", "title": "" }, { "docid": "5e0898aa58d092a1f3d64b37af8cf838", "text": "In this paper, we design a Deep Dual-Domain (D3) based fast restoration model to remove artifacts of JPEG compressed images. It leverages the large learning capacity of deep networks, as well as the problem-specific expertise that was hardly incorporated in the past design of deep architectures. For the latter, we take into consideration both the prior knowledge of the JPEG compression scheme, and the successful practice of the sparsity-based dual-domain approach. We further design the One-Step Sparse Inference (1-SI) module, as an efficient and lightweighted feed-forward approximation of sparse coding. Extensive experiments verify the superiority of the proposed D3 model over several state-of-the-art methods. Specifically, our best model is capable of outperforming the latest deep model for around 1 dB in PSNR, and is 30 times faster.", "title": "" }, { "docid": "645faf32f40732d291e604d7240f0546", "text": "Fault Diagnostics and Prognostics has been an increasing interest in recent years, as a result of the increased degree of automation and the growing demand for higher performance, efficiency, reliability and safety in industrial systems. On-line fault detection and isolation methods have been developed for automated processes. These methods include data mining methodologies, artificial intelligence methodologies or combinations of the two. Data Mining is the statistical approach of extracting knowledge from data. Artificial Intelligence is the science and engineering of making intelligent machines, especially intelligent computer programs. Activities in AI include searching, recognizing patterns and making logical inferences. 
This paper focuses on the various techniques used for Fault Diagnostics and Prognostics in Industry application domains.", "title": "" }, { "docid": "d4a4c4a1d933488ab686097e18b4373a", "text": "Psychological stress is an important factor for the development of irritable bowel syndrome (IBS). More and more clinical and experimental evidence showed that IBS is a combination of irritable bowel and irritable brain. In the present review we discuss the potential role of psychological stress in the pathogenesis of IBS and provide comprehensive approaches in clinical treatment. Evidence from clinical and experimental studies showed that psychological stresses have marked impact on intestinal sensitivity, motility, secretion and permeability, and the underlying mechanism has a close correlation with mucosal immune activation, alterations in central nervous system, peripheral neurons and gastrointestinal microbiota. Stress-induced alterations in neuro-endocrine-immune pathways acts on the gut-brain axis and microbiota-gut-brain axis, and cause symptom flare-ups or exaggeration in IBS. IBS is a stress-sensitive disorder, therefore, the treatment of IBS should focus on managing stress and stress-induced responses. Now, non-pharmacological approaches and pharmacological strategies that target on stress-related alterations, such as antidepressants, antipsychotics, miscellaneous agents, 5-HT synthesis inhibitors, selective 5-HT reuptake inhibitors, and specific 5-HT receptor antagonists or agonists have shown a critical role in IBS management. A integrative approach for IBS management is a necessary.", "title": "" }, { "docid": "5cd6debed0333d480aeafe406f526d2b", "text": "In the coming advanced age society, an innovative technology to assist the activities of daily living of elderly and disabled people and the heavy work in nursing is desired. To develop such a technology, an actuator safe and friendly for human is required. It should be small, lightweight and has to provide a proper softness. A pneumatic rubber artificial muscle is available as such actuators. We have developed some types of pneumatic rubber artificial muscles and applied them to wearable power assist devices. A wearable power assist device is equipped to the human body to assist the muscular force, which supports activities of daily living, rehabilitation, heavy working, training and so on. In this paper, some types of pneumatic rubber artificial muscles developed in our laboratory are introduced. Further, two kinds of wearable power assist devices driven with the rubber artificial muscles are described. Some evaluations can clarify the effectiveness of pneumatic rubber artificial muscle for such an innovative human assist technology.", "title": "" }, { "docid": "79cdd24d14816f45b539f31606a3d5ee", "text": "The huge increase in type 2 diabetes is a burden worldwide. Many marketed compounds do not address relevant aspects of the disease; they may already compensate for defects in insulin secretion and insulin action, but loss of secreting cells (β-cell destruction), hyperglucagonemia, gastric emptying, enzyme activation/inhibition in insulin-sensitive cells, substitution or antagonizing of physiological hormones and pathways, finally leading to secondary complications of diabetes, are not sufficiently addressed. In addition, side effects for established therapies such as hypoglycemias and weight gain have to be diminished. 
At present, nearly 1000 compounds have been described, and approximately 180 of these are going to be developed (already in clinical studies), some of them directly influencing enzyme activity, influencing pathophysiological pathways, and some using G-protein-coupled receptors. In addition, immunological approaches and antisense strategies are going to be developed. Many compounds are derived from physiological compounds (hormones) aiming at improving their kinetics and selectivity, and others are chemical compounds that were obtained by screening for a newly identified target in the physiological or pathophysiological machinery. In some areas, great progress is observed (e.g., incretin area); in others, no great progress is obvious (e.g., glucokinase activators), and other areas are not recommended for further research. For all scientific areas, conclusions with respect to their impact on diabetes are given. Potential targets for which no chemical compound has yet been identified as a ligand (agonist or antagonist) are also described.", "title": "" }, { "docid": "da694b74b3eaae46d15f589e1abef4b8", "text": "Impaired water quality caused by human activity and the spread of invasive plant and animal species has been identified as a major factor of degradation of coastal ecosystems in the tropics. The main goal of this study was to evaluate the performance of AnnAGNPS (Annualized NonPoint Source Pollution Model) in simulating runoff and soil erosion in a 48 km² watershed located on the Island of Kauai, Hawaii. The model was calibrated and validated using 2 years of observed stream flow and sediment load data. Alternative scenarios of spatial rainfall distribution and canopy interception were evaluated. Monthly runoff volumes predicted by AnnAGNPS compared well with the measured data (R = 0.90, P < 0.05); however, up to 60% difference between the actual and simulated runoff was observed during the driest months (May and July). Prediction of daily runoff was less accurate (R = 0.55, P < 0.05). Predicted and observed sediment yield on a daily basis was poorly correlated (R = 0.5, P < 0.05). For the events of small magnitude, the model generally overestimated sediment yield, while the opposite was true for larger events. Total monthly sediment yield varied within 50% of the observed values, except for May 2004. Among the input parameters the model was most sensitive to the values of ground residue cover and canopy cover. It was found that approximately one third of the watershed area had low sediment yield (0–1 t ha⁻¹ y⁻¹), and presented limited erosion threat. However, 5% of the area had sediment yields in excess of 5 t ha⁻¹ y⁻¹. Overall, the model performed reasonably well, and it can be used as a management tool on tropical watersheds to estimate and compare sediment loads, and identify “hot spots” on the landscape.", "title": "" }, { "docid": "7c98ac06ea8cb9b83673a9c300fb6f4c", "text": "Heart rate monitoring from wrist-type photoplethysmographic (PPG) signals during subjects' intensive exercise is a difficult problem, since the PPG signals are contaminated by extremely strong motion artifacts caused by subjects' hand movements. In this work, we formulate the heart rate estimation problem as a sparse signal recovery problem, and use a sparse signal recovery algorithm to calculate high-resolution power spectra of PPG signals, from which heart rates are estimated by selecting corresponding spectrum peaks.
To facilitate the use of sparse signal recovery, we propose using bandpass filtering, singular spectrum analysis, and temporal difference operation to partially remove motion artifacts and sparsify PPG spectra. The proposed method was tested on PPG recordings from 10 subjects who were fast running at the peak speed of 15km/hour. The results showed that the averaged absolute estimation error was only 2.56 Beats/Minute, or 1.94% error compared to ground-truth heart rates from simultaneously recorded ECG.", "title": "" }, { "docid": "302079b366d2bc0c951e3c7d8eb30815", "text": "The rapid traffic growth and ubiquitous access requirements make it essential to explore the next generation (5G) wireless communication networks. In the current 5G research area, non-orthogonal multiple access has been proposed as a paradigm shift of physical layer technologies. Among all the existing non-orthogonal technologies, the recently proposed sparse code multiple access (SCMA) scheme is shown to achieve a better link level performance. In this paper, we extend the study by proposing an unified framework to analyze the energy efficiency of SCMA scheme and a low complexity decoding algorithm which is critical for prototyping. We show through simulation and prototype measurement results that SCMA scheme provides extra multiple access capability with reasonable complexity and energy consumption, and hence, can be regarded as an energy efficient approach for 5G wireless communication systems.", "title": "" }, { "docid": "d81fb36cad466df8629fada7e7f7cc8d", "text": "The limitations of each security technology combined with the growth of cyber attacks impact the efficiency of information security management and increase the activities to be performed by network administrators and security staff. Therefore, there is a need for the increase of automated auditing and intelligent reporting mechanisms for the cyber trust. Intelligent systems are emerging computing systems based on intelligent techniques that support continuous monitoring and controlling plant activities. Intelligence improves an individual’s ability to make better decisions. This paper presents a proposed architecture of an Intelligent System for Information Security Management (ISISM). The objective of this system is to improve security management processes such as monitoring, controlling, and decision making with an effect size that is higher than an expert in security by providing mechanisms to enhance the active construction of knowledge about threats, policies, procedures, and risks. We focus on requirements and design issues for the basic components of the intelligent system.", "title": "" }, { "docid": "2a8f464e709dcae4e34f73654aefe31f", "text": "LTE 4G cellular networks are gradually being adopted by all major operators in the world and are expected to rule the cellular landscape at least for the current decade. They will also form the starting point for further progress beyond the current generation of mobile cellular networks to chalk a path towards fifth generation mobile networks. The lack of open cellular ecosystem has limited applied research in this field within the boundaries of vendor and operator R&D groups. Furthermore, several new approaches and technologies are being considered as potential elements making up such a future mobile network, including cloudification of radio network, radio network programability and APIs following SDN principles, native support of machine-type communication, and massive MIMO. 
Research on these technologies requires realistic and flexible experimentation platforms that offer a wide range of experimentation modes from real-world experimentation to controlled and scalable evaluations while at the same time retaining backward compatibility with current generation systems.\n In this work, we present OpenAirInterface (OAI) as a suitably flexible platform towards open LTE ecosystem and playground [1]. We will demonstrate an example of the use of OAI to deploy a low-cost open LTE network using commodity hardware with standard LTE-compatible devices. We also show the reconfigurability features of the platform.", "title": "" }, { "docid": "f70c07e15c4070edf75e8846b4dff0b3", "text": "Polyphenols, including flavonoids, phenolic acids, proanthocyanidins and resveratrol, are a large and heterogeneous group of phytochemicals in plant-based foods, such as tea, coffee, wine, cocoa, cereal grains, soy, fruits and berries. Growing evidence indicates that various dietary polyphenols may influence carbohydrate metabolism at many levels. In animal models and a limited number of human studies carried out so far, polyphenols and foods or beverages rich in polyphenols have attenuated postprandial glycemic responses and fasting hyperglycemia, and improved acute insulin secretion and insulin sensitivity. The possible mechanisms include inhibition of carbohydrate digestion and glucose absorption in the intestine, stimulation of insulin secretion from the pancreatic beta-cells, modulation of glucose release from the liver, activation of insulin receptors and glucose uptake in the insulin-sensitive tissues, and modulation of intracellular signalling pathways and gene expression. The positive effects of polyphenols on glucose homeostasis observed in a large number of in vitro and animal models are supported by epidemiological evidence on polyphenol-rich diets. To confirm the implications of polyphenol consumption for prevention of insulin resistance, metabolic syndrome and eventually type 2 diabetes, human trials with well-defined diets, controlled study designs and clinically relevant end-points together with holistic approaches e.g., systems biology profiling technologies are needed.", "title": "" }, { "docid": "2b2cd290f12d98667d6a4df12697a05e", "text": "The chapter proposes three ways of integration of the two different worlds of relational and NoSQL databases: native, hybrid, and reducing to one option, either relational or NoSQL. The native solution includes using vendors’ standard APIs and integration on the business layer. In a relational environment, APIs are based on SQL standards, while the NoSQL world has its own, unstandardized solutions. The native solution means using the APIs of the individual systems that need to be connected, leaving to the businesslayer coding the task of linking and separating data in extraction and storage operations. A hybrid solution introduces an additional layer that provides SQL communication between the business layer and the data layer. The third integration solution includes vendors’ effort to foresee functionalities of “opposite” side, thus convincing developers’ community that their solution is sufficient.", "title": "" }, { "docid": "4421a42fc5589a9b91215b68e1575a3f", "text": "We present a method for extracting depth information from a rectified image pair. Our approach focuses on the first stage of many stereo algorithms: the matching cost computation. 
We approach the problem by learning a similarity measure on small image patches using a convolutional neural network. Training is carried out in a supervised manner by constructing a binary classification data set with examples of similar and dissimilar pairs of patches. We examine two network architectures for this task: one tuned for speed, the other for accuracy. The output of the convolutional neural network is used to initialize the stereo matching cost. A series of post-processing steps follow: cross-based cost aggregation, semiglobal matching, a left-right consistency check, subpixel enhancement, a median filter, and a bilateral filter. We evaluate our method on the KITTI 2012, KITTI 2015, and Middlebury stereo data sets and show that it outperforms other approaches on all three data sets.", "title": "" }, { "docid": "986df17e2fe07cf2c70c37391f99a5da", "text": "This paper is the last in a series of 16 which have explored current uses of information communications technology (ICT) in all areas of dentistry in general, and in dental education in particular. In this paper the authors explore current developments, referring back to the previous 15 papers, and speculate on how ICT should increasingly contribute to dental education in the future. After describing a vision of dental education in the next 50 years, the paper considers how ICT can help to fulfil the vision. It then takes a brief look at three aspects of the use of ICT in the world in general and speculates how dentistry can learn from other areas of human endeavour. Barriers to the use of ICT in dental education are then discussed. The final section of the paper outlines new developments in haptics, immersive environments, the semantic web, the IVIDENT project, nanotechnology and ergonometrics. The paper concludes that ICT will offer great opportunities to dental education but questions whether or not human limitations will allow it to be used to maximum effect.", "title": "" }, { "docid": "a8858713a7040ce6dd25706c9b72b45c", "text": "A new type of wearable button antenna for wireless local area network (WLAN) applications is proposed. The antenna is composed of a button with a diameter of circa 16 mm incorporating a patch on top of a dielectric disc. The button is located on top of a textile substrate and a conductive textile ground that are to be incorporated in clothing. The main characteristic feature of this antenna is that it shows two different types of radiation patterns, a monopole type pattern in the 2.4 GHz band for on-body communications and a broadside type pattern in the 5 GHz band for off-body communications. A very high efficiency of about 90% is obtained, which is much higher than similar full textile solutions in the literature. A prototype has been fabricated and measured. The effect of several real-life situations such as a tilted button and bending of the textile ground have been studied. Measurements agree very well with simulations.", "title": "" }, { "docid": "43c49bb7d9cebb8f476079ac9dd0af27", "text": "Nowadays, most recommender systems (RSs) mainly aim to suggest appropriate items for individuals. Due to the social nature of human beings, group activities have become an integral part of our daily life, thus motivating the study on group RS (GRS). However, most existing methods used by GRS make recommendations through aggregating individual ratings or individual predictive results rather than considering the collective features that govern user choices made within a group. 
As a result, such methods are heavily sensitive to data, hence they often fail to learn group preferences when the data are slightly inconsistent with predefined aggregation assumptions. To this end, we devise a novel GRS approach which accommodates both individual choices and group decisions in a joint model. More specifically, we propose a deep-architecture model built with collective deep belief networks and dual-wing restricted Boltzmann machines. With such a deep model, we can use high-level features, which are induced from lower-level features, to represent group preference so as to relieve the vulnerability of data. Finally, the experiments conducted on a real-world dataset prove the superiority of our deep model over other state-of-the-art methods.", "title": "" }, { "docid": "ca095eee8abefd4aef9fd8971efd7fb5", "text": "A radio-frequency identification (RFID) tag is a small, inexpensive microchip that emits an identifier in response to a query from a nearby reader. The price of these tags promises to drop to the range of $0.05 per unit in the next several years, offering a viable and powerful replacement for barcodes. The challenge in providing security for low-cost RFID tags is that they are computationally weak devices, unable to perform even basic symmetric-key cryptographic operations. Security researchers often therefore assume that good privacy protection in RFID tags is unattainable. In this paper, we explore a notion of minimalist cryptography suitable for RFID tags. We consider the type of security obtainable in RFID devices with a small amount of rewritable memory, but very limited computing capability. Our aim is to show that standard cryptography is not necessary as a starting point for improving security of very weak RFID devices. Our contribution is threefold: 1. We propose a new formal security model for authentication and privacy in RFID tags. This model takes into account the natural computational limitations and the likely attack scenarios for RFID tags in real-world settings. It represents a useful divergence from standard cryptographic security modeling, and thus a new view of practical formalization of minimal security requirements for low-cost RFID-tag security. 2. We describe protocol that provably achieves the properties of authentication and privacy in RFID tags in our proposed model, and in a good practical sense. Our proposed protocol involves no computationally intensive cryptographic operations, and relatively little storage. 3. Of particular practical interest, we describe some reduced-functionality variants of our protocol. We show, for instance, how static pseudonyms may considerably enhance security against eavesdropping in low-cost RFID tags. Our most basic static-pseudonym proposals require virtually no increase in existing RFID tag resources.", "title": "" }, { "docid": "fcd0c523e74717c572c288a90c588259", "text": "From analyzing 100 assessments of coping, the authors critiqued strategies and identified best practices for constructing category systems. From current systems, a list of 400 ways of coping was compiled. For constructing lower order categories, the authors concluded that confirmatory factor analysis should replace the 2 most common strategies (exploratory factor analysis and rational sorting). For higher order categories, they recommend that the 3 most common distinctions (problem- vs. emotion-focused, approach vs. avoidance, and cognitive vs. behavioral) no longer be used. 
Instead, the authors recommend hierarchical systems of action types (e.g., proximity seeking, accommodation). From analysis of 6 such systems, 13 potential core families of coping were identified. Future steps involve deciding how to organize these families, using their functional homogeneity and distinctiveness, and especially their links to adaptive processes.", "title": "" }, { "docid": "dd84b653de8b3b464c904a988a622a39", "text": "We demonstrate that for sentence-level relation extraction it is beneficial to consider other relations in the sentential context while predicting the target relation. Our architecture uses an LSTM-based encoder to jointly learn representations for all relations in a single sentence. We combine the context representations with an attention mechanism to make the final prediction. We use the Wikidata knowledge base to construct a dataset of multiple relations per sentence and to evaluate our approach. Compared to a baseline system, our method results in an average error reduction of 24% on a held-out set of relations. The code and the dataset to replicate the experiments are made available at https://github.com/ukplab.", "title": "" } ]
scidocsrr
e27ba4ab466a97ccdd24637a056982d1
Video Frame Synthesis Using Deep Voxel Flow
[ { "docid": "fdfea6d3a5160c591863351395929a99", "text": "Deep networks have recently enjoyed enormous success when applied to recognition and classification problems in computer vision [22, 33], but their use in graphics problems has been limited ([23, 7] are notable recent exceptions). In this work, we present a novel deep architecture that performs new view synthesis directly from pixels, trained from a large number of posed image sets. In contrast to traditional approaches, which consist of multiple complex stages of processing, each of which requires careful tuning and can fail in unexpected ways, our system is trained end-to-end. The pixels from neighboring views of a scene are presented to the network, which then directly produces the pixels of the unseen view. The benefits of our approach include generality (we only require posed image sets and can easily apply our method to different domains), and high quality results on traditionally difficult scenes. We believe this is due to the end-to-end nature of our system, which is able to plausibly generate pixels according to color, depth, and texture priors learnt automatically from the training data. We show view interpolation results on imagery from the KITTI dataset [12], from data from [1] as well as on Google Street View images. To our knowledge, our work is the first to apply deep learning to the problem of new view synthesis from sets of real-world, natural imagery.", "title": "" } ]
[ { "docid": "7c3bd683a927626c97ec9ae31b0bae3e", "text": "Project portfolio management in relation to innovation has increasingly gained the attention of practitioners and academics during the last decade. While significant progress has been made in the pursuit of a process approach to achieve an effective project portfolio management, limited attention has been paid to the issue of how to integrate sustainability into innovation portfolio management decision making. The literature is lacking insights on how to manage the innovation project portfolio throughout the strategic analysis phase to the monitoring of the portfolio performance in relation to sustainability during the development phase of projects. This paper presents a 5-step framework for integrating sustainability in the innovation project portfolio management process in the field of product development. The framework can be applied for the management of a portfolio of three project categories that involve breakthrough projects, platform projects and derivative projects. It is based on the assessment of various methods of project evaluation and selection, and a case analysis in the automotive industry. It enables the integration of the three dimensions of sustainability into the innovation project portfolio management process within firms. The three dimensions of sustainability involve ecological sustainability, social sustainability and economic sustainability. Another benefit is enhancing the ability of firms to achieve an effective balance of investment between the three dimensions of sustainability, taking the competitive approach of a firm toward the marketplace into account. 2014 Published by Elsevier B.V.", "title": "" }, { "docid": "f910a4d90730c1ce5bc597b001b556bf", "text": "We suggest that an appropriate role of early visual processing is to describe a scene in terms of intrinsic (veridical) characteristics-such as range, orientation, reflectance, and incident illumination-of the surface element visible at each point in the image. Support for this idea comes from three sources: the obvious utility of intrinsic characteristics for higher-level scene analysis; the apparent ability of humans to determine these characteristics, regardless of viewing conditions or familiarity with the scene; and a theoretical argument that such a description is obtainable, by a noncognitive and nonpurposive process, at least, for simple scene domains. The central problem in recovering intrinsic scene characteristics is that the information is confounded in the original light-intensity image: a single intensity value encodes all the characteristics of the corresponding scene point. Recovery depends on exploiting constraints, derived from assumptions about the nature of the scene and the physics of the imaging process. I INTRODUCTION Despite considerable progress in recent years, our understanding of the principles underlying visual perception remains primitive.
Attempts to construct computer models for the interpretation of arbitrary scenes have resulted in such poor performance, limited range of abilities, and inflexibility that, were it not for the human existence proof, we might have been tempted long ago to conclude that high-performance, general-purpose vision is impossible. On the other hand, attempts to unravel the mystery of human vision, have resulted in a limited understanding of the elementary neurophysiology, and a wealth of phenomenological observations of the total system, but not, as yet, in a cohesive model of how the system functions. The time is right for those in both fields to take a broader view: those in computer vision might do well to look harder at the phenomenology of human vision for clues that might indicate fundamental inadequacies of current aproaches; these concerned with human vision might gain insights by thinking more about what information is sought, and how it might be obtained, from a computational point of view. This position has been strongly advocated for some time by Horn [18-20] and Marr [26-29] at MIT. Current scene analysis systems often use pictorial features, such as regions of uniform intensity, or step charges in intensity, as an initial level of description and then jump directly to descriptions at the level of complete objects. The limitations of this approach are well known [4]: first, region-growing and edge-finding programs are unreliable in extracting the …", "title": "" }, { "docid": "c66b529b1de24c8031622f3d28b3ada4", "text": "This work addresses the design of a dual-fed aperture-coupled circularly polarized microstrip patch antenna, operating at its fundamental mode. A numerical parametric assessment was carried out, from which some general practical guidelines that may aid the design of such antennas were derived. Validation was achieved by a good match between measured and simulated results obtained for a specific antenna set assembled, chosen from the ensemble of the numerical analysis.", "title": "" }, { "docid": "2d0c28d1c23ecee1f1a08be11a49aaa2", "text": "Dictionary learning has became an increasingly important task in machine learning, as it is fundamental to the representation problem. A number of emerging techniques specifically include a codebook learning step, in which a critical knowledge abstraction process is carried out. Existing approaches in dictionary (codebook) learning are either generative (unsupervised e.g. k-means) or discriminative (supervised e.g. extremely randomized forests). In this paper, we propose a multiple instance learning (MIL) strategy (along the line of weakly supervised learning) for dictionary learning. Each code is represented by a classifier, such as a linear SVM, which naturally performs metric fusion for multi-channel features. We design a formulation to simultaneously learn mixtures of codes by maximizing classification margins in MIL. State-of-the-art results are observed in image classification benchmarks based on the learned codebooks, which observe both compactness and effectiveness.", "title": "" }, { "docid": "36142a4c0639662fe52dcc3fdf7b1ca4", "text": "We present hierarchical change-detection tests (HCDTs), as effective online algorithms for detecting changes in datastreams. HCDTs are characterized by a hierarchical architecture composed of a detection layer and a validation layer. 
The detection layer steadily analyzes the input datastream by means of an online, sequential CDT, which operates as a low-complexity trigger that promptly detects possible changes in the process generating the data. The validation layer is activated when the detection one reveals a change, and performs an offline, more sophisticated analysis on recently acquired data to reduce false alarms. Our experiments show that, when the process generating the datastream is unknown, as it is mostly the case in the real world, HCDTs achieve a far more advantageous tradeoff between false-positive rate and detection delay than their single-layered, more traditional counterpart. Moreover, the successful interplay between the two layers permits HCDTs to automatically reconfigure after having detected and validated a change. Thus, HCDTs are able to reveal further departures from the postchange state of the data-generating process.", "title": "" }, { "docid": "c8bfa845f5eaaeeab5bcf7bdc601bfb5", "text": "Completely labeled pathology datasets are often challenging and time-consuming to obtain. Semi-supervised learning (SSL) methods are able to learn from fewer labeled data points with the help of a large number of unlabeled data points. In this paper, we investigated the possibility of using clustering analysis to identify the underlying structure of the data space for SSL. A cluster-then-label method was proposed to identify high-density regions in the data space which were then used to help a supervised SVM in finding the decision boundary. We have compared our method with other supervised and semi-supervised state-of-the-art techniques using two different classification tasks applied to breast pathology datasets. We found that compared with other state-of-the-art supervised and semi-supervised methods, our SSL method is able to improve classification performance when a limited number of labeled data instances are made available. We also showed that it is important to examine the underlying distribution of the data space before applying SSL techniques to ensure semi-supervised learning assumptions are not violated by the data.", "title": "" }, { "docid": "158e71c3e5877e339c7fcf3616ab77b1", "text": "UNLABELLED\nRecurrent low back pain (LBP) is associated with altered motor coordination of the lumbar paraspinal muscles. Whether these changes can be modified with motor training remains unclear. Twenty volunteers with unilateral LBP were randomly assigned to cognitively activate the lumbar multifidus independently from other back muscles (skilled training) or to activate all paraspinal muscles with no attention to any specific muscles (extension training). Electromyographic (EMG) activity of deep (DM) and superficial multifidus (SM) muscles were recorded bilaterally using intramuscular fine-wire electrodes and that of superficial abdominal and back muscles using surface electrodes. Motor coordination was assessed before and immediately after training as onsets of trunk muscle EMG during rapid arm movements, and as EMG amplitude at the mid-point of slow trunk flexion-extension movements. Despite different intentions of the training tasks, the pattern of activity was similar for both. After both training tasks, activation of the DM and SM muscles was earlier during rapid arm movements. However, during slow trunk movements, DM and SM activity was increased, and EMG activity of the superficial trunk muscles was reduced only after skilled training. 
These findings show the potential to alter motor coordination with motor training of the lumbar paraspinal muscles in recurrent LBP.\n\n\nPERSPECTIVES\nChanges in motor coordination differed between skilled and extension training during slows trunk movements. As identical patterns of muscle activity were observed between training protocols, the results suggest that training-induced changes in motor coordination are not simply related to the muscle activation, but appear to be related to the task.", "title": "" }, { "docid": "80824fdff1ffea11a7ebc97fad239482", "text": "The increasing quality and affordability of consumer electroencephalogram (EEG) headsets make them attractive for situations where medical grade devices are impractical. Predicting and tracking cognitive states is possible for tasks that were previously not conducive to EEG monitoring. For instance, monitoring operators for states inappropriate to the task (e.g. drowsy drivers), tracking mental health (e.g. anxiety) and productivity (e.g. tiredness) are among possible applications for the technology. Consumer grade EEG headsets are affordable and relatively easy to use, but they lack the resolution and quality of signal that can be achieved using medical grade EEG devices. Thus, the key questions remain: to what extent are wearable EEG devices capable of mental state recognition, and what kind of mental states can be accurately recognized with these devices? In this work, we examined responses to two different types of input: instructional (‘logical’) versus recreational (‘emotional‘) videos, using a range of machine-learning methods. We tried SVMs, sparse logistic regression, and Deep Belief Networks, to discriminate between the states of mind induced by different types of video input, that can be roughly labeled as ‘logical’ vs. ‘emotional’. Our results demonstrate a significant potential of wearable EEG devices in differentiating cognitive states between situations with large contextual but subtle apparent differences.", "title": "" }, { "docid": "13ecd39b2b49fb108ed03e28e8a0578b", "text": "Optional stopping refers to the practice of peeking at data and then, based on the results, deciding whether or not to continue an experiment. In the context of ordinary significance-testing analysis, optional stopping is discouraged, because it necessarily leads to increased type I error rates over nominal values. This article addresses whether optional stopping is problematic for Bayesian inference with Bayes factors. Statisticians who developed Bayesian methods thought not, but this wisdom has been challenged by recent simulation results of Yu, Sprenger, Thomas, and Dougherty (2013) and Sanborn and Hills (2013). In this article, I show through simulation that the interpretation of Bayesian quantities does not depend on the stopping rule. Researchers using Bayesian methods may employ optional stopping in their own research and may provide Bayesian analysis of secondary data regardless of the employed stopping rule. I emphasize here the proper interpretation of Bayesian quantities as measures of subjective belief on theoretical positions, the difference between frequentist and Bayesian interpretations, and the difficulty of using frequentist intuition to conceptualize the Bayesian approach.", "title": "" }, { "docid": "a4d789c37eea4505fff66ebe875601a3", "text": "A mechanistic model for out-of-order superscalar processors is developed and then applied to the study of microarchitecture resource scaling. 
The model divides execution time into intervals separated by disruptive miss events such as branch mispredictions and cache misses. Each type of miss event results in characterizable performance behavior for the execution time interval. By considering an interval's type and length (measured in instructions), execution time can be predicted for the interval. Overall execution time is then determined by aggregating the execution time over all intervals. The mechanistic model provides several advantages over prior modeling approaches, and, when estimating performance, it differs from detailed simulation of a 4-wide out-of-order processor by an average of 7%.\n The mechanistic model is applied to the general problem of resource scaling in out-of-order superscalar processors. First, we use the model to determine size relationships among microarchitecture structures in a balanced processor design. Second, we use the mechanistic model to study scaling of both pipeline depth and width in balanced processor designs. We corroborate previous results in this area and provide new results. For example, we show that at optimal design points, the pipeline depth times the square root of the processor width is nearly constant. Finally, we consider the behavior of unbalanced, overprovisioned processor designs based on insight gained from the mechanistic model. We show that in certain situations an overprovisioned processor may lead to improved overall performance. Designs where a processor's dispatch width is wider than its issue width are of particular interest.", "title": "" }, { "docid": "9a93c8a3a678cbc6a5dd0f20f8a4157c", "text": "A one-step, mild procedure based on coaxial electrospinning was developed for incorporation and controlled release of two model proteins, BSA and lysozyme, from biodegradable core-shell nanofibers with PCL as shell and protein-containing PEG as core. The thickness of the core and shell could be adjusted by the feed rate of the inner dope, which in turn affected the release profiles of the incorporated proteins. It was revealed that the released lysozyme maintained its structure and bioactivity. The current method may find wide applications for controlled release of proteins and tissue engineering.", "title": "" }, { "docid": "76c6dea53623c831186afc202d260608", "text": "We present CitNetExplorer, a new software tool for analyzing and visualizing citation networks of scientific publications. CitNetExplorer can for instance be used to study the development of a research field, to delineate the literature on a research topic, and to support literature reviewing. We first introduce the main concepts that need to be understood when working with CitNetExplorer. We then demonstrate CitNetExplorer by using the tool to analyze the scientometric literature and the literature on community detection in networks. Finally, we discuss some technical details on the construction, visualization, and analysis of citation networks in CitNetExplorer.", "title": "" }, { "docid": "5745ed6c874867ad2de84b040e40d336", "text": "The chemokine (C-X-C motif) ligand 1 (CXCL1) regulates tumor-stromal interactions and tumor invasion. However, the precise role of CXCL1 on gastric tumor growth and patient survival remains unclear. In the current study, protein expressions of CXCL1, vascular endothelial growth factor (VEGF) and phospho-signal transducer and activator of transcription 3 (p-STAT3) in primary tumor tissues from 98 gastric cancer patients were measured by immunohistochemistry (IHC). 
CXCL1 overexpressed cell lines were constructed using Lipofectamine 2000 reagent or lentiviral vectors. Effects of CXCL1 on VEGF expression and local tumor growth were evaluated in vitro and in vivo. CXCL1 was positively expressed in 41.4% of patients and correlated with VEGF and p-STAT3 expression. Higher CXCL1 expression was associated with advanced tumor stage and poorer prognosis. In vitro studies in AGS and SGC-7901 cells revealed that CXCL1 increased cell migration but had little effect on cell proliferation. CXCL1 activated VEGF signaling in gastric cancer (GC) cells, which was inhibited by STAT3 or chemokine (C-X-C motif) receptor 2 (CXCR2) blockade. CXCL1 also increased p-STAT3 expression in GC cells. In vivo, CXCL1 increased xenograft local tumor growth, phospho-Janus kinase 2 (p-JAK2), p-STAT3 levels, VEGF expression and microvessel density. These results suggested that CXCL1 increased local tumor growth through activation of VEGF signaling which may have mechanistic implications for the observed inferior GC survival. The CXCL1/CXCR2 pathway might be potent to improve anti-angiogenic therapy for gastric cancer.", "title": "" }, { "docid": "ed769b97bea6d4bbe7e282ad6dbb1c67", "text": "Three basic switching structures are defined: one is formed by two capacitors and three diodes; the other two are formed by two inductors and two diodes. They are inserted in either a Cuk converter, or a Sepic, or a Zeta converter. The SC/SL structures are built in such a way as when the active switch of the converter is on, the two inductors are charged in series or the two capacitors are discharged in parallel. When the active switch is off, the two inductors are discharged in parallel or the two capacitors are charged in series. As a result, the line voltage is reduced more times than in classical Cuk/Sepic/Zeta converters. The steady-state analysis of the new converters, a comparison of the DC voltage gain and of the voltage and current stresses of the new hybrid converters with those of the available quadratic converters, and experimental results are given", "title": "" }, { "docid": "034aee35a236731d3d6b50d53c4ea718", "text": "Preservation permitting patterns of developmental evolution can be reconstructed within long extinct clades, and the rich fossil record of trilobite ontogeny and phylogeny provides an unparalleled opportunity for doing so. Furthermore, knowledge of Hox gene expression patterns among living arthropods permit inferences about possible Hox gene deployment in trilobites. The trilobite anteroposterior body plan is consistent with recent suggestions that basal euarthropods had a relatively low degree of tagmosis among cephalic limbs, possibly related to overlapping expression domains of cephalic Hox genes. Trilobite trunk segments appeared sequentially at a subterminal generative zone, and were exchanged between regions of fused and freely articulating segments during growth. Homonomous trunk segment shape and gradual size transition were apparently phylogenetically basal conditions and suggest a single trunk tagma. Several derived clades independently evolved functionally distinct tagmata within the trunk, apparently exchanging flexible segment numbers for greater regionally autonomy. 
The trilobite trunk chronicles how different aspects of arthropod segmentation coevolved as the degree of tagmosis increased.", "title": "" }, { "docid": "9d8b0a97eb195c972c1c0d989625a600", "text": "Emerging millimeter-wave frequency applications require high performance, low-cost and compact devices and circuits. This is the reason why the Substrate Integrated Waveguide (SIW) technology, which combines some advantages of planar circuits and metallic waveguides, has focused a lot of attention in recent years. However, not all three-dimensional metallic waveguide devices and circuit are integrable in planar form. In its first section, this paper reviews recently proposed three-dimensional SIW devices that are taking advantages of the third-dimension to achieve either more compact or multidimensional circuits at millimeter wave frequencies. Also, in a second section, special interest is oriented to recent development of air-filled SIW based on low-cost multilayer printed circuit board (PCB) for high performance millimeter-wave substrate integrated circuits and systems.", "title": "" }, { "docid": "1c4e71d00521219717607cbef90b5bec", "text": "The design of security for cyber-physical systems must take into account several characteristics common to such systems. Among these are feedback between the cyber and physical environment, distributed management and control, uncertainty, real-time requirements, and geographic distribution. This paper discusses these characteristics and suggests a design approach that better integrates security into the core design of the system. A research roadmap is presented that highlights some of the missing pieces needed to enable such an approach. 1. What is a Cyber-Physical-System? The term cyber-physical system has been applied to many problems, ranging from robotics, through SCADA, and distributed control systems. Not all cyber-physical systems involve critical infrastructure, but there are common elements that change the nature of the solutions that must be considered when securing cyber-physical systems. First, the extremely critical nature of activities performed by some cyber-physical systems means that we need security that works, and that by itself means we need something different. All kidding aside, there are fundamental system differences in cyber-physical systems that will force us to look at security in ways more closely tied to the physical application. It is my position that by focusing on these differences we can see where new (or rediscovered) approaches are needed, and that by building systems that support the inclusion of security as part of the application architecture, we can improve the security of both cyber-physical systems, where such an approach is most clearly warranted, as well as improve the security of cyber-only systems, where such an approach is more easily ignored. In this position paper I explain the characteristics of cyber-physical systems that must drive new research in security. I discuss the security problem areas that need attention because of these characteristics and I describe a design methodology for security that provides for better integration of security design with application design. 
Finally, I suggest some of the components of future systems that can help us include security as a focusing issue in the architectural design of critical applications.", "title": "" }, { "docid": "739aaf487d6c5a7b7fe9d0157d530382", "text": "A blockchain framework is presented for addressing the privacy and security challenges associated with the Big Data in smart mobility. It is composed of individuals, companies, government and universities where all the participants collect, own, and control their data. Each participant shares their encrypted data to the blockchain network and can make information transactions with other participants as long as both parties agree to the transaction rules (smart contract) issued by the owner of the data. Data ownership, transparency, auditability and access control are the core principles of the proposed blockchain for smart mobility Big Data.", "title": "" }, { "docid": "e010498736203e56b900ffc7b0585c34", "text": "Many of the security properties that are outlined repeatedly in the newer regulations and standards can easily be side-stepped. Too often the culprits are unsophisticated software development techniques, a lack of security-focused quality assurance, and scarce security training for software developers, software architects, and project managers. To meet future needs, opportunities, and threats associated with information security, security needs to be "baked in" to the overall systems development life-cycle process. Information security and privacy loom ever larger as issues for public and private sector organizations alike today. Government regulations and industry standards attempt to address these issues. Computer hardware and software providers invest in meeting both regulatory and market demands for information security and privacy. And individual organizations — corporations and government agencies alike — are voicing concern about the problem.", "title": "" }, { "docid": "bd431b9c908dc779726751cb8a23dacd", "text": "Modbus protocol is a de-facto standard protocol in industrial automation. As the size and complexity of industry systems increase rapidly, the importance of real-time communication protocols arises as well. In this paper, we analyze the performance of the Modbus/TCP communication protocol which is implemented using the Network Simulator version 3 (NS-3). The performance evaluation focuses on the response time depending on the number of nodes and topology.", "title": "" } ]
scidocsrr
ff94e983994594cd5e31b6fda2fac4a3
Cybersecurity of SCADA Systems: Vulnerability assessment and mitigation
[ { "docid": "b9efcefffc894501f7cfc42d854d6068", "text": "Disruption of electric power operations can be catastrophic on the national security and economy. Due to the complexity of widely dispersed assets and the interdependency between computer, communication, and power systems, the requirement to meet security and quality compliance on the operations is a challenging issue. In recent years, NERC's cybersecurity standard was initiated to require utilities compliance on cybersecurity in control systems - NERC CIP 1200. This standard identifies several cyber-related vulnerabilities that exist in control systems and recommends several remedial actions (e.g., best practices). This paper is an overview of the cybersecurity issues for electric power control and automation systems, the control architectures, and the possible methodologies for vulnerability assessment of existing systems.", "title": "" }, { "docid": "480f940bf5a2226b659048d9840582d9", "text": "Vulnerability assessment is a requirement of NERC's cybersecurity standards for electric power systems. The purpose is to study the impact of a cyber attack on supervisory control and data acquisition (SCADA) systems. Compliance of the requirement to meet the standard has become increasingly challenging as the system becomes more dispersed in wide areas. Interdependencies between computer communication system and the physical infrastructure also become more complex as information technologies are further integrated into devices and networks. This paper proposes a vulnerability assessment framework to systematically evaluate the vulnerabilities of SCADA systems at three levels: system, scenarios, and access points. The proposed method is based on cyber systems embedded with the firewall and password models, the primary mode of protection in the power industry today. The impact of a potential electronic intrusion is evaluated by its potential loss of load in the power system. This capability is enabled by integration of a logic-based simulation method and a module for the power flow computation. The IEEE 30-bus system is used to evaluate the impact of attacks launched from outside or from within the substation networks. Countermeasures are identified for improvement of the cybersecurity.", "title": "" } ]
[ { "docid": "5cda87e3e8f5e5794db7ec2a523eb807", "text": "Active learning techniques have gained popularity to reduce human effort in labeling data instances for inducing a classifier. When faced with large amounts of unlabeled data, such algorithms automatically identify the exemplar and representative instances to be selected for manual annotation. More recently, there have been attempts toward a batch mode form of active learning, where a batch of data points is simultaneously selected from an unlabeled set. Real-world applications require adaptive approaches for batch selection in active learning, depending on the complexity of the data stream in question. However, the existing work in this field has primarily focused on static or heuristic batch size selection. In this paper, we propose two novel optimization-based frameworks for adaptive batch mode active learning (BMAL), where the batch size as well as the selection criteria are combined in a single formulation. We exploit gradient-descent-based optimization strategies as well as properties of submodular functions to derive the adaptive BMAL algorithms. The solution procedures have the same computational complexity as existing state-of-the-art static BMAL techniques. Our empirical results on the widely used VidTIMIT and the mobile biometric (MOBIO) data sets portray the efficacy of the proposed frameworks and also certify the potential of these approaches in being used for real-world biometric recognition applications.", "title": "" }, { "docid": "b41a64f09b640e8c20c602878abf1996", "text": "Electronic Health Records (EHRs) are entirely controlled by hospitals instead of patients, which complicates seeking medical advices from different hospitals. Patients face a critical need to focus on the details of their own healthcare and restore management of their own medical data. The rapid development of blockchain technology promotes population healthcare, including medical records as well as patient-related data. This technology provides patients with comprehensive, immutable records, and access to EHRs free from service providers and treatment websites. In this paper, to guarantee the validity of EHRs encapsulated in blockchain, we present an attribute-based signature scheme with multiple authorities, in which a patient endorses a message according to the attribute while disclosing no information other than the evidence that he has attested to it. Furthermore, there are multiple authorities without a trusted single or central one to generate and distribute public/private keys of the patient, which avoids the escrow problem and conforms to the mode of distributed data storage in the blockchain. By sharing the secret pseudorandom function seeds among authorities, this protocol resists collusion attack out of $N$ from $N-1$ corrupted authorities. Under the assumption of the computational bilinear Diffie-Hellman, we also formally demonstrate that, in terms of the unforgeability and perfect privacy of the attribute-signer, this attribute-based signature scheme is secure in the random oracle model. The comparison shows the efficiency and properties between the proposed method and methods proposed in other studies.", "title": "" }, { "docid": "aa54e87bd6a2967ceda284975bdedfeb", "text": "In this paper we present a method for segmentation of fingernail patterns and differentiate them as distinct nail parts; fingernail plate with lunula and distal free edge of nail plate. 
In the research work, focus is on fixed area of the fingernail plate plus lunula, as it remains unchanged in structure, where as the distal nail edge extends and changes in structure over a period of time. In order to segment fingernail parts, we have devised an algorithm that automatically separates unchanging region of fingernail plate from free distal edge of nail structure. The fingernail plate that includes lunula within (may or may not be prominently present in fingernails), is used as biometric in our advance study. Theory suggests, every fingernail within finger formation comprises of the brightest regions amongst the captured finger data set (in our system). Proposed method is of two stages. In first stage, color image is converted to gray scale and contrast enhancement is applied using adaptive histogram equalization. In second stage, we perform segmentation using watershed method that exercises maxima and minima properties of marker controlled watershed principles. In order to verify the results of the algorithm, we have constructed a confusion matrix where evaluation has been done with ground truth. Additionally, the segmented object's from both the methods is considered for quality metrics assessment. Similarity accuracy between the ground truth and watershed result is 84.0% correctness for fingernail plate. Initial fingernail segmentation results are promising, supporting its use for biometric application.", "title": "" }, { "docid": "d5880bfd50b5f82c27e74774d7f0f51a", "text": "Fatigue is often cited by clinicians as a debilitating symptom suffered by the many who are infected with HIV. This article provides a review of HIV-related fatigue, including research on possible physiological causes such as anemia, CD4 count, impaired liver function, impaired thyroid function, and cortisol abnormalities. Psychological causes of fatigue, particularly depression, are reviewed as well. Measurement issues, such as the use of inappropriate tools, the problem of measuring the presence or absence of fatigue, and the use of tools developed for other groups of patients, are reviewed. The need for a comprehensive fatigue tool that is appropriate for people with HIV is discussed. Current treatment research, including thyroid replacement, hyperbaric oxygen, and dextroamphetamine, is presented. Finally, the implications for further research, including the need for qualitative studies to learn more about the phenomenon, develop an instrument to measure fatigue, and examine variables together to get a complete picture of this complex concept, are reviewed.", "title": "" }, { "docid": "18285ee4096c50691b9949315abb4d21", "text": "Automated visual inspection (AVI) is becoming an integral part of modern surface mount technology (SMT) assembly process. This high technology assembly, produces printed circuit boards (PCB) with tiny and delicate electronic components. With the increase in demand for such PCBs, high-volume production has to cater for both the quantity and zero defect quality assurance. The ever changing technology in fabrication, placement and soldering of SMT electronic components have caused an increase in PCB defects both in terms of numbers and types. Consequently, a wide range of defect detecting techniques and algorithms have been reported and implemented in AVI systems in the past decade. Unfortunately, the turn-over rate for PCB inspection is very crucial in the electronic industry. Current AVI systems spend too much time inspecting PCBs on a component-bycomponent basis. 
In this paper, we focus on providing a solution that can cover a larger inspection area of a PCB at any one time. This will reduce inspection time and increase the throughput of PCB production. Our solution is targeted for missing and misalignment defects of SMT devices in a PCB. An alternative visual inspection approach using color background subtraction is presented to address the stated defect. Experimental results of various defect PCBs are also presented. Key–Words: PCB Inspection, Background Subtraction, Automated Visual Inspection.", "title": "" }, { "docid": "4ea537e5b8c773c318a81c0ba7a8d789", "text": "Behavioral economics increases the explanatory power of economics by providing it with more realistic psychological foundations. This book consists of representative recent articles in behavioral economics. This chapter is intended to provide an introduction to the approach and methods of behavioral economics, and to some of its major findings, applications, and promising new directions. It also seeks to fill some unavoidable gaps in the chapters’ coverage of topics.", "title": "" }, { "docid": "b531674f21e88ac82071583531e639c6", "text": "OBJECTIVE\nTo evaluate use of, satisfaction with, and social adjustment with adaptive devices compared with prostheses in young people with upper limb reduction deficiencies.\n\n\nMETHODS\nCross-sectional study of 218 young people with upper limb reduction deficiencies (age range 2-20 years) and their parents. A questionnaire was used to evaluate participants' characteristics, difficulties encountered, and preferred solutions for activities, use satisfaction, and social adjustment with adaptive devices vs prostheses. The Quebec User Evaluation of Satisfaction with assistive Technology and a subscale of Trinity Amputation and Prosthesis Experience Scales were used.\n\n\nRESULTS\nOf 218 participants, 58% were boys, 87% had transversal upper limb reduction deficiencies, 76% with past/present use of adaptive devices and 37% with past/present use of prostheses. Young people (> 50%) had difficulties in performing activities. Of 360 adaptive devices, 43% were used for self-care (using cutlery), 28% for mobility (riding a bicycle) and 5% for leisure activities. Prostheses were used for self-care (4%), mobility (9%), communication (3%), recreation and leisure (6%) and work/employment (4%). The preferred solution for difficult activities was using unaffected and affected arms/hands and other body parts (> 60%), adaptive devices (< 48%) and prostheses (< 9%). Satisfaction and social adjustment with adaptive devices were greater than with prostheses (p < 0.05).\n\n\nCONCLUSION\nYoung people with upper limb reduction deficiencies are satisfied and socially well-adjusted with adaptive devices. Adaptive devices are good alternatives to prostheses.", "title": "" }, { "docid": "689f7aad97d36f71e43e843a331fcf5d", "text": "Dimension-reducing feature extraction neural network techniques which also preserve neighbourhood relationships in data have traditionally been the exclusive domain of Kohonen self organising maps. Recently, we introduced a novel dimension-reducing feature extraction process, which is also topographic, based upon a Radial Basis Function architecture. It has been observed that the generalisation performance of the system is broadly insensitive to model order complexity and other smoothing factors such as the kernel widths, contrary to intuition derived from supervised neural network models. 
In this paper we provide an effective demonstration of this property and give a theoretical justification for the apparent 'self-regularising' behaviour of the 'NEUROSCALE' architecture. 1 'NeuroScale': A Feed-forward Neural Network Topographic Transformation Recently an important class of topographic neural network based feature extraction approaches, which can be related to the traditional statistical methods of Sammon Mappings (Sammon, 1969) and Multidimensional Scaling (Kruskal, 1964), have been introduced (Mao and Jain, 1995; Lowe, 1993; Webb, 1995; Lowe and Tipping, 1996). These novel alternatives to Kohonen-like approaches for topographic feature extraction possess several interesting properties. For instance, the NEUROSCALE architecture has the empirically observed property that the generalisation performance does not seem to depend critically on model order complexity, contrary to intuition based upon knowledge of its supervised counterparts. This paper presents evidence for their 'self-regularising' behaviour and provides an explanation in terms of the curvature of the trained models. We now provide a brief introduction to the NEUROSCALE philosophy of nonlinear topographic feature extraction. Further details may be found in (Lowe, 1993; Lowe and Tipping, 1996). We seek a dimension-reducing, topographic transformation of data for the purposes of visualisation and analysis. By 'topographic', we imply that the geometric structure of the data be optimally preserved in the transformation, and the embodiment of this constraint is that the inter-point distances in the feature space should correspond as closely as possible to those distances in the data space. The implementation of this principle by a neural network is very simple. A Radial Basis Function (RBF) neural network is utilised to predict the coordinates of the data point in the transformed feature space. The locations of the feature points are indirectly determined by adjusting the weights of the network. The transformation is determined by optimising the network parameters in order to minimise a suitable error measure that embodies the topographic principle. The specific details of this alternative approach are as follows. Given an m-dimensional input space of N data points x_q, an n-dimensional feature space of points y_q is generated such that the relative positions of the feature space points minimise the error, or 'STRESS', term: E = \sum_{p}^{N} \sum_{q>p} (d^*_{qp} - d_{qp})^2, (1) where the d^*_{qp} are the inter-point Euclidean distances in the data space: d^*_{qp} = \sqrt{(x_q - x_p)^T (x_q - x_p)}, and the d_{qp} are the corresponding distances in the feature space: d_{qp} = \sqrt{(y_q - y_p)^T (y_q - y_p)}. The points y are generated by the RBF, given the data points as input. That is, y_q = f(x_q; W), where f is the nonlinear transformation effected by the RBF with parameters (weights and any kernel smoothing factors) W. The distances in the feature space may thus be given by d_{qp} = \|f(x_q) - f(x_p)\| and so more explicitly by", "title": "" }, { "docid": "6b0cfbadd815713179b2312293174379", "text": "In order to take full advantage of the SiC devices' high-temperature and high-frequency capabilities, a transformer isolated gate driver is designed for the SiC JFET phase leg module to achieve a fast switching speed of 26V/ns and a small cross-talking voltage of 4.2V in a 650V and 5A inductive load test. Transformer isolated gate drive circuits suitable for high-temperature applications are compared with respect to different criteria.
Based on the comparison, an improved edge triggered gate drive topology is proposed. Then, using the proposed gate drive topology, special issues in the phase-leg gate drive design are discussed. Several strategies are implemented to improve the phase-leg gate drive performance and alleviate the cross-talking issue. Simulation and experimental results are given for verification purposes.", "title": "" }, { "docid": "d0e442630ad81aaa011f2a8d7e6034ee", "text": "Manifold theory has been the central concept of many learning methods. However, learning modern CNNs with manifold structures has not raised due attention, mainly because of the inconvenience of imposing manifold structures onto the architecture of the CNNs. In this paper we present ManifoldNet, a novel method to encourage learning of manifold-aware representations. Our approach segments the input manifold into a set of fragments. By assigning the corresponding segmentation id as a pseudo label to every sample, we convert the problem of preserving the local manifold structure into a point-wise classification task. Due to its unsupervised nature, the segmentation tends to be noisy. We mitigate this by introducing ensemble manifold segmentation (EMS). EMS accounts for the manifold structure by dividing the training data into an ensemble of classification training sets that contain samples of local proximity. CNNs are trained on these ensembles under a multi-task learning framework to conform to the manifold. ManifoldNet can be trained with only the pseudo labels or together with task-specific labels. We evaluate ManifoldNet on two different tasks: network imitation (distillation) and semi-supervised learning. Our experiments show that the manifold structures are effectively utilized for both unsupervised and semi-supervised learning.", "title": "" }, { "docid": "db70302a3d7e7e7e5974dd013e587b12", "text": "In recent years, the emerging Internet-of-Things (IoT) has led to rising concerns about the security of networked embedded devices. In this work, we propose the SIPHON architecture---a Scalable high-Interaction Honeypot platform for IoT devices. Our architecture leverages IoT devices that are physically at one location and are connected to the Internet through so-called \\emph{wormholes} distributed around the world. The resulting architecture allows exposing few physical devices over a large number of geographically distributed IP addresses. We demonstrate the proposed architecture in a large scale experiment with 39 wormhole instances in 16 cities in 9 countries. Based on this setup, five physical IP cameras, one NVR and one IP printer are presented as 85 real IoT devices on the Internet, attracting a daily traffic of 700MB for a period of two months. A preliminary analysis of the collected traffic indicates that devices in some cities attracted significantly more traffic than others (ranging from 600 000 incoming TCP connections for the most popular destination to less than 50 000 for the least popular). We recorded over 400 brute-force login attempts to the web-interface of our devices using a total of 1826 distinct credentials, from which 11 attempts were successful. Moreover, we noted login attempts to Telnet and SSH ports some of which used credentials found in the recently disclosed Mirai malware.", "title": "" }, { "docid": "00ffbb0d49677681c9db68c3b9d410ad", "text": "Despite distinct differences between walking and running, the two types of human locomotion are likely to be controlled by shared pattern-generating networks. 
However, the differences between their kinematics and kinetics imply that corresponding muscle activations may also be quite different. We examined the differences between walking and running by recording kinematics and electromyographic (EMG) activity in 32 ipsilateral limb and trunk muscles during human locomotion, and compared the effects of speed (3-12 km/h) and gait. We found that the timing of muscle activation was accounted for by five basic temporal activation components during running as we previously found for walking. Each component was loaded on similar sets of leg muscles in both gaits but generally on different sets of upper trunk and shoulder muscles. The major difference between walking and running was that one temporal component, occurring during stance, was shifted to an earlier phase in the step cycle during running. These muscle activation differences between gaits did not simply depend on locomotion speed as shown by recordings during each gait over the same range of speeds (5-9 km/h). The results are consistent with an organization of locomotion motor programs having two parts, one that organizes muscle activation during swing and another during stance and the transition to swing. The timing shift between walking and running reflects therefore the difference in the relative duration of the stance phase in the two gaits.", "title": "" }, { "docid": "caa252bbfad7ab5c989ae7687818f8ae", "text": "Nowadays, GPU accelerators are widely used in areas with large data-parallel computations such as scientific computations or neural networks. Programmers can either write code in low-level CUDA/OpenCL code or use a GPU extension for a high-level programming language for better productivity. Most extensions focus on statically-typed languages, but many programmers prefer dynamically-typed languages due to their simplicity and flexibility. \n This paper shows how programmers can write high-level modular code in Ikra, a Ruby extension for array-based GPU computing. Programmers can compose GPU programs of multiple reusable parallel sections, which are subsequently fused into a small number of GPU kernels. We propose a seamless syntax for separating code regions that extensively use dynamic language features from those that are compiled for efficient execution. Moreover, we propose symbolic execution and a program analysis for kernel fusion to achieve performance that is close to hand-written CUDA code.", "title": "" }, { "docid": "50906e5d648b7598c307b09975daf2d8", "text": "Digitization forces industries to adapt to changing market conditions and consumer behavior. Exponential advances in technology, increased consumer power and sharpened competition imply that companies are facing the menace of commoditization. To sustainably succeed in the market, obsolete business models have to be adapted and new business models can be developed. Differentiation and unique selling propositions through innovation as well as holistic stakeholder engagement help companies to master the transformation. To enable companies and start-ups facing the implications of digital change, a tool was created and designed specifically for this demand: the Business Model Builder. This paper investigates the process of transforming the Business Model Builder into a software-supported digitized version. The digital twin allows companies to simulate the iterative adjustment of business models to constantly changing market conditions as well as customer needs on an ongoing basis. 
The user can modify individual variables, understand interdependencies and see the impact on the result of the business case, i.e. earnings before interest and taxes (EBIT) or economic value added (EVA). The simulation of a business models accordingly provides the opportunity to generate a dynamic view of the business model where any changes of input variables are considered in the result, the business case. Thus, functionality, feasibility and profitability of a business model can be reviewed, tested and validated in the digital simulation tool.", "title": "" }, { "docid": "1269bdb48c686c9643f007d4aee4afea", "text": "Hundreds of public SPARQL endpoints have been deployed on the Web, forming a novel decentralised infrastructure for querying billions of structured facts from a variety of sources on a plethora of topics. But is this infrastructure mature enough to support applications? For 427 public SPARQL endpoints registered on the DataHub, we conduct various experiments to test their maturity. Regarding discoverability, we find that only one-third of endpoints make descriptive meta-data available, making it difficult to locate or learn about their content and capabilities. Regarding interoperability, we find patchy support for established SPARQL features like ORDER BY as well as (understandably) for new SPARQL 1.1 features. Regarding efficiency, we show that the performance of endpoints for generic queries can vary by up to 3–4 orders of magnitude. Regarding availability, based on a 27-month long monitoring experiment, we show that only 32.2% of public endpoints can be expected to have (monthly) “two-nines” uptimes of 99–100%.", "title": "" }, { "docid": "10973f1a045d05084039f05e92578f9a", "text": "Determination of credit portfolio loss distributions is essential for the valuation and risk management of multi-name credit derivatives such as CDOs. The default time model has recently become a market standard approach for capturing the default correlation, which is one of the main drivers for the portfolio loss. However, the default time model yields very different default dependency compared with a continuous-time credit migration model. To build a connection between them, we calibrate the correlation parameter of a single-factor Gaussian copula model to portfolio loss distribution determined from a multi-step credit migration simulation. The deal correlation is produced as a measure of the portfolio average correlation effect that links the two models. Procedures for obtaining the portfolio loss distributions in both models are described in the paper and numerical results are presented.", "title": "" }, { "docid": "dc7262a2e046bd5f633e9f5fbb5f1830", "text": "We investigate a dual-annular-ring CMUT array configuration for forward-looking intravascular ultrasound (FL-IVUS) imaging. The array consists of separate, concentric transmit and receive ring arrays built on the same silicon substrate. This configuration has the potential for independent optimization of each array and uses the silicon area more effectively without any particular drawback. We designed and fabricated a 1 mm diameter test array which consists of 24 transmit and 32 receive elements. We investigated synthetic phased array beamforming with a non-redundant subset of transmit-receive element pairs of the dual-annular-ring array. For imaging experiments, we designed and constructed a programmable FPGA-based data acquisition and phased array beamforming system. 
Pulse-echo measurements along with imaging simulations suggest that dual-ring-annular array should provide performance suitable for real-time FL-IVUS applications", "title": "" }, { "docid": "441f80a25e7a18760425be5af1ab981d", "text": "This paper proposes efficient algorithms for group sparse optimization with mixed `2,1-regularization, which arises from the reconstruction of group sparse signals in compressive sensing, and the group Lasso problem in statistics and machine learning. It is known that encoding the group information in addition to sparsity will lead to better signal recovery/feature selection. The `2,1-regularization promotes group sparsity, but the resulting problem, due to the mixed-norm structure and possible grouping irregularity, is considered more difficult to solve than the conventional `1-regularized problem. Our approach is based on a variable splitting strategy and the classic alternating direction method (ADM). Two algorithms are presented, one derived from the primal and the other from the dual of the `2,1-regularized problem. The convergence of the proposed algorithms is guaranteed by the existing ADM theory. General group configurations such as overlapping groups and incomplete covers can be easily handled by our approach. Computational results show that on random problems the proposed ADM algorithms exhibit good efficiency, and strong stability and robustness.", "title": "" }, { "docid": "6537921976c2779d1e7d921c939ec64d", "text": "Stencil computation sweeps over a spatial grid over multiple time steps to perform nearest-neighbor computations. The bandwidth-to-compute requirement for a large class of stencil kernels is very high, and their performance is bound by the available memory bandwidth. Since memory bandwidth grows slower than compute, the performance of stencil kernels will not scale with increasing compute density. We present a novel 3.5D-blocking algorithm that performs 2.5D-spatial and temporal blocking of the input grid into on-chip memory for both CPUs and GPUs. The resultant algorithm is amenable to both thread- level and data-level parallelism, and scales near-linearly with the SIMD width and multiple-cores. Our performance numbers are faster or comparable to state-of-the-art-stencil implementations on CPUs and GPUs. Our implementation of 7-point-stencil is 1.5X-faster on CPUs, and 1.8X faster on GPUs for single- precision floating point inputs than previously reported numbers. For Lattice Boltzmann methods, the corresponding speedup number on CPUs is 2.1X.", "title": "" }, { "docid": "055cb9aca6b16308793944154dc7866a", "text": "Learning systems depend on three interrelated components: topologies, cost/performance functions, and learning algorithms. Topologies provide the constraints for the mapping, and the learning algorithms offer the means to find an optimal solution; but the solution is optimal with respect to what? Optimality is characterized by the criterion and in neural network literature, this is the least addressed component, yet it has a decisive influence in generalization performance. Certainly, the assumptions behind the selection of a criterion should be better understood and investigated. Traditionally, least squares has been the benchmark criterion for regression problems; considering classification as a regression problem towards estimating class posterior probabilities, least squares has been employed to train neural network and other classifier topologies to approximate correct labels. 
The main motivation to utilize least squares in regression simply comes from the intellectual comfort this criterion provides due to its success in traditional linear least squares regression applications – which can be reduced to solving a system of linear equations. For nonlinear regression, the assumption of Gaussianity for the measurement error combined with the maximum likelihood principle could be emphasized to promote this criterion. In nonparametric regression, least squares principle leads to the conditional expectation solution, which is intuitively appealing. Although these are good reasons to use the mean squared error as the cost, it is inherently linked to the assumptions and habits stated above. Consequently, there is information in the error signal that is not captured during the training of nonlinear adaptive systems under non-Gaussian distribution conditions when one insists on second-order statistical criteria. This argument extends to other linear-second-order techniques such as principal component analysis (PCA), linear discriminant analysis (LDA), and canonical correlation analysis (CCA). Recent work tries to generalize these techniques to nonlinear scenarios by utilizing kernel techniques or other heuristics. This begs the question: what other alternative cost functions could be used to train adaptive systems and how could we establish rigorous techniques for extending useful concepts from linear and second-order statistical techniques to nonlinear and higher-order statistical learning methodologies?", "title": "" } ]
scidocsrr
6ad5201c31f61f26b196a9d147a81a89
A survey of intrusion detection systems in wireless sensor networks
[ { "docid": "66b337e0b6b2d28f7414cf5f88a724a0", "text": "Sensor networks are currently an active research area mainly due to the potential of their applications. In this paper we investigate the use of Wireless Sensor Networks (WSN) for air pollution monitoring in Mauritius. With the fast growing industrial activities on the island, the problem of air pollution is becoming a major concern for the health of the population. We proposed an innovative system named Wireless Sensor Network Air Pollution Monitoring System (WAPMS) to monitor air pollution in Mauritius through the use of wireless sensors deployed in huge numbers around the island. The proposed system makes use of an Air Quality Index (AQI) which is presently not available in Mauritius. In order to improve the efficiency of WAPMS, we have designed and implemented a new data aggregation algorithm named Recursive Converging Quartiles (RCQ). The algorithm is used to merge data to eliminate duplicates, filter out invalid readings and summarise them into a simpler form which significantly reduce the amount of data to be transmitted to the sink and thus saving energy. For better power management we used a hierarchical routing protocol in WAPMS and caused the motes to sleep during idle time.", "title": "" } ]
[ { "docid": "227786365219fe1efab6414bae0d8cdb", "text": "Predicting the occurrence of links is a fundamental problem in networks. In the link prediction problem we are given a snapshot of a network and would like to infer which interactions among existing members are likely to occur in the near future or which existing interactions are we missing. Although this problem has been extensively studied, the challenge of how to effectively combine the information from the network structure with rich node and edge attribute data remains largely open.\n We develop an algorithm based on Supervised Random Walks that naturally combines the information from the network structure with node and edge level attributes. We achieve this by using these attributes to guide a random walk on the graph. We formulate a supervised learning task where the goal is to learn a function that assigns strengths to edges in the network such that a random walker is more likely to visit the nodes to which new links will be created in the future. We develop an efficient training algorithm to directly learn the edge strength estimation function.\n Our experiments on the Facebook social graph and large collaboration networks show that our approach outperforms state-of-the-art unsupervised approaches as well as approaches that are based on feature extraction.", "title": "" }, { "docid": "16987d81cd90db3c0abe2631de9e737c", "text": "Docker containers are becoming an attractive implementation choice for next-generation microservices-based applications. When provisioning such an application, container (microservice) instances need to be created from individual container images. Starting a container on a node, where images are locally available, is fast but it may not guarantee the quality of service due to insufficient resources. When a collection of nodes are available, one can select a node with sufficient resources. However, if the selected node does not have the required image, downloading the image from a different registry increases the provisioning time. Motivated by these observations, in this paper, we present CoMICon, a system for co-operative management of Docker images among a set of nodes. The key features of CoMICon are: (1) it enables a co-operative registry among a set of nodes, (2) it can store or delete images partially in the form of layers, (3) it facilitates the transfer of image layers between registries, and (4) it enables distributed pull of an image while starting a container. Using these features, we describe—(i) high availability management of images and (ii) provisioning management of distributed microservices based applications. We extensively evaluate the performance of CoMICon using 142 real, publicly available images from Docker hub. In contrast to state-of-the-art full image based approach, CoMICon can increase the number of highly available images up to 3x while reducing the application provisioning time by 28% on average.", "title": "" }, { "docid": "56c30ddf0aedfb0f13885d90e22e6537", "text": "A single-pole double-throw novel switch device in0.18¹m SOI complementary metal-oxide semiconductor(CMOS) process is developed for 0.9 Ghz wireless GSMsystems. The layout of the device is optimized keeping inmind the parameters of interest for the RF switch. A subcircuitmodel, with the standard surface potential (PSP) modelas the intrinsic FET model along with the parasitic elementsis built to predict the Ron and Coff of the switch. Themeasured data agrees well with the model. 
The eight FETstacked switch achieved an Ron of 2.5 ohms and an Coff of180 fF.", "title": "" }, { "docid": "3fa8b8a93716a85f8573bd1cb8d215f2", "text": "Vision-based research for intelligent vehicles have traditionally focused on specific regions around a vehicle, such as a front looking camera for, e.g., lane estimation. Traffic scenes are complex and vital information could be lost in unobserved regions. This paper proposes a framework that uses four visual sensors for a full surround view of a vehicle in order to achieve an understanding of surrounding vehicle behaviors. The framework will assist the analysis of naturalistic driving studies by automating the task of data reduction of the observed trajectories. To this end, trajectories are estimated using a vehicle detector together with a multiperspective optimized tracker in each view. The trajectories are transformed to a common ground plane, where they are associated between perspectives and analyzed to reveal tendencies around the ego-vehicle. The system is tested on sequences from 2.5 h of drive on US highways. The multiperspective tracker is tested in each view as well as for the ability to associate vehicles bet-ween views with a 92% recall score. A case study of vehicles approaching from the rear shows certain patterns in behavior that could potentially influence the ego-vehicle.", "title": "" }, { "docid": "e393cf414910dbf50ac18d2ad0f2cd15", "text": "Training relation extractors for the purpose of automated knowledge base population requires the availability of sufficient training data. The amount of manual labeling can be significantly reduced by applying distant supervision, which generates training data by aligning large text corpora with existing knowledge bases. This typically results in a highly noisy training set, where many training sentences do not express the intended relation. In this paper, we propose to combine distant supervision with minimal human supervision by annotating features (in particular shortest dependency paths) rather than complete relation instances. Such feature labeling eliminates noise from the initial training set, resulting in a significant increase of precision at the expense of recall. We further improve on this approach by introducing the Semantic Label Propagation (SLP) method, which uses the similarity between low-dimensional representations of candidate training instances to again extend the (filtered) training set in order to increase recall while maintaining high precision. Our strategy is evaluated on an established test collection designed for knowledge base population (KBP) from the TAC KBP English slot filling task. The experimental results show that SLP leads to substantial performance gains when compared to existing approaches while requiring an almost negligible human annotation effort.", "title": "" }, { "docid": "747319dc1492cf26e9b9112e040cbba7", "text": "Markerless tracking of hands and fingers is a promising enabler for human-computer interaction. However, adoption has been limited because of tracking inaccuracies, incomplete coverage of motions, low framerate, complex camera setups, and high computational requirements. In this paper, we present a fast method for accurately tracking rapid and complex articulations of the hand using a single depth camera. Our algorithm uses a novel detectionguided optimization strategy that increases the robustness and speed of pose estimation. In the detection step, a randomized decision forest classifies pixels into parts of the hand. 
In the optimization step, a novel objective function combines the detected part labels and a Gaussian mixture representation of the depth to estimate a pose that best fits the depth. Our approach needs comparably less computational resources which makes it extremely fast (50 fps without GPU support). The approach also supports varying static, or moving, camera-to-scene arrangements. We show the benefits of our method by evaluating on public datasets and comparing against previous work.", "title": "" }, { "docid": "d8484cc7973882777f65a28fcdbb37be", "text": "The reported power analysis attacks on hardware implementations of the MICKEY family of streams ciphers require a large number of power traces. The primary motivation of our work is to break an implementation of the cipher when only a limited number of power traces can be acquired by an adversary. In this paper, we propose a novel approach to mount a Template attack (TA) on MICKEY-128 2.0 stream cipher using Particle Swarm Optimization (PSO) generated initialization vectors (IVs). In addition, we report the results of power analysis against a MICKEY-128 2.0 implementation on a SASEBO-GII board to demonstrate our proposed attack strategy. The captured power traces were analyzed using Least Squares Support Vector Machine (LS-SVM) learning algorithm based binary classifiers to segregate the power traces into the respective Hamming distance (HD) classes. The outcomes of the experiments reveal that our proposed power analysis attack strategy requires a much lesser number of IVs compared to a standard Correlation Power Analysis (CPA) attack on MICKEY-128 2.0 during the key loading phase of the cipher.", "title": "" }, { "docid": "d212f981eb8cc6054b2651009179b722", "text": "A sixth-order 10.7-MHz bandpass switched-capacitor filter based on a double terminated ladder filter is presented. The filter uses a multipath operational transconductance amplifier (OTA) that presents both better accuracy and higher slew rate than previously reported Class-A OTA topologies. Design techniques based on charge cancellation and slower clocks are used to reduce the overall capacitance from 782 down to 219 unity capacitors. The filter's center frequency and bandwidth are 10.7 MHz and 400 kHz, respectively, and a passband ripple of 1 dB in the entire passband. The quality factor of the resonators used as filter terminations is around 32. The measured (filter + buffer) third-intermodulation (IM3) distortion is less than -40 dB for a two-tone input signal of +3-dBm power level each. The signal-to-noise ratio is roughly 58 dB while the IM3 is -45 dB; the power consumption for the standalone filter is 42 mW. The chip was fabricated in a 0.35-mum CMOS process; filter's area is 0.84 mm2", "title": "" }, { "docid": "c861009ed309b208218182e60b126228", "text": "We present a novel beam-search decoder for grammatical error correction. The decoder iteratively generates new hypothesis corrections from current hypotheses and scores them based on features of grammatical correctness and fluency. These features include scores from discriminative classifiers for specific error categories, such as articles and prepositions. Unlike all previous approaches, our method is able to perform correction of whole sentences with multiple and interacting errors while still taking advantage of powerful existing classifier approaches. 
Our decoder achieves an F1 correction score significantly higher than all previous published scores on the Helping Our Own (HOO) shared task data set.", "title": "" }, { "docid": "4927fee47112be3d859733c498fbf594", "text": "To design effective tools for detecting and recovering from software failures requires a deep understanding of software bug characteristics. We study software bug characteristics by sampling 2,060 real world bugs in three large, representative open-source projects—the Linux kernel, Mozilla, and Apache. We manually study these bugs in three dimensions—root causes, impacts, and components. We further study the correlation between categories in different dimensions, and the trend of different types of bugs. The findings include: (1) semantic bugs are the dominant root cause. As software evolves, semantic bugs increase, while memory-related bugs decrease, calling for more research effort to address semantic bugs; (2) the Linux kernel operating system (OS) has more concurrency bugs than its non-OS counterparts, suggesting more effort into detecting concurrency bugs in operating system code; and (3) reported security bugs are increasing, and the majority of them are caused by semantic bugs, suggesting more support to help developers diagnose and fix security bugs, especially semantic security bugs. In addition, to reduce the manual effort in building bug benchmarks for evaluating bug detection and diagnosis tools, we use machine learning techniques to classify 109,014 bugs automatically.", "title": "" }, { "docid": "caa35f58e9e217fd45daa2e49c4a4cde", "text": "Despite its linguistic complexity, the Horn of Africa region includes several major languages with more than 5 million speakers, some crossing the borders of multiple countries. All of these languages have official status in regions or nations and are crucial for development; yet computational resources for the languages remain limited or non-existent. Since these languages are complex morphologically, software for morphological analysis and generation is a necessary first step toward nearly all other applications. This paper describes a resource for morphological analysis and generation for three of the most important languages in the Horn of Africa, Amharic, Tigrinya, and Oromo. 1 Language in the Horn of Africa The Horn of Africa consists politically of four modern nations, Ethiopia, Somalia, Eritrea, and Djibouti. As in most of sub-Saharan Africa, the linguistic picture in the region is complex. The great majority of people are speakers of AfroAsiatic languages belonging to three sub-families: Semitic, Cushitic, and Omotic. Approximately 75% of the population of almost 100 million people are native speakers of four languages: the Cushitic languages Oromo and Somali and the Semitic languages Amharic and Tigrinya. Many others speak one or the other of these languages as second languages. All of these languages have official status at the national or regional level. All of the languages of the region, especially the Semitic languages, are characterized by relatively complex morphology. For such languages, nearly all forms of language technology depend on the existence of software for analyzing and generating word forms. As with most other subSaharan languages, this software has previously not been available. This paper describes a set of Python programs called HornMorpho that address this lack for three of the most important languages, Amharic, Tigrinya, and Oromo. 
2 Morphological processing 2.1 Finite state morphology Morphological analysis is the segmentation of words into their component morphemes and the assignment of grammatical morphemes to grammatical categories and lexical morphemes to lexemes. Morphological generation is the reverse process. Both processes relate a surface level to a lexical level. The relationship between the levels has traditionally been viewed within linguistics in terms of an ordered series of phonological rules. Within computational morphology, a very significant advance came with the demonstration that phonological rules could be implemented as finite state transducers (Kaplan and Kay, 1994) (FSTs) and that the rule ordering could be dispensed with using FSTs that relate the surface and lexical levels directly (Koskenniemi, 1983), so-called “two-level” morphology. A second important advance was the recognition by Karttunen et al. (1992) that a cascade of composed FSTs could implement the two-level model. This made possible quite complex finite state systems, including ordered alternation rules representing context-sensitive variation in the phonological or orthographic shape of morphemes, the morphotactics characterizing the possible sequences of morphemes (in canonical form) for a given word class, and a lexicon. The key feature of such systems is that, even though the FSTs making up the cascade must be composed in a particular order, the result of composition is a single FST relating surface and lexical levels directly, as in two-level morphology. Because of the invertibility of FSTs, it is a simple matter to convert an analysis FST (surface input to lexical output) to one that performs generation (lexical input to surface output). [Figure 1: Basic architecture of lexical FSTs for morphological analysis and generation. Each rectangle represents an FST; the outermost rectangle is the full FST that is actually used for processing. “.o.” represents composition of FSTs, “+” concatenation of FSTs.] This basic architecture, illustrated in Figure 1, consisting of a cascade of composed FSTs representing (1) alternation rules and (2) morphotactics, including a lexicon of stems or roots, is the basis for the system described in this paper. We may also want to handle words whose roots or stems are not found in the lexicon, especially when the available set of known roots or stems is limited. In such cases the lexical component is replaced by a phonotactic component characterizing the possible shapes of roots or stems. Such a “guesser” analyzer (Beesley and Karttunen, 2003) analyzes words with unfamiliar roots or stems by positing possible roots or stems. 2.2 Semitic morphology These ideas have revolutionized computational morphology, making languages with complex word structure, such as Finnish and Turkish, far more amenable to analysis by traditional computational techniques. However, finite state morphology is inherently biased to view morphemes as sequences of characters or phones and words as concatenations of morphemes. This presents problems in the case of non-concatenative morphology, for example, discontinuous morphemes and the template morphology that characterizes Semitic languages such as Amharic and Tigrinya. The stem of a Semitic verb consists of a root, essentially a sequence of consonants, and a template that inserts other segments between the root consonants and possibly copies certain of the consonants.
For example, the Amharic verb root sbr ‘break’ can combine with roughly 50 different templates to form stems in words such as y1-sEbr-al ‘he breaks’, tEsEbbEr-E ‘it was broken’, l-assEbb1r-Ew ‘let me cause him to break something’, sEbabar-i ‘broken into many pieces’. A number of different additions to the basic FST framework have been proposed to deal with non-concatenative morphology, all remaining finite state in their complexity. A discussion of the advantages and drawbacks of these different proposals is beyond the scope of this paper. The approach used in our system is one first proposed by Amtrup (2003), based in turn on the well studied formalism of weighted FSTs. In brief, in Amtrup’s approach, each of the arcs in a transducer may be “weighted” with a feature structure, that is, a set of grammatical feature-value pairs. As the arcs in an FST are traversed, a set of feature-value pairs is accumulated by unifying the current set with whatever appears on the arcs along the path through the transducer. These feature-value pairs represent a kind of memory for the path that has been traversed but without the power of a stack. Any arc whose feature structure fails to unify with the current set of feature-value pairs cannot be traversed. The result of traversing such an FST during morphological analysis is not only an output character sequence, representing the root of the word, but a set of feature-value pairs that represents the grammatical structure of the input word. In the generation direction, processing begins with a root and a set of feature-value pairs, representing the desired grammatical structure of the output word, and the output is the surface wordform corresponding to the input root and grammatical structure. In Gasser (2009) we showed how Amtrup’s technique can be applied to the analysis and generation of Tigrinya verbs. For an alternate approach to handling the morphotactics of a subset of Amharic verbs, within the context of the Xerox finite state tools (Beesley and Karttunen, 2003), see Amsalu and Demeke (2006). Although Oromo, a Cushitic language, does not exhibit the root+template morphology that is typical of Semitic languages, it is also convenient to handle its morphology using the same technique because there are some long-distance dependencies and because it is useful to have the grammatical output that this approach yields for analysis.", "title": "" }, { "docid": "2b00f2b02fa07cdd270f9f7a308c52c5", "text": "A noninvasive and easy-operation measurement of the heart rate has great potential in home healthcare. We present a simple and high running efficiency method for measuring heart rate from a video. By only tracking one feature point which is selected from a small ROI (Region of Interest) in the head area, we extract trajectories of this point in both X-axis and Y-axis. After a series of processes including signal filtering, interpolation, the Independent Component Analysis (ICA) is used to obtain a periodic signal, and then the heart rate can be calculated. We evaluated on 10 subjects and compared to a commercial heart rate measuring instrument (YUYUE YE680B) and achieved high degree of agreement.
A running time comparison experiment to the previous proposed motion-based method is carried out and the result shows that the time cost is greatly reduced in our method.", "title": "" }, { "docid": "e4a3dfe53a66d0affd73234761e7e0e2", "text": "BACKGROUND\nWhether cannabis can cause psychotic or affective symptoms that persist beyond transient intoxication is unclear. We systematically reviewed the evidence pertaining to cannabis use and occurrence of psychotic or affective mental health outcomes.\n\n\nMETHODS\nWe searched Medline, Embase, CINAHL, PsycINFO, ISI Web of Knowledge, ISI Proceedings, ZETOC, BIOSIS, LILACS, and MEDCARIB from their inception to September, 2006, searched reference lists of studies selected for inclusion, and contacted experts. Studies were included if longitudinal and population based. 35 studies from 4804 references were included. Data extraction and quality assessment were done independently and in duplicate.\n\n\nFINDINGS\nThere was an increased risk of any psychotic outcome in individuals who had ever used cannabis (pooled adjusted odds ratio=1.41, 95% CI 1.20-1.65). Findings were consistent with a dose-response effect, with greater risk in people who used cannabis most frequently (2.09, 1.54-2.84). Results of analyses restricted to studies of more clinically relevant psychotic disorders were similar. Depression, suicidal thoughts, and anxiety outcomes were examined separately. Findings for these outcomes were less consistent, and fewer attempts were made to address non-causal explanations, than for psychosis. A substantial confounding effect was present for both psychotic and affective outcomes.\n\n\nINTERPRETATION\nThe evidence is consistent with the view that cannabis increases risk of psychotic outcomes independently of confounding and transient intoxication effects, although evidence for affective outcomes is less strong. The uncertainty about whether cannabis causes psychosis is unlikely to be resolved by further longitudinal studies such as those reviewed here. However, we conclude that there is now sufficient evidence to warn young people that using cannabis could increase their risk of developing a psychotic illness later in life.", "title": "" }, { "docid": "9db0e9b90db4d7fd9c0f268b5ee9b843", "text": "Traditionally, the evaluation of surgical procedures in virtual reality (VR) simulators has been restricted to their individual technical aspects disregarding the procedures carried out by teams. However, some decision models have been proposed to support the collaborative training evaluation process of surgical teams in collaborative virtual environments. The main objective of this article is to present a collaborative simulator based on VR, named SimCEC, as a potential solution for education, training, and evaluation in basic surgical routines for teams of undergraduate students. The simulator considers both tasks performed individually and those carried in a collaborative manner. The main contribution of this work is to improve the discussion about VR simulators requirements (design and implementation) to provide team training in relevant topics, such as users’ feedback in real time, collaborative training in networks, interdisciplinary integration of curricula, and continuous evaluation.", "title": "" }, { "docid": "0e54be77f69c6afbc83dfabc0b8b4178", "text": "Spinal muscular atrophy (SMA) is a neurodegenerative disease characterized by loss of motor neurons in the anterior horn of the spinal cord and resultant weakness. 
The most common form of SMA, accounting for 95% of cases, is autosomal recessive proximal SMA associated with mutations in the survival of motor neurons (SMN1) gene. Relentless progress during the past 15 years in the understanding of the molecular genetics and pathophysiology of SMA has resulted in a unique opportunity for rational, effective therapeutic trials. The goal of SMA therapy is to increase the expression levels of the SMN protein in the correct cells at the right time. With this target in sight, investigators can now effectively screen potential therapies in vitro, test them in accurate, reliable animal models, move promising agents forward to clinical trials, and accurately diagnose patients at an early or presymptomatic stage of disease. A major challenge for the SMA community will be to prioritize and develop the most promising therapies in an efficient, timely, and safe manner with the guidance of the appropriate regulatory agencies. This review will take a historical perspective to highlight important milestones on the road to developing effective therapies for SMA.", "title": "" }, { "docid": "a8b8f36f7093c79759806559fb0f0cf4", "text": "Cooperative adaptive cruise control (CACC) is an extension of ACC. In addition to measuring the distance to a predecessor, a vehicle can also exchange information with a predecessor by wireless communication. This enables a vehicle to follow its predecessor at a closer distance under tighter control. This paper focuses on the impact of CACC on traffic-flow characteristics. It uses the traffic-flow simulation model MIXIC that was specially designed to study the impact of intelligent vehicles on traffic flow. The authors study the impacts of CACC for a highway-merging scenario from four to three lanes. The results show an improvement of traffic-flow stability and a slight increase in traffic-flow efficiency compared with the merging scenario without equipped vehicles", "title": "" }, { "docid": "a0c9d3c2b14395a6d476b12c5e8b28b0", "text": "Undergraduate research experiences enhance learning and professional development, but providing effective and scalable research training is often limited by practical implementation and orchestration challenges. We demonstrate Agile Research Studios (ARS)---a socio-technical system that expands research training opportunities by supporting research communities of practice without increasing faculty mentoring resources.", "title": "" }, { "docid": "2ca54e2e53027eb2ff441f0e2724d68f", "text": "Thanks to rapid advances in technologies like GPS and Wi-Fi positioning, smartphone users are able to determine their location almost everywhere they go. This is not true, however, of people who are traveling in underground public transportation networks, one of the few types of high-traffic areas where smartphones do not have access to accurate position information. In this paper, we introduce the problem of underground transport positioning on smartphones and present SubwayPS, an accelerometer-based positioning technique that allows smartphones to determine their location substantially better than baseline approaches, even deep beneath city streets. 
We highlight several immediate applications of positioning in subway networks in domains ranging from mobile advertising to mobile maps and present MetroNavigator, a proof-of-concept smartphone and smartwatch app that notifies users of upcoming points-of-interest and alerts them when it is time to get ready to exit the train.", "title": "" }, { "docid": "cc3d14ebbba039241634d45dad8bfb03", "text": "Digital humanities scholars strongly need a corpus exploration method that provides topics easier to interpret than standard LDA topic models. To move towards this goal, here we propose a combination of two techniques, called Entity Linking and Labeled LDA. Our method identifies in an ontology a series of descriptive labels for each document in a corpus. Then it generates a specific topic for each label. Having a direct relation between topics and labels makes interpretation easier; using an ontology as background knowledge limits label ambiguity. As our topics are described with a limited number of clear-cut labels, they promote interpretability and support the quantitative evaluation of the obtained results. We illustrate the potential of the approach by applying it to three datasets, namely the transcription of speeches from the European Parliament fifth mandate, the Enron Corpus and the Hillary Clinton Email Dataset. While some of these resources have already been adopted by the natural language processing community, they still hold a large potential for humanities scholars, part of which could be exploited in studies that will adopt the fine-grained exploration method presented in this paper.", "title": "" } ]
scidocsrr
3efaabcd2607368d2952f28610f436b4
Concept Hierarchy Extraction from Textbooks
[ { "docid": "9d918a69a2be2b66da6ecf1e2d991258", "text": "We designed and implemented TAGME, a system that is able to efficiently and judiciously augment a plain-text with pertinent hyperlinks to Wikipedia pages. The specialty of TAGME with respect to known systems [5,8] is that it may annotate texts which are short and poorly composed, such as snippets of search-engine results, tweets, news, etc.. This annotation is extremely informative, so any task that is currently addressed using the bag-of-words paradigm could benefit from using this annotation to draw upon (the millions of) Wikipedia pages and their inter-relations.", "title": "" }, { "docid": "74d45402acc9e05c6a8734f114253eea", "text": "Name ambiguity problem has raised an urgent demand for efficient, high-quality named entity disambiguation methods. The key problem of named entity disambiguation is to measure the similarity between occurrences of names. The traditional methods measure the similarity using the bag of words (BOW) model. The BOW, however, ignores all the semantic relations such as social relatedness between named entities, associative relatedness between concepts, polysemy and synonymy between key terms. So the BOW cannot reflect the actual similarity. Some research has investigated social networks as background knowledge for disambiguation. Social networks, however, can only capture the social relatedness between named entities, and often suffer the limited coverage problem.\n To overcome the previous methods' deficiencies, this paper proposes to use Wikipedia as the background knowledge for disambiguation, which surpasses other knowledge bases by the coverage of concepts, rich semantic information and up-to-date content. By leveraging Wikipedia's semantic knowledge like social relatedness between named entities and associative relatedness between concepts, we can measure the similarity between occurrences of names more accurately. In particular, we construct a large-scale semantic network from Wikipedia, in order that the semantic knowledge can be used efficiently and effectively. Based on the constructed semantic network, a novel similarity measure is proposed to leverage Wikipedia semantic knowledge for disambiguation. The proposed method has been tested on the standard WePS data sets. Empirical results show that the disambiguation performance of our method gets 10.7% improvement over the traditional BOW based methods and 16.7% improvement over the traditional social network based methods.", "title": "" } ]
[ { "docid": "a7c79045bcbd9fac03015295324745e3", "text": "Image saliency detection has recently witnessed rapid progress due to deep convolutional neural networks. However, none of the existing methods is able to identify object instances in the detected salient regions. In this paper, we present a salient instance segmentation method that produces a saliency mask with distinct object instance labels for an input image. Our method consists of three steps, estimating saliency map, detecting salient object contours and identifying salient object instances. For the first two steps, we propose a multiscale saliency refinement network, which generates high-quality salient region masks and salient object contours. Once integrated with multiscale combinatorial grouping and a MAP-based subset optimization framework, our method can generate very promising salient object instance segmentation results. To promote further research and evaluation of salient instance segmentation, we also construct a new database of 1000 images and their pixelwise salient instance annotations. Experimental results demonstrate that our proposed method is capable of achieving state-of-the-art performance on all public benchmarks for salient region detection as well as on our new dataset for salient instance segmentation.", "title": "" }, { "docid": "1e82d6acef7e5b5f0c2446d62cf03415", "text": "The purpose of this research is to characterize and model the self-heating effect of multi-finger n-channel MOSFETs. Self-heating effect (SHE) does not need to be analyzed for single-finger bulk CMOS devices. However, it should be considered for multi-finger n-channel MOSFETs that are mainly used for RF-CMOS applications. The SHE mechanism was analyzed based on a two-dimensional device simulator. A compact model, which is a BSIM6 model with additional equations, was developed and implemented in a SPICE simulator with Verilog-A language. Using the proposed model and extracted parameters excellent agreements have been obtained between measurements and simulations in DC and S-parameter domain whereas the original BSIM6 shows inconsistency between static DC and small signal AC simulations due to the lack of SHE. Unlike the generally-used sub-circuits based SHE models including in BSIMSOI models, the proposed SHE model can converge in large scale circuits.", "title": "" }, { "docid": "2e4c1818d7174be02306c5059379337b", "text": "Mid-level or semi-local features learnt using class-level information are potentially more distinctive than the traditional low-level local features constructed in a purely bottom-up fashion. At the same time they preserve some of the robustness properties with respect to occlusions and image clutter. In this paper we propose a new and effective scheme for extracting mid-level features for image classification, based on relevant pattern mining. In particular, we mine relevant patterns of local compositions of densely sampled low-level features. We refer to the new set of obtained patterns as Frequent Local Histograms or FLHs. During this process, we pay special attention to keeping all the local histogram information and to selecting the most relevant reduced set of FLH patterns for classification. The careful choice of the visual primitives and an extension to exploit both local and global spatial information allow us to build powerful bag-of-FLH-based image representations. 
We show that these bag-of-FLHs are more discriminative than traditional bag-of-words and yield state-of-the-art results on various image classification benchmarks, including Pascal VOC.", "title": "" }, { "docid": "39861e2759b709883f3d37a65d13834b", "text": "BACKGROUND\nDeveloping countries account for 99 percent of maternal deaths annually. While increasing service availability and maintaining acceptable quality standards, it is important to assess maternal satisfaction with care in order to make it more responsive and culturally acceptable, ultimately leading to enhanced utilization and improved outcomes. At a time when global efforts to reduce maternal mortality have been stepped up, maternal satisfaction and its determinants also need to be addressed by developing country governments. This review seeks to identify determinants of women's satisfaction with maternity care in developing countries.\n\n\nMETHODS\nThe review followed the methodology of systematic reviews. Public health and social science databases were searched. English articles covering antenatal, intrapartum or postpartum care, for either home or institutional deliveries, reporting maternal satisfaction from developing countries (World Bank list) were included, with no year limit. Out of 154 shortlisted abstracts, 54 were included and 100 excluded. Studies were extracted onto structured formats and analyzed using the narrative synthesis approach.\n\n\nRESULTS\nDeterminants of maternal satisfaction covered all dimensions of care across structure, process and outcome. Structural elements included good physical environment, cleanliness, and availability of adequate human resources, medicines and supplies. Process determinants included interpersonal behavior, privacy, promptness, cognitive care, perceived provider competency and emotional support. Outcome related determinants were health status of the mother and newborn. Access, cost, socio-economic status and reproductive history also influenced perceived maternal satisfaction. Process of care dominated the determinants of maternal satisfaction in developing countries. Interpersonal behavior was the most widely reported determinant, with the largest body of evidence generated around provider behavior in terms of courtesy and non-abuse. Other aspects of interpersonal behavior included therapeutic communication, staff confidence and competence and encouragement to laboring women.\n\n\nCONCLUSIONS\nQuality improvement efforts in developing countries could focus on strengthening the process of care. Special attention is needed to improve interpersonal behavior, as evidence from the review points to the importance women attach to being treated respectfully, irrespective of socio-cultural or economic context. Further research on maternal satisfaction is required on home deliveries and relative strength of various determinants in influencing maternal satisfaction.", "title": "" }, { "docid": "d6d30dbba9153bcc86ed8a4337821b78", "text": "Multiplayer video streaming scenario can be seen everywhere today as the video traffic is becoming the “killer” traffic over the Internet. The Quality of Experience fairness is critical for not only the users but also the content providers and ISP. Consequently, a QoE fairness adaptive method of multiplayer video streaming is of great importance. Previous studies focus on client-side solutions without network global view or network-assisted solution with extra reaction to client. 
In this paper, a pure network-based architecture using SDN is designed for monitoring network global performance information. With the flexible programming and network mastery capacity of SDN, we propose an online Q-learning-based dynamic bandwidth allocation algorithm Q-FDBA with the goal of QoE fairness. The results show the Q-FDBA could adaptively react to high frequency of bottleneck bandwidth switches and achieve better QoE fairness within a certain time dimension.", "title": "" }, { "docid": "05622842ebd89777570d7dc3c36a0693", "text": "Online antisocial behavior, such as cyberbullying, harassment, and trolling, is a widespread problem that threatens free discussion and has negative physical and mental health consequences for victims and communities. While prior work has proposed automated methods to identify hostile comments in online discussions, these methods work retrospectively on comments that have already been posted, making it difficult to intervene before an interaction escalates. In this paper we instead consider the problem of forecasting future hostilities in online discussions, which we decompose into two tasks: (1) given an initial sequence of non-hostile comments in a discussion, predict whether some future comment will contain hostility; and (2) given the first hostile comment in a discussion, predict whether this will lead to an escalation of hostility in subsequent comments. Thus, we aim to forecast both the presence and intensity of hostile comments based on linguistic and social features from earlier comments. To evaluate our approach, we introduce a corpus of over 30K annotated Instagram comments from over 1,100 posts. Our approach is able to predict the appearance of a hostile comment on an Instagram post ten or more hours in the future with an AUC of .82 (task 1), and can furthermore distinguish between high and low levels of future hostility with an AUC of .91 (task 2).", "title": "" }, { "docid": "16b78e470af247cc65fd1ef4e17ace4b", "text": "OBJECTIVES\nTo examine the effectiveness of using the 'mind map' study technique to improve factual recall from written information.\n\n\nDESIGN\nTo obtain baseline data, subjects completed a short test based on a 600-word passage of text prior to being randomly allocated to form two groups: 'self-selected study technique' and 'mind map'. After a 30-minute interval the self-selected study technique group were exposed to the same passage of text previously seen and told to apply existing study techniques. Subjects in the mind map group were trained in the mind map technique and told to apply it to the passage of text. Recall was measured after an interfering task and a week later. Measures of motivation were taken.\n\n\nSETTING\nBarts and the London School of Medicine and Dentistry, University of London.\n\n\nSUBJECTS\n50 second- and third-year medical students.\n\n\nRESULTS\nRecall of factual material improved for both the mind map and self-selected study technique groups at immediate test compared with baseline. However this improvement was only robust after a week for those in the mind map group. At 1 week, the factual knowledge in the mind map group was greater by 10% (adjusting for baseline) (95% CI -1% to 22%). However motivation for the technique used was lower in the mind map group; if motivation could have been made equal in the groups, the improvement with mind mapping would have been 15% (95% CI 3% to 27%).\n\n\nCONCLUSION\nMind maps provide an effective study technique when applied to written material. 
However before mind maps are generally adopted as a study technique, consideration has to be given towards ways of improving motivation amongst users.", "title": "" }, { "docid": "028cdddc5d61865d0ea288180cef91c0", "text": "This paper investigates the use of Convolutional Neural Networks for classification of painted symbolic road markings. Previous work on road marking recognition is mostly based on either template matching or on classical feature extraction followed by classifier training which is not always effective and based on feature engineering. However, with the rise of deep neural networks and their success in ADAS systems, it is natural to investigate the suitability of CNN for road marking recognition. Unlike others, our focus is solely on road marking recognition and not detection; which has been extensively explored and conventionally based on MSER feature extraction of the IPM images. We train five different CNN architectures with variable number of convolution/max-pooling and fully connected layers, and different resolution of road mark patches. We use a publicly available road marking data set and incorporate data augmentation to enhance the size of this data set which is required for training deep nets. The augmented data set is randomly partitioned in 70% and 30% for training and testing. The best CNN network results in an average recognition rate of 99.05% for 10 classes of road markings on the test set.", "title": "" }, { "docid": "fbc0784d94e09cab75ee5a970786c30b", "text": "Adequate conservation and management of shark populations is becoming increasingly important on a global scale, especially because many species are exceptionally vulnerable to overfishing. Yet, reported catch statistics for sharks are incomplete, and mortality estimates have not been available for sharks as a group. Here, the global catch and mortality of sharks from reported and unreported landings, discards, and shark finning are being estimated at 1.44 million metric tons for the year 2000, and at only slightly less in 2010 (1.41 million tons). Based on an analysis of average shark weights, this translates into a total annual mortality estimate of about 100 million sharks in 2000, and about 97 million sharks in 2010, with a total range of possible values between 63 and 273 million sharks per year. Further, the exploitation rate for sharks as a group was calculated by dividing two independent mortality estimates by an estimate of total global biomass. As an alternative approach, exploitation rates for individual shark populations were compiled and averaged from stock assessments and other published sources. The resulting three independent estimates of the average exploitation rate ranged between 6.4% and 7.9% of sharks killed per year. This exceeds the average rebound rate for many shark populations, estimated from the life history information on 62 shark species (rebound rates averaged 4.9% per year), and explains the ongoing declines in most populations for which data exist. The consequences of these unsustainable catch and mortality rates for marine ecosystems could be substantial. Global total shark mortality, therefore, needs to be reduced drastically in order to rebuild depleted populations and restore marine ecosystems with functional top predators. & 2013 Elsevier Ltd. 
All rights reserved.", "title": "" }, { "docid": "8683c83a7983d33242d46c16f6f06f72", "text": "Many engineering activities, including mechatronic design, require that a multidomain or ‘multi-physics’ system and its control system be designed as an integrated system. This contribution discusses the background and tools for a port-based approach to integrated modeling and simulation of physical systems and their controllers, with parameters that are directly related to the real-world system, thus improving insight and direct feedback on modeling decisions.", "title": "" }, { "docid": "093465aba11b82b768e4213b23c5911b", "text": "This paper describes the generation of large deformation diffeomorphisms $\phi: \Omega = [0,1]^3 \leftrightarrow \Omega$ for landmark matching generated as solutions to the transport equation $d\phi(x,t)/dt = \nu(\phi(x,t),t)$, $t \in [0,1]$, and $\phi(x,0) = x$, with the image map defined as $\phi(\cdot,1)$ and therefore controlled via the velocity field $\nu(\cdot,t)$, $t \in [0,1]$. Imagery are assumed characterized via sets of landmarks $\{x_n, y_n, n = 1, 2, \ldots, N\}$. The optimal diffeomorphic match is constructed to minimize a running smoothness cost $\|L\nu\|^2$ associated with a linear differential operator $L$ on the velocity field generating the diffeomorphism while simultaneously minimizing the matching end point condition of the landmarks. Both inexact and exact landmark matching is studied here. Given noisy landmarks $x_n$ matched to $y_n$ measured with error covariances $\Sigma_n$, then the matching problem is solved generating the optimal diffeomorphism $\phi(x,1) = \int_0^1 \nu(\phi(x,t),t)\,dt + x$ where $\nu(\cdot) = \arg\min_\nu \int_0^1 \int_\Omega \|L\nu(x,t)\|^2\,dx\,dt + \sum_{n=1}^{N} [y_n - \phi(x_n,1)]^T \Sigma_n^{-1} [y_n - \phi(x_n,1)]$. Conditions for the existence of solutions in the space of diffeomorphisms are established, with a gradient algorithm provided for generating the optimal flow solving the minimum problem. Results on matching two-dimensional (2-D) and three-dimensional (3-D) imagery are presented in the macaque monkey.", "title": "" }, { "docid": "ca1c232e84e7cb26af6852007f215715", "text": "Word embedding-based methods have received increasing attention for their flexibility and effectiveness in many natural language-processing (NLP) tasks, including Word Similarity (WS). However, these approaches rely on high-quality corpus and neglect prior knowledge. Lexicon-based methods concentrate on human’s intelligence contained in semantic resources, e.g., Tongyici Cilin, HowNet, and Chinese WordNet, but they have the drawback of being unable to deal with unknown words. This article proposes a three-stage framework for measuring the Chinese word similarity by incorporating prior knowledge obtained from lexicons and statistics into word embedding: in the first stage, we utilize retrieval techniques to crawl the contexts of word pairs from web resources to extend context corpus. In the next stage, we investigate three types of single similarity measurements, including lexicon similarities, statistical similarities, and embedding-based similarities. Finally, we exploit simple combination strategies with math operations and the counter-fitting combination strategy using optimization method. To demonstrate our system’s efficiency, comparable experiments are conducted on the PKU-500 dataset. Our final results are 0.561/0.516 of Spearman/Pearson rank correlation coefficient, which outperform the state-of-the-art performance to the best of our knowledge.
Experiment results on Chinese MC-30 and SemEval-2012 datasets show that our system also performs well on other Chinese datasets, which proves its transferability. Besides, our system is not language-specific and can be applied to other languages, e.g., English.", "title": "" }, { "docid": "ddc556ae150e165dca607e4a674583ae", "text": "Increasing patient numbers, changing demographics and altered patient expectations have all contributed to the current problem with 'overcrowding' in emergency departments (EDs). The problem has reached crisis level in a number of countries, with significant implications for patient safety, quality of care, staff 'burnout' and patient and staff satisfaction. There is no single, clear definition of the cause of overcrowding, nor a simple means of addressing the problem. For some hospitals, the option of ambulance diversion has become a necessity, as overcrowded waiting rooms and 'bed-block' force emergency staff to turn patients away. But what are the options when ambulance diversion is not possible? Christchurch Hospital, New Zealand is a tertiary level facility with an emergency department that sees on average 65,000 patients per year. There are no other EDs to whom patients can be diverted, and so despite admission rates from the ED of up to 48%, other options need to be examined. In order to develop a series of unified responses, which acknowledge the multifactorial nature of the problem, the Emergency Department Cardiac Analogy model of ED flow, was developed. This model highlights the need to intervene at each of three key points, in order to address the issue of overcrowding and its associated problems.", "title": "" }, { "docid": "af754985968db6b59b2c4f6affd370c6", "text": "Many real networks that are collected or inferred from data are incomplete due to missing edges. Missing edges can be inherent to the dataset (Facebook friend links will never be complete) or the result of sampling (one may only have access to a portion of the data). The consequence is that downstream analyses that \"consume\" the network will often yield less accurate results than if the edges were complete. Community detection algorithms, in particular, often suffer when critical intra-community edges are missing. We propose a novel consensus clustering algorithm to enhance community detection on incomplete networks. Our framework utilizes existing community detection algorithms that process networks imputed by our link prediction based sampling algorithm and merges their multiple partitions into a final consensus output. On average our method boosts performance of existing algorithms by 7% on artificial data and 17% on ego networks collected from Facebook.", "title": "" }, { "docid": "3c2b68ac95f1a9300585b73ca4b83122", "text": "The success of various applications including robotics, digital content creation, and visualization demand a structured and abstract representation of the 3D world from limited sensor data. Inspired by the nature of human perception of 3D shapes as a collection of simple parts, we explore such an abstract shape representation based on primitives. Given a single depth image of an object, we present 3DPRNN, a generative recurrent neural network that synthesizes multiple plausible shapes composed of a set of primitives. Our generative model encodes symmetry characteristics of common man-made objects, preserves long-range structural coherence, and describes objects of varying complexity with a compact representation. 
We also propose a method based on Gaussian Fields to generate a large scale dataset of primitive-based shape representations to train our network. We evaluate our approach on a wide range of examples and show that it outperforms nearest-neighbor based shape retrieval methods and is on-par with voxelbased generative models while using a significantly reduced parameter space.", "title": "" }, { "docid": "1fd87c65968630b6388985a41b7890ce", "text": "Cyber Defense Exercises have received much attention in recent years, and are increasingly becoming the cornerstone for ensuring readiness in this new domain. Crossed Swords is an exercise directed at training Red Team members for responsive cyber defense. However, prior iterations have revealed the need for automated and transparent real-time feedback systems to help participants improve their techniques and understand technical challenges. Feedback was too slow and players did not understand the visibility of their actions. We developed a novel and modular open-source framework to address this problem, dubbed Frankenstack. We used this framework during Crossed Swords 2017 execution and evaluated its effectiveness by interviewing participants and conducting an online survey. Due to the novelty of Red Team-centric exercises, very little academic research exists on providing real-time feedback during such exercises. Thus, this paper serves as a first foray into a novel research field.", "title": "" }, { "docid": "9075024a29f1c0c9ca3f2cc90059b7f1", "text": "Users often wish to participate in online groups anonymously, but misbehaving users may abuse this anonymity to spam or disrupt the group. Messaging protocols such as Mix-nets and DC-nets leave online groups vulnerable to denial-of-service and Sybil attacks, while accountable voting protocols are unusable or inefficient for general anonymous messaging. We present the first general messaging protocol that offers provable anonymity with accountability for moderate-size groups, and efficiently handles unbalanced loads where few members have much data to transmit in a given round. The N group members first cooperatively shuffle an N ×N matrix of pseudorandom seeds, then use these seeds in N “preplanned” DC-nets protocol runs. Each DC-nets run transmits the variable-length bulk data comprising one member’s message, using the minimum number of bits required for anonymity under our attack model. The protocol preserves message integrity and one-to-one correspondence between members and messages, makes denial-of-service attacks by members traceable to the culprit, and efficiently handles large and unbalanced message loads. A working prototype demonstrates the protocol’s practicality for anonymous messaging in groups of 40+ member nodes.", "title": "" }, { "docid": "a87c60deb820064abaa9093398937ff3", "text": "Cardiac arrhythmia is one of the most important indicators of heart disease. Premature ventricular contractions (PVCs) are a common form of cardiac arrhythmia caused by ectopic heartbeats. The detection of PVCs by means of ECG (electrocardiogram) signals is important for the prediction of possible heart failure. This study focuses on the classification of PVC heartbeats from ECG signals and, in particular, on the performance evaluation of selected features using genetic algorithms (GA) to the classification of PVC arrhythmia. 
The objective of this study is to apply GA as a feature selection method to select the best feature subset from 200 time series features and to integrate these best features to recognize PVC forms. Neural networks, support vector machines and k-nearest neighbour classification algorithms were used. Findings were expressed in terms of accuracy, sensitivity, and specificity for the MIT-BIH Arrhythmia Database. The results showed that the proposed model achieved higher accuracy rates than those of other works on this topic.", "title": "" }, { "docid": "cce513c48e630ab3f072f334d00b67dc", "text": "We consider two algorithms for on-line prediction based on a linear model. The algorithms are the well-known gradient descent (GD) algorithm and a new algorithm, which we call EG. They both maintain a weight vector using simple updates. For the GD algorithm, the update is based on subtracting the gradient of the squared error made on a prediction. The EG algorithm uses the components of the gradient in the exponents of factors that are used in updating the weight vector multiplicatively. We present worst-case loss bounds for EG and compare them to previously known bounds for the GD algorithm. The bounds suggest that the losses of the algorithms are in general incomparable, but EG has a much smaller loss if only few components of the input are relevant for the predictions. We have performed experiments which show that our worst-case upper bounds are quite tight already on simple artificial data. ] 1997 Academic Press", "title": "" }, { "docid": "df09834abe25199ac7b3205d657fffb2", "text": "In modern wireless communications products it is required to incorporate more and more different functions to comply with current market trends. A very attractive function with steadily growing market penetration is local positioning. To add this feature to low-cost mass-market devices without additional power consumption, it is desirable to use commercial communication chips and standards for localization of the wireless units. In this paper we present a concept to measure the distance between two IEEE 802.15.4 (ZigBee) compliant devices. The presented prototype hardware consists of a low- cost 2.45 GHz ZigBee chipset. For localization we use standard communication packets as transmit signals. Thus simultaneous data transmission and transponder localization is feasible. To achieve high positioning accuracy even in multipath environments, a coherent synthesis of measurements in multiple channels and a special signal phase evaluation concept is applied. With this technique the full available ISM bandwidth of 80 MHz is utilized. In first measurements with two different frequency references-a low-cost oscillator and a temperatur-compensated crystal oscillator-a positioning bias error of below 16 cm and 9 cm was obtained. The standard deviation was less than 3 cm and 1 cm, respectively. It is demonstrated that compared to signal correlation in time, the phase processing technique yields an accuracy improvement of roughly an order of magnitude.", "title": "" } ]
scidocsrr
c2998fed4e899382b5d39ff452daddc4
REINFORCED CONCRETE WALL RESPONSE UNDER UNI- AND BI-DIRECTIONAL LOADING
[ { "docid": "7a06c1b73662a377875da0ea2526c610", "text": "a Earthquake Engineering and Structural Dynamics Laboratory (EESD), School of Architecture, Civil and Environmental Engineering (ENAC), École Polytechnique Fédérale de Lausanne (EPFL), EPFL ENAC IIC EESD, GC B2 515, Station 18, CH – 1015 Lausanne, Switzerland b Earthquake Engineering and Structural Dynamics Laboratory (EESD), School of Architecture, Civil and Environmental Engineering (ENAC), École Polytechnique Fédérale de Lausanne (EPFL), EPFL ENAC IIC EESD, GC B2 504, Station 18, CH – 1015 Lausanne, Switzerland", "title": "" } ]
[ { "docid": "4b7e71b412770cbfe059646159ec66ca", "text": "We present empirical evidence to demonstrate that there is little or no difference between the Java Virtual Machine and the .NET Common Language Runtime, as regards the compilation and execution of object-oriented programs. Then we give details of a case study that proves the superiority of the Common Language Runtime as a target for imperative programming language compilers (in particular GCC).", "title": "" }, { "docid": "76f9b2059a99eb9cc1ed2d9dc5686724", "text": "This paper surveys the results of various studies on 3-D image coding. Themes are focused on efficient compression and display-independent representation of 3-D images. Most of the works on 3-D image coding have been concentrated on the compression methods tuned for each of the 3-D image formats (stereo pairs, multi-view images, volumetric images, holograms and so on). For the compression of stereo images, several techniques concerned with the concept of disparity compensation have been developed. For the compression of multi-view images, the concepts of disparity compensation and epipolar plane image (EPI) are the efficient ways of exploiting redundancies between multiple views. These techniques, however, heavily depend on the limited camera configurations. In order to consider many other multi-view configurations and other types of 3-D images comprehensively, more general platform for the 3-D image representation is introduced, aiming to outgrow the framework of 3-D “image” communication and to open up a novel field of technology, which should be called the “spatial” communication. Especially, the light ray based method has a wide range of application, including efficient transmission of the physical world, as well as integration of the virtual and physical worlds. key words: 3-D image coding, stereo images, multi-view images, panoramic images, volumetric images, holograms, displayindependent representation, light rays, spatial communication", "title": "" }, { "docid": "9490f117f153a16152237a5a6b08c0a3", "text": "Evidence from macaque monkey tracing studies suggests connectivity-based subdivisions within the precuneus, offering predictions for similar subdivisions in the human. Here we present functional connectivity analyses of this region using resting-state functional MRI data collected from both humans and macaque monkeys. Three distinct patterns of functional connectivity were demonstrated within the precuneus of both species, with each subdivision suggesting a discrete functional role: (i) the anterior precuneus, functionally connected with the superior parietal cortex, paracentral lobule, and motor cortex, suggesting a sensorimotor region; (ii) the central precuneus, functionally connected to the dorsolateral prefrontal, dorsomedial prefrontal, and multimodal lateral inferior parietal cortex, suggesting a cognitive/associative region; and (iii) the posterior precuneus, displaying functional connectivity with adjacent visual cortical regions. These functional connectivity patterns were differentiated from the more ventral networks associated with the posterior cingulate, which connected with limbic structures such as the medial temporal cortex, dorsal and ventromedial prefrontal regions, posterior lateral inferior parietal regions, and the lateral temporal cortex. Our findings are consistent with predictions from anatomical tracer studies in the monkey, and provide support that resting-state functional connectivity (RSFC) may in part reflect underlying anatomy. 
These subdivisions within the precuneus suggest that neuroimaging studies will benefit from treating this region as anatomically (and thus functionally) heterogeneous. Furthermore, the consistency between functional connectivity networks in monkeys and humans provides support for RSFC as a viable tool for addressing cross-species comparisons of functional neuroanatomy.", "title": "" }, { "docid": "fc62b094df3093528c6846e405f55e39", "text": "Correctly classifying a skin lesion is one of the first steps towards treatment. We propose a novel convolutional neural network (CNN) architecture for skin lesion classification designed to learn based on information from multiple image resolutions while leveraging pretrained CNNs. While traditional CNNs are generally trained on a single resolution image, our CNN is composed of multiple tracts, where each tract analyzes the image at a different resolution simultaneously and learns interactions across multiple image resolutions using the same field-of-view. We convert a CNN, pretrained on a single resolution, to work for multi-resolution input. The entire network is fine-tuned in a fully learned end-to-end optimization with auxiliary loss functions. We show how our proposed novel multi-tract network yields higher classification accuracy, outperforming state-of-the-art multi-scale approaches when compared over a public skin lesion dataset.", "title": "" }, { "docid": "c7405ff209148bcba4283e57c91f63f9", "text": "Differential search algorithm (DS) is a relatively new evolutionary algorithm inspired by the Brownian-like random-walkmovement which is used by an organism to migrate. It has been verified to be more effective than ABC, JDE, JADE, SADE, EPSDE, GSA, PSO2011, and CMA-ES. In this paper, we propose four improved solution search algorithms, namely “DS/rand/1,” “DS/rand/2,” “DS/current to rand/1,” and “DS/current to rand/2” to search the new space and enhance the convergence rate for the global optimization problem. In order to verify the performance of different solution search methods, 23 benchmark functions are employed. Experimental results indicate that the proposed algorithm performs better than, or at least comparable to, the original algorithm when considering the quality of the solution obtained. However, these schemes cannot still achieve the best solution for all functions. In order to further enhance the convergence rate and the diversity of the algorithm, a composite differential search algorithm (CDS) is proposed in this paper. This new algorithm combines three new proposed search schemes including “DS/rand/1,” “DS/rand/2,” and “DS/current to rand/1” with three control parameters using a random method to generate the offspring. Experiment results show that CDS has a faster convergence rate and better search ability based on the 23 benchmark functions.", "title": "" }, { "docid": "0cf9ef0e5e406509f35c0dcd7ea598af", "text": "This paper proposes a method to reduce cogging torque of a single side Axial Flux Permanent Magnet (AFPM) motor according to analysis results of finite element analysis (FEA) method. First, the main cause of generated cogging torque will be studied using three dimensional FEA method. In order to reduce the cogging torque, a dual layer magnet step skewed (DLMSS) method is proposed to determine the shape of dual layer magnets. The skewed angle of magnetic poles between these two layers is determined using equal air gap flux of inner and outer layers. 
Finally, a single-sided AFPM motor based on the proposed methods is built as experimental platform to verify the effectiveness of the design. Meanwhile, the differences between design and tested results will be analyzed for future research and improvement.", "title": "" }, { "docid": "4016ad494a953023f982b8a4876bc8c1", "text": "Visual tracking is one of the most important field of computer vision. It has immense number of applications ranging from surveillance to hi-fi military applications. This paper is based on the application developed for automatic visual tracking and fire control system for anti-aircraft machine gun (AAMG). Our system mainly consists of camera, as visual sensor; mounted on a 2D-moving platform attached with 2GHz embedded system through RS-232 and AAMG mounted on the same moving platform. Camera and AAMG are both bore-sighted. Correlation based template matching algorithm has been used for automatic visual tracking. This is the algorithm used in civilian and military automatic target recognition, surveillance and tracking systems. The algorithm does not give robust performance in different environments, especially in clutter and obscured background, during tracking. So, motion and prediction algorithms have been integrated with it to achieve robustness and better performance for real-time tracking. Visual tracking is also used to calculate lead angle, which is a vital component of such fire control systems. Lead is angular correction needed to compensate for the target motion during the time of flight of the projectile, to accurately hit the target. Although at present lead computation is not robust due to some limitation as lead calculation mostly relies on gunner intuition. Even then by the integrated implementation of lead angle with visual tracking and control algorithm for moving platform, we have been able to develop a system which detects tracks and destroys the target of interest.", "title": "" }, { "docid": "12f717b4973a5290233d6f03ba05626b", "text": "We present a modular approach for analyzing calcium imaging recordings of large neuronal ensembles. Our goal is to simultaneously identify the locations of the neurons, demix spatially overlapping components, and denoise and deconvolve the spiking activity from the slow dynamics of the calcium indicator. Our approach relies on a constrained nonnegative matrix factorization that expresses the spatiotemporal fluorescence activity as the product of a spatial matrix that encodes the spatial footprint of each neuron in the optical field and a temporal matrix that characterizes the calcium concentration of each neuron over time. This framework is combined with a novel constrained deconvolution approach that extracts estimates of neural activity from fluorescence traces, to create a spatiotemporal processing algorithm that requires minimal parameter tuning. We demonstrate the general applicability of our method by applying it to in vitro and in vivo multi-neuronal imaging data, whole-brain light-sheet imaging data, and dendritic imaging data.", "title": "" }, { "docid": "002f49b0aa994b286a106d6b75ec8b2a", "text": "We introduce a library of geometric voxel features for CAD surface recognition/retrieval tasks. Our features include local versions of the intrinsic volumes (the usual 3D volume, surface area, integrated mean and Gaussian curvature) and a few closely related quantities. We also compute Haar wavelet and statistical distribution features by aggregating raw voxel features. 
We apply our features to object classification on the ESB data set and demonstrate accurate results with a small number of shallow decision trees.", "title": "" }, { "docid": "8cddb1fed30976de82d62de5066a5ce6", "text": "Today, more and more people have their virtual identities on the web. It is common that people are users of more than one social network and also their friends may be registered on multiple websites. A facility to aggregate our online friends into a single integrated environment would enable the user to keep up-to-date with their virtual contacts more easily, as well as to provide improved facility to search for people across different websites. In this paper, we propose a method to identify users based on profile matching. We use data from two popular social networks to study the similarity of profile definition. We evaluate the importance of fields in the web profile and develop a profile comparison tool. We demonstrate the effectiveness and efficiency of our tool in identifying and consolidating duplicated users on different websites.", "title": "" }, { "docid": "482bc3d151948bad9fbfa02519fbe61a", "text": "Evolution has resulted in highly developed abilities in many natural intelligences to quickly and accurately predict mechanical phenomena. Humans have successfully developed laws of physics to abstract and model such mechanical phenomena. In the context of artificial intelligence, a recent line of work has focused on estimating physical parameters based on sensory data and use them in physical simulators to make long-term predictions. In contrast, we investigate the effectiveness of a single neural network for end-to-end long-term prediction of mechanical phenomena. Based on extensive evaluation, we demonstrate that such networks can outperform alternate approaches having even access to ground-truth physical simulators, especially when some physical parameters are unobserved or not known a-priori. Further, our network outputs a distribution of outcomes to capture the inherent uncertainty in the data. Our approach demonstrates for the first time the possibility of making actionable long-term predictions from sensor data without requiring to explicitly model the underlying physical laws.", "title": "" }, { "docid": "dfb83ad16854797137e34a5c7cb110ae", "text": "The increasing computing requirements for GPUs (Graphics Processing Units) have favoured the design and marketing of commodity devices that nowadays can also be used to accelerate general purpose computing. Therefore, future high performance clusters intended for HPC (High Performance Computing) will likely include such devices. However, high-end GPU-based accelerators used in HPC feature a considerable energy consumption, so that attaching a GPU to every node of a cluster has a strong impact on its overall power consumption. In this paper we detail a framework that enables remote GPU acceleration in HPC clusters, thus allowing a reduction in the number of accelerators installed in the cluster. This leads to energy, acquisition, maintenance, and space savings.", "title": "" }, { "docid": "b73526f1fb0abb4373421994dbd07822", "text": "in our country around 2.78% of peoples are not able to speak (dumb). Their communications with others are only using the motion of their hands and expressions. We proposed a new technique called artificial speaking mouth for dumb people. It will be very helpful to them for conveying their thoughts to others. Some peoples are easily able to get the information from their motions. 
The remaining is not able to understand their way of conveying the message. In order to overcome the complexity the artificial mouth is introduced for the dumb peoples. This system is based on the motion sensor. According to dumb people, for every motion they have a meaning. That message is kept in a database. Likewise all templates are kept in the database. In the real time the template database is fed into a microcontroller and the motion sensor is fixed in their hand. For every action the motion sensors get accelerated and give the signal to the microcontroller. The microcontroller matches the motion with the database and produces the speech signal. The output of the system is using the speaker. By properly updating the database the dumb will speak like a normal person using the artificial mouth. The system also includes a text to speech conversion (TTS) block that interprets the matched gestures.", "title": "" }, { "docid": "12b115e3b759fcb87956680d6e89d7aa", "text": "The calibration system presented in this article enables to calculate optical parameters i.e. intrinsic and extrinsic of both thermal and visual cameras used for 3D reconstruction of thermal images. Visual cameras are in stereoscopic set and provide a pair of stereo images of the same object which are used to perform 3D reconstruction of the examined object [8]. The thermal camera provides information about temperature distribution on the surface of an examined object. In this case the term of 3D reconstruction refers to assigning to each pixel of one of the stereo images (called later reference image) a 3D coordinate in the respective camera reference frame [8]. The computed 3D coordinate is then re-projected on to the thermograph and thus to the known 3D position specific temperature is assigned. In order to remap the 3D coordinates on to thermal image it is necessary to know the position of thermal camera against visual camera and therefore a calibration of the set of the three cameras must be performed. The presented calibration system includes special calibration board (fig.1) whose characteristic points of well known position are recognizable both by thermal and visual cameras. In order to detect calibration board characteristic points’ image coordinates, especially in thermal camera, a new procedure was designed.", "title": "" }, { "docid": "79465d290ab299b9d75e9fa617d30513", "text": "In this paper we describe computational experience in solving unconstrained quadratic zero-one problems using a branch and bound algorithm. The algorithm incorporates dynamic preprocessing techniques for forcing variables and heuristics to obtain good starting points. Computational results and comparisons with previous studies on several hundred test problems with dimensions up to 200 demonstrate the efficiency of our algorithm. In dieser Arbeit beschreiben wir rechnerische Erfahrungen bei der Lösung von unbeschränkten quadratischen Null-Eins-Problemen mit einem “Branch and Bound”-Algorithmus. Der Algorithmus erlaubt dynamische Vorbereitungs-Techniken zur Erzwingung ausgewählter Variablen und Heuristiken zur Wahl von guten Startpunkten. 
Resultate von Berechnungen und Vergleiche mit früheren Arbeiten mit mehreren hundert Testproblemen mit Dimensionen bis 200 zeigen die Effizienz unseres Algorithmus.", "title": "" }, { "docid": "e112af9e35690b64acc7242611b39dd2", "text": "Body sensor network systems can help people by providing healthcare services such as medical monitoring, memory enhancement, medical data access, and communication with the healthcare provider in emergency situations through the SMS or GPRS [1,2]. Continuous health monitoring with wearable [3] or clothing-embedded transducers [4] and implantable body sensor networks [5] will increase detection of emergency conditions in at risk patients. Not only the patient, but also their families will benefit from these. Also, these systems provide useful methods to remotely acquire and monitor the physiological signals without the need of interruption of the patient’s normal life, thus improving life quality [6,7].", "title": "" }, { "docid": "9121462cf9ac2b2c55b7a1c96261472f", "text": "The main goal of this chapter is to give characteristics, evaluation methodologies, and research examples of collaborative augmented reality (AR) systems from a perspective of human-to-human communication. The chapter introduces classifications of conventional and 3D collaborative systems as well as typical characteristics and application examples of collaborative AR systems. Next, it discusses design considerations of collaborative AR systems from a perspective of human communication and then discusses evaluation methodologies of human communication behaviors. The next section discusses a variety of collaborative AR systems with regard to display devices used. Finally, the chapter gives conclusion with future directions. This will be a good starting point to learn existing collaborative AR systems, their advantages and limitations. This chapter will also contribute to the selection of appropriate hardware configurations and software designs of a collaborative AR system for given conditions.", "title": "" }, { "docid": "5cd8ee9a938ed087e2a3bc667991557d", "text": "Expense reimbursement is a time-consuming and labor-intensive process across organizations. In this paper, we present a prototype expense reimbursement system that dramatically reduces the elapsed time and costs involved, by eliminating paper from the process life cycle. Our complete solution involves (1) an electronic submission infrastructure that provides multi- channel image capture, secure transport and centralized storage of paper documents; (2) an unconstrained data mining approach to extracting relevant named entities from un-structured document images; (3) automation of auditing procedures that enables automatic expense validation with minimum human interaction.\n Extracting relevant named entities robustly from document images with unconstrained layouts and diverse formatting is a fundamental technical challenge to image-based data mining, question answering, and other information retrieval tasks. In many applications that require such capability, applying traditional language modeling techniques to the stream of OCR text does not give satisfactory result due to the absence of linguistic context. We present an approach for extracting relevant named entities from document images by combining rich page layout features in the image space with language content in the OCR text using a discriminative conditional random field (CRF) framework. 
We integrate this named entity extraction engine into our expense reimbursement solution and evaluate the system performance on large collections of real-world receipt images provided by IBM World Wide Reimbursement Center.", "title": "" }, { "docid": "4775bf71a5eea05b77cafa53daefcff9", "text": "There is mounting empirical evidence that interacting with nature delivers measurable benefits to people. Reviews of this topic have generally focused on a specific type of benefit, been limited to a single discipline, or covered the benefits delivered from a particular type of interaction. Here we construct novel typologies of the settings, interactions and potential benefits of people-nature experiences, and use these to organise an assessment of the benefits of interacting with nature. We discover that evidence for the benefits of interacting with nature is geographically biased towards high latitudes and Western societies, potentially contributing to a focus on certain types of settings and benefits. Social scientists have been the most active researchers in this field. Contributions from ecologists are few in number, perhaps hindering the identification of key ecological features of the natural environment that deliver human benefits. Although many types of benefits have been studied, benefits to physical health, cognitive performance and psychological well-being have received much more attention than the social or spiritual benefits of interacting with nature, despite the potential for important consequences arising from the latter. The evidence for most benefits is correlational, and although there are several experimental studies, little as yet is known about the mechanisms that are important for delivering these benefits. For example, we do not know which characteristics of natural settings (e.g., biodiversity, level of disturbance, proximity, accessibility) are most important for triggering a beneficial interaction, and how these characteristics vary in importance among cultures, geographic regions and socio-economic groups. These are key directions for future research if we are to design landscapes that promote high quality interactions between people and nature in a rapidly urbanising world.", "title": "" }, { "docid": "d1eed1d7875930865944c98fbab5f7e1", "text": "Optic disc (OD) and fovea locations are two important anatomical landmarks in automated analysis of retinal disease in color fundus photographs. This paper presents a new, fast, fully automatic optic disc and fovea localization algorithm developed for diabetic retinopathy (DR) screening. The optic disc localization methodology comprises of two steps. First, the OD location is identified using template matching and directional matched filter. To reduce false positives due to bright areas of pathology, we exploit vessel characteristics inside the optic disc. The location of the fovea is estimated as the point of lowest matched filter response within a search area determined by the optic disc location. Second, optic disc segmentation is performed. Based on the detected optic disc location, a fast hybrid level-set algorithm which combines the region information and edge gradient to drive the curve evolution is used to segment the optic disc boundary. Extensive evaluation was performed on 1200 images (Messidor) composed of 540 images of healthy retinas, 431 images with DR but no risk of macular edema (ME), and 229 images with DR and risk of ME. 
The OD location methodology obtained 98.3% success rate, while fovea location achieved 95% success rate. The average mean absolute distance (MAD) between the OD segmentation algorithm and “gold standard” is 10.5% of estimated OD radius. Qualitatively, 97% of the images achieved Excellent to Fair performance for OD segmentation. The segmentation algorithm performs well even on blurred images.", "title": "" } ]
scidocsrr
3c07b7f7bd1c49589aeb7400d7c88da0
An Analysis of Single-Layer Networks in Unsupervised Feature Learning
[ { "docid": "dba73424d6215af4a696765ddf03c09d", "text": "We describe how to train a two-layer convolutional Deep Belief Network (DBN) on the 1.6 million tiny images dataset. When training a convolutional DBN, one must decide what to do with the edge pixels of teh images. As the pixels near the edge of an image contribute to the fewest convolutional lter outputs, the model may see it t to tailor its few convolutional lters to better model the edge pixels. This is undesirable becaue it usually comes at the expense of a good model for the interior parts of the image. We investigate several ways of dealing with the edge pixels when training a convolutional DBN. Using a combination of locally-connected convolutional units and globally-connected units, as well as a few tricks to reduce the e ects of over tting, we achieve state-of-the-art performance in the classi cation task of the CIFAR-10 subset of the tiny images dataset.", "title": "" } ]
[ { "docid": "8d6cb15882c3a08ce8e2726ed65bf3cb", "text": "Natural language processing systems (NLP) that extract clinical information from textual reports were shown to be effective for limited domains and for particular applications. Because an NLP system typically requires substantial resources to develop, it is beneficial if it is designed to be easily extendible to multiple domains and applications. This paper describes multiple extensions of an NLP system called MedLEE, which was originally developed for the domain of radiological reports of the chest, but has subsequently been extended to mammography, discharge summaries, all of radiology, electrocardiography, echocardiography, and pathology.", "title": "" }, { "docid": "524914f80055ef1f3f974720577aeb5d", "text": "Generative models estimate the underlying distribution of a dataset to generate realistic samples according to that distribution. In this paper, we present the first membership inference attacks against generative models: given a data point, the adversary determines whether or not it was used to train the model. Our attacks leverage Generative Adversarial Networks (GANs), which combine a discriminative and a generative model, to detect overfitting and recognize inputs that were part of training datasets, using the discriminator’s capacity to learn statistical differences in distributions. We present attacks based on both white-box and black-box access to the target model, against several state-of-the-art generative models, over datasets of complex representations of faces (LFW), objects (CIFAR-10), and medical images (Diabetic Retinopathy). We also discuss the sensitivity of the attacks to different training parameters, and their robustness against mitigation strategies, finding that defenses are either ineffective or lead to significantly worse performances of the generative models in terms of training stability and/or sample quality.", "title": "" }, { "docid": "9a881f70dcc1725c057817df81112f33", "text": "Haptics is a valuable tool in minimally invasive surgical simulation and training. We discuss important aspects of haptics in MISST, such as haptic rendering and haptic recording and playback. Minimally invasive surgery has revolutionized many surgical procedures over the last few decades. MIS is performed using a small video camera, a video display, and a few customized surgical tools. In procedures such as gall bladder removal (laparoscopic cholesystectomy), surgeons insert a camera and long slender tools into the abdomen through small skin incisions to explore the internal cavity and manipulate organs from outside the body as they view their actions on a video display. Because the development of minimally invasive techniques has reduced the sense of touch compared to open surgery, surgeons must rely more on the feeling of net forces resulting from tool-tissue interactions and need more training to successfully operate on patients.", "title": "" }, { "docid": "0c86d5f2e0159fc84aae66ff0695d714", "text": "We have analyzed the properties of the HSV (Hue, Saturation and Value) color space with emphasis on the visual perception of the variation in Hue, Saturation and Intensity values of an image pixel. We extract pixel features by either choosing the Hue or the Intensity as the dominant property based on the Saturation value of a pixel. The feature extraction method has been applied for both image segmentation as well as histogram generation applications – two distinct approaches to content based image retrieval (CBIR). 
Segmentation using this method shows better identification of objects in an image. The histogram retains a uniform color transition that enables us to do a window-based smoothing during retrieval. The results have been compared with those generated using the RGB color space.", "title": "" }, { "docid": "6a7839b42c549e31740f70aa0079ad46", "text": "Deep learning has improved performance on many natural language processing (NLP) tasks individually. However, general NLP models cannot emerge within a paradigm that focuses on the particularities of a single metric, dataset, and task. We introduce the Natural Language Decathlon (decaNLP), a challenge that spans ten tasks: question answering, machine translation, summarization, natural language inference, sentiment analysis, semantic role labeling, relation extraction, goal-oriented dialogue, semantic parsing, and commonsense pronoun resolution. We cast all tasks as question answering over a context. Furthermore, we present a new multitask question answering network (MQAN) that jointly learns all tasks in decaNLP without any task-specific modules or parameters more effectively than sequence-to-sequence and reading comprehension baselines. MQAN shows improvements in transfer learning for machine translation and named entity recognition, domain adaptation for sentiment analysis and natural language inference, and zero-shot capabilities for text classification. We demonstrate that the MQAN’s multi-pointer-generator decoder is key to this success and that performance further improves with an anti-curriculum training strategy. Though designed for decaNLP, MQAN also achieves state of the art results on the WikiSQL semantic parsing task in the single-task setting. We also release code for procuring and processing data, training and evaluating models, and reproducing all experiments for decaNLP.", "title": "" }, { "docid": "365cadf5f980e7c99cc3c2416ca36ba1", "text": "Epidemiologic studies from numerous disparate populations reveal that individuals with the habit of daily moderate wine consumption enjoy significant reductions in all-cause and particularly cardiovascular mortality when compared with individuals who abstain or who drink alcohol to excess. Researchers are working to explain this observation in molecular and nutritional terms. Moderate ethanol intake from any type of beverage improves lipoprotein metabolism and lowers cardiovascular mortality risk. The question now is whether wine, particularly red wine with its abundant content of phenolic acids and polyphenols, confers additional health benefits. Discovering the nutritional properties of wine is a challenging task, which requires that the biological actions and bioavailability of the >200 individual phenolic compounds be documented and interpreted within the societal factors that stratify wine consumption and the myriad effects of alcohol alone. Further challenge arises because the health benefits of wine address the prevention of slowly developing diseases for which validated biomarkers are rare. Thus, although the benefits of the polyphenols from fruits and vegetables are increasingly accepted, consensus on wine is developing more slowly. Scientific research has demonstrated that the molecules present in grapes and in wine alter cellular metabolism and signaling, which is consistent mechanistically with reducing arterial disease. 
Future research must address specific mechanisms both of alcohol and of polyphenolic action and develop biomarkers of their role in disease prevention in individuals.", "title": "" }, { "docid": "6cfee185a7438811aafd16a03fb75852", "text": "The Internet-of-Things (IoT) envisions a world where billions of everyday objects and mobile devices communicate using a large number of interconnected wired and wireless networks. Maximizing the utilization of this paradigm requires fine-grained QoS support for differentiated application requirements, context-aware semantic information retrieval, and quick and easy deployment of resources, among many other objectives. These objectives can only be achieved if components of the IoT can be dynamically managed end-to-end across heterogeneous objects, transmission technologies, and networking architectures. Software-defined Networking (SDN) is a new paradigm that provides powerful tools for addressing some of these challenges. Using a software-based control plane, SDNs introduce significant flexibility for resource management and adaptation of network functions. In this article, we study some promising solutions for the IoT based on SDN architectures. Particularly, we analyze the application of SDN in managing resources of different types of networks such as Wireless Sensor Networks (WSN) and mobile networks, the utilization of SDN for information-centric networking, and how SDN can leverage Sensing-as-a-Service (SaaS) as a key cloud application in the IoT.", "title": "" }, { "docid": "d0cf952865b72f25d9b8b049f717d976", "text": "In this paper, we consider the problem of estimating the relative expertise score of users in community question and answering services (CQA). Previous approaches typically only utilize the explicit question answering relationship between askers and an-swerers and apply link analysis to address this problem. The im-plicit pairwise comparison between two users that is implied in the best answer selection is ignored. Given a question and answering thread, it's likely that the expertise score of the best answerer is higher than the asker's and all other non-best answerers'. The goal of this paper is to explore such pairwise comparisons inferred from best answer selections to estimate the relative expertise scores of users. Formally, we treat each pairwise comparison between two users as a two-player competition with one winner and one loser. Two competition models are proposed to estimate user expertise from pairwise comparisons. Using the NTCIR-8 CQA task data with 3 million questions and introducing answer quality prediction based evaluation metrics, the experimental results show that the pairwise comparison based competition model significantly outperforms link analysis based approaches (PageRank and HITS) and pointwise approaches (number of best answers and best answer ratio) for estimating the expertise of active users. Furthermore, it's shown that pairwise comparison based competi-tion models have better discriminative power than other methods. It's also found that answer quality (best answer) is an important factor to estimate user expertise.", "title": "" }, { "docid": "2245750e94df2d3e9eff8596a1d63193", "text": "This work studies automatic recognition of paralinguistic properties of speech. 
The focus is on selection of the most useful acoustic features for three classification tasks: 1) recognition of autism spectrum developmental disorders from child speech, 2) classification of speech into different affective categories, and 3) recognizing the level of social conflict from speech. The feature selection is performed using a new variant of random subset sampling methods with k-nearest neighbors (kNN) as a classifier. The experiments show that the proposed system is able to learn a set of important features for each recognition task, clearly exceeding the performance of the same classifier using the original full feature set. However, some effects of overfitting the feature sets to finite data are also observed and discussed.", "title": "" }, { "docid": "885fb29f5189381de351b634f4c7365c", "text": "The main objectives of this study were to determine the most frequent and the most significant individual and social factors related to students’ academic achievement and motivation for learning. The study was conducted among 740 students from the Faculty of Education and the Faculty of Philosophy in Vojvodina. The participants completed questionnaires measuring students’ dominant individual and social motivational factors, the level of their motivation for learning, the level of their academic achievement and students’ socio-demographic characteristics. The results of this study showed that the students reported that both individual and social factors are related to their academic achievement and motivation for learning. Individual factors – the perceived interest in content and perceived content usefulness for personal development proved to be the most significant predictors of a high level of motivation for learning and academic success, but social motivational factors showed themselves to be the most frequent among students. The results are especially important for university teachers as guidelines for improving students’ motivation.", "title": "" }, { "docid": "4c97621b15b1450fb43762157e2a8bd2", "text": "Current proposals for classifying female genital anomalies seem to be associated with limitations in effective categorization, creating the need for a new classification system that is as simple as possible, clear and accurate in its definitions, comprehensive, and correlated with patients' clinical presentation, prognosis, and treatment on an evidence-based foundation. Although creating a new classification system is not an easy task, it is feasible when taking into account the experience gained from applying the existing classification systems, mainly that of the American Fertility Society.", "title": "" }, { "docid": "8e465d1434932f21db514c49650863bb", "text": "Context aware recommender systems (CARS) adapt the recommendations to the specific situation in which the items will be consumed. In this paper we present a novel context-aware recommendation algorithm that extends Matrix Factorization. We model the interaction of the contextual factors with item ratings introducing additional model parameters. The performed experiments show that the proposed solution provides comparable results to the best, state of the art, and more complex approaches. The proposed solution has the advantage of smaller computational cost and provides the possibility to represent at different granularities the interaction between context and items. 
We have exploited the proposed model in two recommendation applications: places of interest and music.", "title": "" }, { "docid": "8da468bbb923b9d790e633c6a4fd9873", "text": "Building Information Modeling (BIM) and Lean Thinking have been used separately as key approaches to overall construction projects’ improvement. Their combination, given several scenarios, presents opportunities for improvement as well as challenges in implementation. However, the exploration of eventual interactions and relationships between BIM as a process and Lean Construction principles is recent in research. The objective of this paper is to identify BIM and Lean relationship aspects with a focus on the construction phase and from the perspective of the general contractor (GC). This paper is based on a case study where BIM is already heavily used by the GC and where the integration of Lean practices is recent. We explore areas of improvement and Lean contributions to BIM from two perspectives. First, from Sacks et al.’s (2010) Interaction Matrix perspective, we identify some existing interactions. Second, based on the Capability Maturity Model (CMM) of the National Building Information Modeling Standard (NBIMS), we measure the level of the project’s BIM maturity and highlight areas of improvement for Lean. The main contribution of the paper is concerned with the exploration of the BIM maturity levels that are enhanced by Lean implementation.", "title": "" }, { "docid": "a059fcf7c49db87bfbd3a7f452f0288d", "text": "This paper investigates the physical layer security of non-orthogonal multiple access (NOMA) in large-scale networks with invoking stochastic geometry. Both single-antenna and multiple-antenna aided transmission scenarios are considered, where the base station (BS) communicates with randomly distributed NOMA users. In the single-antenna scenario, we adopt a protected zone around the BS to establish an eavesdropper-exclusion area with the aid of careful channel ordering of the NOMA users. In the multiple-antenna scenario, artificial noise is generated at the BS for further improving the security of a beamforming-aided system. In order to characterize the secrecy performance, we derive new exact expressions of the security outage probability for both single-antenna and multiple-antenna aided scenarios. For the single-antenna scenario, we perform secrecy diversity order analysis of the selected user pair. The analytical results derived demonstrate that the secrecy diversity order is determined by the specific user having the worse channel condition among the selected user pair. For the multiple-antenna scenario, we derive the asymptotic secrecy outage probability, when the number of transmit antennas tends to infinity. Monte Carlo simulations are provided for verifying the analytical results derived and to show that: 1) the security performance of the NOMA networks can be improved by invoking the protected zone and by generating artificial noise at the BS and 2) the asymptotic secrecy outage probability is close to the exact secrecy outage probability.", "title": "" }, { "docid": "bc2dee76b561bffeead80e74d5b8a388", "text": "BACKGROUND AND PURPOSE\nCarotid artery stenosis causes up to 10% of all ischemic strokes. Carotid endarterectomy (CEA) was introduced as a treatment to prevent stroke in the early 1950s. 
Carotid stenting (CAS) was introduced as a treatment to prevent stroke in 1994.\n\n\nMETHODS\nThe Carotid Revascularization Endarterectomy versus Stenting Trial (CREST) is a randomized trial with blinded end point adjudication. Symptomatic and asymptomatic patients were randomized to CAS or CEA. The primary end point was the composite of any stroke, myocardial infarction, or death during the periprocedural period and ipsilateral stroke thereafter, up to 4 years.\n\n\nRESULTS\nThere was no significant difference in the rates of the primary end point between CAS and CEA (7.2% versus 6.8%; hazard ratio, 1.11; 95% CI, 0.81 to 1.51; P=0.51). Symptomatic status and sex did not modify the treatment effect, but an interaction with age and treatment was detected (P=0.02). Outcomes were slightly better after CAS for patients aged <70 years and better after CEA for patients aged >70 years. The periprocedural end point did not differ for CAS and CEA, but there were differences in the components, CAS versus CEA (stroke 4.1% versus 2.3%, P=0.012; and myocardial infarction 1.1% versus 2.3%, P=0.032).\n\n\nCONCLUSIONS\nIn CREST, CAS and CEA had similar short- and longer-term outcomes. During the periprocedural period, there was higher risk of stroke with CAS and higher risk of myocardial infarction with CEA. Clinical Trial Registration-www.clinicaltrials.gov. Unique identifier: NCT00004732.", "title": "" }, { "docid": "16426be05f066e805e48a49a82e80e2e", "text": "Ontologies have been developed and used by several researchers in different knowledge domains aiming to ease the structuring and management of knowledge, and to create a unique standard to represent concepts of such a knowledge domain. Considering the computer security domain, several tools can be used to manage and store security information. These tools generate a great amount of security alerts, which are stored in different formats. This lack of standard and the amount of data make the tasks of the security administrators even harder, because they have to understand, using their tacit knowledge, different security alerts to make correlation and solve security problems. Aiming to assist the administrators in executing these tasks efficiently, this paper presents the main features of the computer security incident ontology developed to model, using a unique standard, the concepts of the security incident domain, and how the ontology has been evaluated.", "title": "" }, { "docid": "85a09871ca341ca5f70a78b2df8fdc02", "text": "This paper presents a multi-channel frequency-modulated continuous-wave (FMCW) radar sensor operating in the frequency range from 91 to 97 GHz. The millimeter-wave radar sensor utilizes an SiGe chipset comprising a single signal-generation chip and multiple monostatic transceiver (TRX) chips, which are based on a 200-GHz fT HBT technology. The front end is built on an RF soft substrate in chip-on-board technology and employs a nonuniformly distributed antenna array to improve the angular resolution. The synthesis of ten virtual antennas achieved by a multiple-input multiple-output technique allows the virtual array aperture to be maximized. The fundamental-wave voltage-controlled oscillator achieves a single-sideband phase noise of -88 dBc/Hz at 1-MHz offset frequency. The TX provides a saturated output power of 6.5 dBm, and the mixer within the TRX achieves a gain and a double sideband noise figure of 11.5 and 12 dB, respectively. 
Possible applications include radar sensing for range and angle detection, material characterization, and imaging.", "title": "" }, { "docid": "75ccea636210f4b4df490a7babdf7790", "text": "BACKGROUND\nSmartphones are becoming a daily necessity for most undergraduates in Mainland China. Because the present scenario of problematic smartphone use (PSU) is largely unexplored, in the current study we aimed to estimate the prevalence of PSU and to screen suitable predictors for PSU among Chinese undergraduates in the framework of the stress-coping theory.\n\n\nMETHODS\nA sample of 1062 undergraduate smartphone users was recruited by means of the stratified cluster random sampling strategy between April and May 2015. The Problematic Cellular Phone Use Questionnaire was used to identify PSU. We evaluated five candidate risk factors for PSU by using logistic regression analysis while controlling for demographic characteristics and specific features of smartphone use.\n\n\nRESULTS\nThe prevalence of PSU among Chinese undergraduates was estimated to be 21.3%. The risk factors for PSU were majoring in the humanities, high monthly income from the family (≥1500 RMB), serious emotional symptoms, high perceived stress, and perfectionism-related factors (high doubts about actions, high parental expectations).\n\n\nCONCLUSIONS\nPSU among undergraduates appears to be ubiquitous and thus constitutes a public health issue in Mainland China. Although further longitudinal studies are required to test whether PSU is a transient phenomenon or a chronic and progressive condition, our study successfully identified socio-demographic and psychological risk factors for PSU. These results, obtained from a random and thus representative sample of undergraduates, opens up new avenues in terms of prevention and regulation policies.", "title": "" }, { "docid": "0e644fc1c567356a2e099221a774232c", "text": "We present a coupled two-way clustering approach to gene microarray data analysis. The main idea is to identify subsets of the genes and samples, such that when one of these is used to cluster the other, stable and significant partitions emerge. The search for such subsets is a computationally complex task. We present an algorithm, based on iterative clustering, that performs such a search. This analysis is especially suitable for gene microarray data, where the contributions of a variety of biological mechanisms to the gene expression levels are entangled in a large body of experimental data. The method was applied to two gene microarray data sets, on colon cancer and leukemia. By identifying relevant subsets of the data and focusing on them we were able to discover partitions and correlations that were masked and hidden when the full dataset was used in the analysis. Some of these partitions have clear biological interpretation; others can serve to identify possible directions for future research.", "title": "" }, { "docid": "fbcbf7d6a53299708ecf6a780cf0834c", "text": "We present an approach for weakly supervised learning of human actions from video transcriptions. Our system is based on the idea that, given a sequence of input data and a transcript, i.e. a list of the order the actions occur in the video, it is possible to infer the actions within the video stream and to learn the related action models without the need for any frame-based annotation. Starting from the transcript information at hand, we split the given data sequences uniformly based on the number of expected actions. 
We then learn action models for each class by maximizing the probability that the training video sequences are generated by the action models given the sequence order as defined by the transcripts. The learned model can be used to temporally segment an unseen video with or without transcript. Additionally, the inferred segments can be used as a starting point to train high-level fully supervised models. We evaluate our approach on four distinct activity datasets, namely Hollywood Extended, MPII Cooking, Breakfast and CRIM13. It shows that the proposed system is able to align the scripted actions with the video data, that the learned models localize and classify actions in the datasets, and that they outperform any current state-of-the-art approach for aligning transcripts with video data.", "title": "" } ]
scidocsrr
713ee77d9d1d75ba1676446766043a5b
Sustained attention in children with specific language impairment (SLI).
[ { "docid": "bb65decbaecb11cf14044b2a2cbb6e74", "text": "The ability to remain focused on goal-relevant stimuli in the presence of potentially interfering distractors is crucial for any coherent cognitive function. However, simply instructing people to ignore goal-irrelevant stimuli is not sufficient for preventing their processing. Recent research reveals that distractor processing depends critically on the level and type of load involved in the processing of goal-relevant information. Whereas high perceptual load can eliminate distractor processing, high load on \"frontal\" cognitive control processes increases distractor processing. These findings provide a resolution to the long-standing early and late selection debate within a load theory of attention that accommodates behavioural and neuroimaging data within a framework that integrates attention research with executive function.", "title": "" } ]
[ { "docid": "3b72c70213ccd3d5f3bda5cc2e2c6945", "text": "Neural language models (NLMs) have recently gained a renewed interest by achieving state-of-the-art performance across many natural language processing (NLP) tasks. However, NLMs are very computationally demanding largely due to the computational cost of the softmax layer over a large vocabulary. We observe that, in decoding of many NLP tasks, only the probabilities of the top-K hypotheses need to be calculated preciously and K is often much smaller than the vocabulary size. This paper proposes a novel softmax layer approximation algorithm, called Fast Graph Decoder (FGD), which quickly identifies, for a given context, a set of K words that are most likely to occur according to a NLM. We demonstrate that FGD reduces the decoding time by an order of magnitude while attaining close to the full softmax baseline accuracy on neural machine translation and language modeling tasks. We also prove the theoretical guarantee on the softmax approximation quality.", "title": "" }, { "docid": "7528af716f17f125b253597e8c3e596f", "text": "BACKGROUND\nEnhancement of the osteogenic potential of mesenchymal stem cells (MSCs) is highly desirable in the field of bone regeneration. This paper proposes a new approach for the improvement of osteogenesis combining hypergravity with osteoinductive nanoparticles (NPs).\n\n\nMATERIALS AND METHODS\nIn this study, we aimed to investigate the combined effects of hypergravity and barium titanate NPs (BTNPs) on the osteogenic differentiation of rat MSCs, and the hypergravity effects on NP internalization. To obtain the hypergravity condition, we used a large-diameter centrifuge in the presence of a BTNP-doped culture medium. We analyzed cell morphology and NP internalization with immunofluorescent staining and coherent anti-Stokes Raman scattering, respectively. Moreover, cell differentiation was evaluated both at the gene level with quantitative real-time reverse-transcription polymerase chain reaction and at the protein level with Western blotting.\n\n\nRESULTS\nFollowing a 20 g treatment, we found alterations in cytoskeleton conformation, cellular shape and morphology, as well as a significant increment of expression of osteoblastic markers both at the gene and protein levels, jointly pointing to a substantial increment of NP uptake. Taken together, our findings suggest a synergistic effect of hypergravity and BTNPs in the enhancement of the osteogenic differentiation of MSCs.\n\n\nCONCLUSION\nThe obtained results could become useful in the design of new approaches in bone-tissue engineering, as well as for in vitro drug-delivery strategies where an increment of nanocarrier internalization could result in a higher drug uptake by cell and/or tissue constructs.", "title": "" }, { "docid": "1cd77d97f27b45d903ffcecda02795a5", "text": "Molecular machine learning has been maturing rapidly over the last few years. Improved methods and the presence of larger datasets have enabled machine learning algorithms to make increasingly accurate predictions about molecular properties. However, algorithmic progress has been limited due to the lack of a standard benchmark to compare the efficacy of proposed methods; most new algorithms are benchmarked on different datasets making it challenging to gauge the quality of proposed methods. This work introduces MoleculeNet, a large scale benchmark for molecular machine learning. 
MoleculeNet curates multiple public datasets, establishes metrics for evaluation, and offers high quality open-source implementations of multiple previously proposed molecular featurization and learning algorithms (released as part of the DeepChem open source library). MoleculeNet benchmarks demonstrate that learnable representations are powerful tools for molecular machine learning and broadly offer the best performance. However, this result comes with caveats. Learnable representations still struggle to deal with complex tasks under data scarcity and highly imbalanced classification. For quantum mechanical and biophysical datasets, the use of physics-aware featurizations can be more important than choice of particular learning algorithm.", "title": "" }, { "docid": "0441fb016923cd0b7676d3219951c230", "text": "Globally modeling and reasoning over relations between regions can be beneficial for many computer vision tasks on both images and videos. Convolutional Neural Networks (CNNs) excel at modeling local relations by convolution operations, but they are typically inefficient at capturing global relations between distant regions and require stacking multiple convolution layers. In this work, we propose a new approach for reasoning globally in which a set of features are globally aggregated over the coordinate space and then projected to an interaction space where relational reasoning can be efficiently computed. After reasoning, relation-aware features are distributed back to the original coordinate space for down-stream tasks. We further present a highly efficient instantiation of the proposed approach and introduce the Global Reasoning unit (GloRe unit) that implements the coordinate-interaction space mapping by weighted global pooling and weighted broadcasting, and the relation reasoning via graph convolution on a small graph in interaction space. The proposed GloRe unit is lightweight, end-to-end trainable and can be easily plugged into existing CNNs for a wide range of tasks. Extensive experiments show our GloRe unit can consistently boost the performance of state-of-the-art backbone architectures, including ResNet [15, 16], ResNeXt [33], SE-Net [18] and DPN [9], for both 2D and 3D CNNs, on image classification, semantic segmentation and video action recognition task.", "title": "" }, { "docid": "3bb6bfbb139ab9b488c4106c9d6cc3bd", "text": "BACKGROUND\nRecent evidence demonstrates growth in both the quality and quantity of evidence in physical therapy. Much of this work has focused on randomized controlled trials and systematic reviews.\n\n\nOBJECTIVE\nThe purpose of this study was to conduct a comprehensive bibliometric assessment of Physical Therapy (PTJ) over the past 30 years to examine trends for all types of studies.\n\n\nDESIGN\nThis was a bibliometric analysis.\n\n\nMETHODS\nAll manuscripts published in PTJ from 1980 to 2009 were reviewed. Research reports, topical reviews (including perspectives and nonsystematic reviews), and case reports were included. Articles were coded based on type, participant characteristics, physical therapy focus, research design, purpose of article, clinical condition, and intervention. Coding was performed by 2 independent reviewers, and author, institution, and citation information was obtained using bibliometric software.\n\n\nRESULTS\nOf the 4,385 publications identified, 2,519 were included in this analysis. Of these, 67.1% were research reports, 23.0% were topical reviews, and 9.9% were case reports. 
Percentage increases over the past 30 years were observed for research reports, inclusion of \"symptomatic\" participants (defined as humans with a current symptomatic condition), systematic reviews, qualitative studies, prospective studies, and articles focused on prognosis, diagnosis, or metric topics. Percentage decreases were observed for topical reviews, inclusion of only \"asymptomatic\" participants (defined as humans without a current symptomatic condition), education articles, nonsystematic reviews, and articles focused on anatomy/physiology.\n\n\nLIMITATIONS\nQuality assessment of articles was not performed.\n\n\nCONCLUSIONS\nThese trends provide an indirect indication of the evolution of the physical therapy profession through the publication record in PTJ. Collectively, the data indicated an increased emphasis on publishing articles consistent with evidence-based practice and clinically based research. Bibliometric analyses indicated the most frequent citations were metric studies and references in PTJ were from journals from a variety of disciplines.", "title": "" }, { "docid": "5c9ba6384b6983a26212e8161e502484", "text": "The field of medical diagnostics contains a wealth of challenges which closely resemble classical machine learning problems; practical constraints, however, complicate the translation of these endpoints naively into classical architectures. Many tasks in radiology, for example, are largely problems of multi-label classification wherein medical images are interpreted to indicate multiple present or suspected pathologies. Clinical settings drive the necessity for high accuracy simultaneously across a multitude of pathological outcomes and greatly limit the utility of tools which consider only a subset. This issue is exacerbated by a general scarcity of training data and maximizes the need to extract clinically relevant features from available samples – ideally without the use of pre-trained models which may carry forward undesirable biases from tangentially related tasks. We present and evaluate a partial solution to these constraints in using LSTMs to leverage interdependencies among target labels in predicting 14 pathologic patterns from chest x-rays and establish state of the art results on the largest publicly available chest x-ray dataset from the NIH without pre-training. Furthermore, we propose and discuss alternative evaluation metrics and their relevance in clinical practice.", "title": "" }, { "docid": "9b7654390d496cb041f3073dcfb07e67", "text": "Electronic commerce (EC) transactions are subject to multiple information security threats. Proposes that consumer trust in EC transactions is influenced by perceived information security and distinguishes it from the objective assessment of security threats. Proposes mechanisms of encryption, protection, authentication, and verification as antecedents of perceived information security. These mechanisms are derived from technological solutions to security threats that are visible to consumers and hence contribute to actual consumer perceptions. Tests propositions in a study of 179 consumers and shows a significant relationship between consumers’ perceived information security and trust in EC transactions. Explores the role of limited financial liability as a surrogate for perceived security. However, the findings show that there is a minimal effect of financial liability on consumers’ trust in EC. 
Engenders several new insights regarding the role of perceived security in EC transactions.", "title": "" }, { "docid": "52b354c9b1cfe53598f159b025ec749a", "text": "This paper describes a survey designed to determine the information seeking behavior of graduate students at the University of Macedonia (UoM). The survey is a continuation of a previous one undertaken in the Faculties of Philosophy and Engineering at the Aristotle University of Thessaloniki (AUTh). This paper primarily presents results from the UoM survey, but also makes comparisons with the findings from the earlier survey at AUTh. The 254 UoM students responding tend to use the simplest information search techniques with no critical variations between different disciplines. Their information seeking behavior seems to be influenced by their search experience, computer and web experience, perceived ability and frequency of use of esources, and not by specific personal characteristics or attendance at library instruction programs. Graduate students of both universities similar information seeking preferences, with the UoM students using more sophisticated techniques, such as Boolean search and truncation, more often than the AUTh students.", "title": "" }, { "docid": "247eb1c32cf3fd2e7a925d54cb5735da", "text": "Several applications in machine learning and machine-to-human interactions tolerate small deviations in their computations. Digital systems can exploit this fault-tolerance to increase their energy-efficiency, which is crucial in embedded applications. Hence, this paper introduces a new means of Approximate Computing: Dynamic-Voltage-Accuracy-Frequency-Scaling (DVAFS), a circuit-level technique enabling a dynamic trade-off of energy versus computational accuracy that outperforms other Approximate Computing techniques. The usage and applicability of DVAFS is illustrated in the context of Deep Neural Networks, the current state-of-the-art in advanced recognition. These networks are typically executed on CPU's or GPU's due to their high computational complexity, making their deployment on battery-constrained platforms only possible through wireless connections with the cloud. This work shows how deep learning can be brought to IoT devices by running every layer of the network at its optimal computational accuracy. Finally, we demonstrate a DVAFS processor for Convolutional Neural Networks, achieving efficiencies of multiple TOPS/W.", "title": "" }, { "docid": "d1a4abaa57f978858edf0d7b7dc506ba", "text": "Abstraction in imagery results from the strategic simplification and elimination of detail to clarify the visual structure of the depicted shape. It is a mainstay of artistic practice and an important ingredient of effective visual communication. We develop a computational method for the abstract depiction of 2D shapes. Our approach works by organizing the shape into parts using a new synthesis of holistic features of the part shape, local features of the shape boundary, and global aspects of shape organization. Our abstractions are new shapes with fewer and clearer parts.", "title": "" }, { "docid": "932b189b21703a4c50399f27395f37a6", "text": "An ultra-low power wake-up receiver for body channel communication (BCC) is implemented in 0.13 μm CMOS process. The proposed wake-up receiver uses the injection-locking ring-oscillator (ILRO) to replace the RF amplifier with low power consumption. 
Through the ILRO, the frequency modulated input signal is converted to the full swing rectangular signal which is directly demodulated by the following low power PLL based FSK demodulator. In addition, the relaxed sensitivity and selectivity requirement by the good channel quality of the BCC reduces the power consumption of the receiver. As a result, the proposed wake-up receiver achieves a sensitivity of -55.2 dbm at a data rate of 200 kbps while consuming only 39 μW from the 0.7 V supply.", "title": "" }, { "docid": "ba94bc5f5762017aed0c307ce89c0558", "text": "Carsharing has emerged as an alternative to vehicle ownership and is a rapidly expanding global market. Particularly through the flexibility of free-floating models, car sharing complements public transport since customers do not need to return cars to specific stations. We present a novel data analytics approach that provides decision support to car sharing operators -- from local start-ups to global players -- in maneuvering this constantly growing and changing market environment. Using a large set of rental data, as well as zero-inflated and geographically weighted regression models, we derive indicators for the attractiveness of certain areas based on points of interest in their vicinity. These indicators are valuable for a variety of operational and strategic decisions. As a demonstration project, we present a case study of Berlin, where the indicators are used to identify promising regions for business area expansion.", "title": "" }, { "docid": "e6640dc272e4142a2ddad8291cfaead7", "text": "We give a summary of R. Borcherds’ solution (with some modifications) to the following part of the Conway-Norton conjectures: Given the Monster M and Frenkel-Lepowsky-Meurman’s moonshine module V ♮, prove the equality between the graded characters of the elements of M acting on V ♮ (i.e., the McKay-Thompson series for V ♮) and the modular functions provided by Conway and Norton. The equality is established using the homology of a certain subalgebra of the monster Lie algebra, and the Euler-Poincaré identity.", "title": "" }, { "docid": "3af1e6d82d1c70a2602d52f47ddce665", "text": "Birds have a smaller repertoire of immune genes than mammals. In our efforts to study antiviral responses to influenza in avian hosts, we have noted key genes that appear to be missing. As a result, we speculate that birds have impaired detection of viruses and intracellular pathogens. Birds are missing TLR8, a detector for single-stranded RNA. Chickens also lack RIG-I, the intracellular detector for single-stranded viral RNA. Riplet, an activator for RIG-I, is also missing in chickens. IRF3, the nuclear activator of interferon-beta in the RIG-I pathway is missing in birds. Downstream of interferon (IFN) signaling, some of the antiviral effectors are missing, including ISG15, and ISG54 and ISG56 (IFITs). Birds have only three antibody isotypes and IgD is missing. Ducks, but not chickens, make an unusual truncated IgY antibody that is missing the Fc fragment. Chickens have an expanded family of LILR leukocyte receptor genes, called CHIR genes, with hundreds of members, including several that encode IgY Fc receptors. Intriguingly, LILR homologues appear to be missing in ducks, including these IgY Fc receptors. The truncated IgY in ducks, and the duplicated IgY receptor genes in chickens may both have resulted from selective pressure by a pathogen on IgY FcR interactions. 
Birds have a minimal MHC, and the TAP transport and presentation of peptides on MHC class I is constrained, limiting function. Perhaps removing some constraint, ducks appear to lack tapasin, a chaperone involved in loading peptides on MHC class I. Finally, the absence of lymphotoxin-alpha and beta may account for the observed lack of lymph nodes in birds. As illustrated by these examples, the picture that emerges is some impairment of immune response to viruses in birds, either a cause or consequence of the host-pathogen arms race and long evolutionary relationship of birds and RNA viruses.", "title": "" }, { "docid": "de408de1915d43c4db35702b403d0602", "text": "Real-time population health assessment and monitoring: The fragmented nature of population health information is a barrier to public health practice. Despite repeated demands by policymakers, administrators, and practitioners to develop information systems that provide a coherent view of population health status, there has been limited progress toward developing such an infrastructure. We are creating an informatics platform for describing and monitoring the health status of a defined population by integrating multiple clinical and administrative data sources. This infrastructure, which involves a population health record, is designed to enable development of detailed portraits of population health, facilitate monitoring of population health indicators, enable evaluation of interventions, and provide clinicians and patients with population context to assist diagnostic and therapeutic decision-making. In addition to supporting public health professionals, clinicians, and the public, we are designing the infrastructure to provide a platform for public health informatics research. This early report presents the requirements and architecture for the infrastructure and describes the initial implementation of the population health record, focusing on indicators of chronic diseases related to obesity.", "title": "" }, { "docid": "cd1af39ff72f2ff36708ed0bf820fb95", "text": "Classifying semantic relations between entity pairs in sentences is an important task in Natural Language Processing (NLP). Most previous models for relation classification rely on the high-level lexical and syntactic features obtained by NLP tools such as WordNet, dependency parser, part-of-speech (POS) tagger, and named entity recognizers (NER). In addition, state-of-the-art neural models based on attention mechanisms do not fully utilize entity information that may be the most crucial features for relation classification. To address these issues, we propose a novel end-to-end recurrent neural model which incorporates an entity-aware attention mechanism with a latent entity typing (LET) method. Our model not only utilizes entities and their latent types as features effectively but also is more interpretable by visualizing attention mechanisms applied to our model and results of LET.
Experimental results on the SemEval-2010 Task 8, one of the most popular relation classification task, demonstrate that our model outperforms existing state-ofthe-art models without any high-level features.", "title": "" }, { "docid": "77f83ada0854e34ac60c725c21671434", "text": "OBJECTIVES\nThis subanalysis of the TNT (Treating to New Targets) study investigates the effects of intensive lipid lowering with atorvastatin in patients with coronary heart disease (CHD) with and without pre-existing chronic kidney disease (CKD).\n\n\nBACKGROUND\nCardiovascular disease is a major cause of morbidity and mortality in patients with CKD.\n\n\nMETHODS\nA total of 10,001 patients with CHD were randomized to double-blind therapy with atorvastatin 80 mg/day or 10 mg/day. Patients with CKD were identified at baseline on the basis of an estimated glomerular filtration rate (eGFR) <60 ml/min/1.73 m(2) using the Modification of Diet in Renal Disease equation. The primary efficacy outcome was time to first major cardiovascular event.\n\n\nRESULTS\nOf 9,656 patients with complete renal data, 3,107 had CKD at baseline and demonstrated greater cardiovascular comorbidity than those with normal eGFR (n = 6,549). After a median follow-up of 5.0 years, 351 patients with CKD (11.3%) experienced a major cardiovascular event, compared with 561 patients with normal eGFR (8.6%) (hazard ratio [HR] = 1.35; 95% confidence interval [CI] 1.18 to 1.54; p < 0.0001). Compared with atorvastatin 10 mg, atorvastatin 80 mg reduced the relative risk of major cardiovascular events by 32% in patients with CKD (HR = 0.68; 95% CI 0.55 to 0.84; p = 0.0003) and 15% in patients with normal eGFR (HR = 0.85; 95% CI 0.72 to 1.00; p = 0.049). Both doses of atorvastatin were well tolerated in patients with CKD.\n\n\nCONCLUSIONS\nAggressive lipid lowering with atorvastatin 80 mg was both safe and effective in reducing the excess of cardiovascular events in a high-risk population with CKD and CHD.", "title": "" }, { "docid": "d3c8903fed280246ea7cb473ee87c0e7", "text": "Reaction time has a been a favorite subject of experimental psychologists since the middle of the nineteenth century. However, most studies ask questions about the organization of the brain, so the authors spend a lot of time trying to determine if the results conform to some mathematical model of brain activity. This makes these papers hard to understand for the beginning student. In this review, I have ignored these brain organization questions and summarized the major literature conclusions that are applicable to undergraduate laboratories using my Reaction Time software. I hope this review helps you write a good report on your reaction time experiment. I also apologize to reaction time researchers for omissions and oversimplifications.", "title": "" }, { "docid": "40a181cc018d3050e41fe9e2659acd0a", "text": "Efforts to adapt and extend graphic arts printing techniques for demanding device applications in electronics, biotechnology and microelectromechanical systems have grown rapidly in recent years. Here, we describe the use of electrohydrodynamically induced fluid flows through fine microcapillary nozzles for jet printing of patterns and functional devices with submicrometre resolution. Key aspects of the physics of this approach, which has some features in common with related but comparatively low-resolution techniques for graphic arts, are revealed through direct high-speed imaging of the droplet formation processes. 
Printing of complex patterns of inks, ranging from insulating and conducting polymers, to solution suspensions of silicon nanoparticles and rods, to single-walled carbon nanotubes, using integrated computer-controlled printer systems illustrates some of the capabilities. High-resolution printed metal interconnects, electrodes and probing pads for representative circuit patterns and functional transistors with critical dimensions as small as 1 mum demonstrate potential applications in printed electronics.", "title": "" }, { "docid": "b0532d77781257c80024926c836f14e1", "text": "Various levels of automation can be introduced by intelligent decision support systems, from fully automated, where the operator is completely left out of the decision process, to minimal levels of automation, where the automation only makes recommendations and the operator has the final say. For rigid tasks that require no flexibility in decision-making and with a low probability of system failure, higher levels of automation often provide the best solution. However, in time critical environments with many external and changing constraints such as air traffic control and military command and control operations, higher levels of automation are not advisable because of the risks and the complexity of both the system and the inability of the automated decision aid to be perfectly reliable. Human-inthe-loop designs, which employ automation for redundant, manual, and monotonous tasks and allow operators active participation, provide not only safety benefits, but also allow a human operator and a system to respond more flexibly to uncertain and unexpected events. However, there can be measurable costs to human performance when automation is used, such as loss of situational awareness, complacency, skill degradation, and automation bias. This paper will discuss the influence of automation bias in intelligent decision support systems, particularly those in aviation domains. Automation bias occurs in decision-making because humans have a tendency to disregard or not search for contradictory information in light of a computer-generated solution that is accepted as correct and can be exacerbated in time critical domains. Automated decision aids are designed to reduce human error but actually can cause new errors in the operation of a system if not designed with human cognitive limitations in mind.", "title": "" } ]
scidocsrr
950344250abe2b91d045e3f7e3bff252
eXpose: A Character-Level Convolutional Neural Network with Embeddings For Detecting Malicious URLs, File Paths and Registry Keys
[ { "docid": "2657bb2a6b2fb59714417aa9e6c6c5eb", "text": "Mash extends the MinHash dimensionality-reduction technique to include a pairwise mutation distance and P value significance test, enabling the efficient clustering and search of massive sequence collections. Mash reduces large sequences and sequence sets to small, representative sketches, from which global mutation distances can be rapidly estimated. We demonstrate several use cases, including the clustering of all 54,118 NCBI RefSeq genomes in 33 CPU h; real-time database search using assembled or unassembled Illumina, Pacific Biosciences, and Oxford Nanopore data; and the scalable clustering of hundreds of metagenomic samples by composition. Mash is freely released under a BSD license ( https://github.com/marbl/mash ).", "title": "" } ]
[ { "docid": "c55cab85bc7f1903e4355168e6e4e07b", "text": "Objectives: Several quantitative studies have now examined the relationship between quality of life (QoL) and bipolar disorder (BD) and have generally indicated that QoL is markedly impaired in patients with BD. However, little qualitative research has been conducted to better describe patients’ own experiences of how BD impacts upon life quality. We report here on a series of in-depth qualitative interviews we conducted as part of the item generation phase for a disease-specific scale to assess QoL in BD. Methods: We conducted 52 interviews with people with BD (n=35), their caregivers (n=5) and healthcare professionals (n=12) identified by both convenience and purposive sampling. Clinical characteristics of the affected sample ranged widely between individuals who had been clinically stable for several years through to inpatients who were recovering from a severe episode of depression or mania. Interviews were tape recorded, transcribed verbatim and analyzed thematically. Results: Although several interwoven themes emerged from the data, we chose to focus on 6 for the purposes of this paper: routine, independence, stigma and disclosure, identity, social support and spirituality. When asked to prioritize the areas they thought were most important in determining QoL, the majority of participants ranked social support as most important, followed by mental health. Conclusions: Findings indicate that there is a complex, multifaceted relationship between BD and QoL. Most of the affected individuals we interviewed reported that BD had a profoundly negative effect upon their life quality, particularly in the areas of education, vocation, financial functioning, and social and intimate relationships. However, some people also reported that having BD opened up new doors of opportunity.", "title": "" }, { "docid": "9e3562c5d4baf6be3293486383e62b3e", "text": "Many philosophical and contemplative traditions teach that \"living in the moment\" increases happiness. However, the default mode of humans appears to be that of mind-wandering, which correlates with unhappiness, and with activation in a network of brain areas associated with self-referential processing. We investigated brain activity in experienced meditators and matched meditation-naive controls as they performed several different meditations (Concentration, Loving-Kindness, Choiceless Awareness). We found that the main nodes of the default-mode network (medial prefrontal and posterior cingulate cortices) were relatively deactivated in experienced meditators across all meditation types. Furthermore, functional connectivity analysis revealed stronger coupling in experienced meditators between the posterior cingulate, dorsal anterior cingulate, and dorsolateral prefrontal cortices (regions previously implicated in self-monitoring and cognitive control), both at baseline and during meditation. Our findings demonstrate differences in the default-mode network that are consistent with decreased mind-wandering. As such, these provide a unique understanding of possible neural mechanisms of meditation.", "title": "" }, { "docid": "dc4abae418c9df783d78f508cdc2187a", "text": "Biological sensors are becoming more important to monitor the quality of the aquatic environment. In this paper the valve movement response of freshwater (Dreissena polymorpha) and marine (Mytilus edulis) mussels is presented as a tool in monitoring studies. 
Examples of various methods for data storage and data treatment are presented, elucidating easier operation and lower detection limits. Several applications are mentioned, including an early warning system based on this valve movement response of mussels.", "title": "" }, { "docid": "8ffc37aeacd3136d3a5801f87a3140df", "text": "Syndromic surveillance detects and monitors individual and population health indicators through sources such as emergency department records. Automated classification of these records can improve outbreak detection speed and diagnosis accuracy. Current syndromic systems rely on hand-coded keyword-based methods to parse written fields and may benefit from the use of modern supervised-learning classifier models. In this paper we implement two recurrent neural network models based on long short-term memory (LSTM) and gated recurrent unit (GRU) cells and compare them to two traditional bag-of-words classifiers: multinomial naïve Bayes (MNB) and a support vector machine (SVM). The MNB classifier is one of only two machine learning algorithms currently being used for syndromic surveillance. All four models are trained to predict diagnostic code groups as defined by Clinical Classification Software, first to predict from discharge diagnosis, then from chief complaint fields. The classifiers are trained on 3.6 million de-identified emergency department records from a single United States jurisdiction. We compare performance of these models primarily using the F1 score. We measure absolute model performance to determine which conditions are the most amenable to surveillance based on chief complaint alone. Using discharge diagnoses The LSTM classifier performs best, though all models exhibit an F1 score above 96.00. The GRU performs best on chief complaints (F1=47.38), and MNB with bigrams performs worst (F1=39.40). Certain syndrome types are easier to detect than others. For examples, chief complaints using the GRU model predicts alcohol-related disorders well (F1=78.91) but predicts influenza poorly (F1=14.80). In all instances the RNN models outperformed the bag-of-word classifiers suggesting deep learning models could substantially improve the automatic classification of unstructured text for syndromic surveillance. INTRODUCTION Syndromic surveillance—detection and monitoring individual and population health indicators that are discernible before confirmed diagnoses are made (Mandl et al.2004)—can draw from many data sources. Electronic health records of emergency department encounters, especially the free-text chief complaint field, are a common focus for syndromic surveillance (Yoon, Ising, & Gunn 2017). In practice, a computer algorithm associates the text of the chief complaint field with predefined syndromes, often by picking out keywords or parts of keywords or a machine learning algorithm based on mathematical representation of the chief complaint text. In this paper, we explore recurrent neural networks as an alternative to existing methods for associating chief complaint text with syndromes. 
Overview of Chief Complaint Classifiers In a recent overview of chief complaint classifiers (Conway et al., 2013), the authors divide chief complaint classifiers into 3 categories: keyword-based classifiers, linguistic classifiers, and statistical classifiers.", "title": "" }, { "docid": "c953895c57d8906736352698a55c24a9", "text": "Data scientists and physicians are starting to use artificial intelligence (AI) even in the medical field in order to better understand the relationships among the huge amount of data coming from the great number of sources today available. Through the data interpretation methods made available by the recent AI tools, researchers and AI companies have focused on the development of models allowing to predict the risk of suffering from a specific disease, to make a diagnosis, and to recommend a treatment that is based on the best and most updated scientific evidence. Even if AI is used to perform unimaginable tasks until a few years ago, the awareness about the ongoing revolution has not yet spread through the medical community for several reasons including the lack of evidence about safety, reliability and effectiveness of these tools, the lack of regulation accompanying hospitals in the use of AI by health care providers, the difficult attribution of liability in case of errors and malfunctions of these systems, and the ethical and privacy questions that they raise and that, as of today, are still unanswered.", "title": "" }, { "docid": "982d7d2d65cddba4fa7dac3c2c920790", "text": "In this paper, we present our multichannel neural architecture for recognizing emerging named entity in social media messages, which we applied in the Novel and Emerging Named Entity Recognition shared task at the EMNLP 2017 Workshop on Noisy User-generated Text (W-NUT). We propose a novel approach, which incorporates comprehensive word representations with multichannel information and Conditional Random Fields (CRF) into a traditional Bidirectional Long Short-Term Memory (BiLSTM) neural network without using any additional hand-crafted features such as gazetteers. In comparison with other systems participating in the shared task, our system won the 3rd place in terms of the average of two evaluation metrics.", "title": "" }, { "docid": "f741eb8ca9fb9798fb89674a0e045de9", "text": "We investigate the issue of model uncertainty in cross-country growth regressions using Bayesian Model Averaging (BMA). We find that the posterior probability is very spread among many models suggesting the superiority of BMA over choosing any single model. Out-of-sample predictive results support this claim. In contrast with Levine and Renelt (1992), our results broadly support the more “optimistic” conclusion of Sala-i-Martin (1997b), namely that some variables are important regressors for explaining cross-country growth patterns. However, care should be taken in the methodology employed. The approach proposed here is firmly grounded in statistical theory and immediately leads to posterior and predictive inference.", "title": "" }, { "docid": "03f913234dc6d41aada7ce3fe8de1203", "text": "Epicanthoplasty is commonly performed on Asian eyelids. Consequently, overcorrection may appear. The aim of this study was to introduce a method of reconstructing the epicanthal fold and to apply this method to the patients. A V flap with an extension (eagle beak shaped) was designed on the medial canthal area. 
The upper incision line started near the medial end of the double-fold line, and it followed its curvature inferomedially. For the lower incision, starting at the tip (medial end) of the flap, a curvilinear incision was designed first diagonally and then horizontally along the lower blepharoplasty line. The V flap was elevated as thin as possible. Then, the upper flap was deeply undermined to make it thick. The lower flap was made a little thinner than the upper flap. Then, the upper and lower flaps were approximated to form the anteromedial surface of the epicanthal fold in a fashion sufficient to cover the red caruncle. The V flap was rotated inferolaterally over the caruncle. The tip of the V flap was sutured to the medial one-third point of the lower margin. The inferior border of the V flap and the residual lower margin were approximated. Thereafter, the posterolateral surface of the epicanthal fold was made. From 1999 to 2011, 246 patients were operated on using this method. Among them, 62 patients were followed up. The mean intercanthal distance was increased from 31.7 to 33.8 mm postoperatively. Among the 246 patients operated on, reoperation was performed for 6 patients. Among the 6 patients reoperated on, 3 cases were due to epicanthus inversus, 1 case was due to insufficient reconstruction, 1 case was due to making an infold, and 1 case was due to reopening the epicanthal fold.This V-Y and rotation flap can be a useful method for reconstruction of the epicanthal fold.", "title": "" }, { "docid": "afbd52acb39600e8a0804f2140ebf4fc", "text": "This paper presents the case study of a non-intrusive porting of a monolithic C++ library for real-time 3D hand tracking, to the domain of edge-based computation. Towards a proof of concept, the case study considers a pair of workstations, a computationally powerful and a computationallyweak one. Bywrapping the C++ library in Java container and by capitalizing on a Java-based offloading infrastructure that supports both CPU and GPGPU computations, we are able to establish automatically the required serverclient workflow that best addresses the resource allocation problem in the effort to execute from the weak workstation. As a result, the weak workstation can perform well at the task, despite lacking the sufficient hardware to do the required computations locally. This is achieved by offloading computations which rely on GPGPU, to the powerful workstation, across the network that connects them. We show the edge-based computation challenges associated with the information flow of the ported algorithm, demonstrate how we cope with them, and identify what needs to be improved for achieving even better performance.", "title": "" }, { "docid": "30a6a3df784c2a8cc69a1bd75ad1998b", "text": "Traditional stock market prediction approaches commonly utilize the historical price-related data of the stocks to forecast their future trends. As the Web information grows, recently some works try to explore financial news to improve the prediction. Effective indicators, e.g., the events related to the stocks and the people’s sentiments towards the market and stocks, have been proved to play important roles in the stocks’ volatility, and are extracted to feed into the prediction models for improving the prediction accuracy. 
However, a major limitation of previous methods is that the indicators are obtained from only a single source whose reliability might be low, or from several data sources but their interactions and correlations among the multi-sourced data are largely ignored. In this work, we extract the events from Web news and the users' sentiments from social media, and investigate their joint impacts on the stock price movements via a coupled matrix and tensor factorization framework. Specifically, a tensor is firstly constructed to fuse heterogeneous data and capture the intrinsic relations among the events and the investors' sentiments. Due to the sparsity of the tensor, two auxiliary matrices, the stock quantitative feature matrix and the stock correlation matrix, are constructed and incorporated to assist the tensor decomposition. The intuition behind this is that stocks that are highly correlated with each other tend to be affected by the same event. Thus, instead of conducting each stock prediction task separately and independently, we predict multiple correlated stocks simultaneously through their commonalities, which are enabled via sharing the collaboratively factorized low-rank matrices between the matrices and the tensor. Evaluations on the China A-share stock data and the HK stock data in the year 2015 demonstrate the effectiveness of the proposed model.", "title": "" }, { "docid": "9a05c95de1484df50a5540b31df1a010", "text": "Abstract. This work deals with a remote monitoring system based on a smart display for temperature and current sensors using a hybrid CAN-Zigbee network. The CAN bus is used as the short-distance data transmission medium, while Zigbee is employed so that each node of the network can interact wirelessly with the main node. In this way, the hybrid network combines the advantages of each communication protocol to exchange data. The system consists of four nodes: two are CAN nodes that receive the information from the sensors, and the remaining two are Zigbee nodes. These nodes are in charge of transmitting the information from a CAN node wirelessly and displaying it on a smart display.", "title": "" }, { "docid": "4a4a868d64a653fac864b5a7a531f404", "text": "Metropolitan areas have come under intense pressure to respond to federal mandates to link planning of land use, transportation, and environmental quality; and from citizen concerns about managing the side effects of growth such as sprawl, congestion, housing affordability, and loss of open space. The planning models used by Metropolitan Planning Organizations (MPOs) were generally not designed to address these questions, creating a gap in the ability of planners to systematically assess these issues. UrbanSim is a new model system that has been developed to respond to these emerging requirements, and has now been applied in three metropolitan areas. This paper describes the model system and its application to Eugene-Springfield, Oregon.", "title": "" }, { "docid": "c77fad43abe34ecb0a451a3b0b5d684e", "text": "Search engine click logs provide an invaluable source of relevance information, but this information is biased.
A key source of bias is presentation order: the probability of click is influenced by a document's position in the results page. This paper focuses on explaining that bias, modelling how probability of click depends on position. We propose four simple hypotheses about how position bias might arise. We carry out a large data-gathering effort, where we perturb the ranking of a major search engine, to see how clicks are affected. We then explore which of the four hypotheses best explains the real-world position effects, and compare these to a simple logistic regression model. The data are not well explained by simple position models, where some users click indiscriminately on rank 1 or there is a simple decay of attention over ranks. A 'cascade' model, where users view results from top to bottom and leave as soon as they see a worthwhile document, is our best explanation for position bias in early ranks", "title": "" }, { "docid": "9a217426c46fbbb3065f141a5d70cb6b", "text": "BACKGROUND & AIMS\nAnti-tumor necrosis factors (anti-TNF) including infliximab, adalimumab and certolizumab pegol are used to treat Crohn's disease (CD) and ulcerative colitis (UC). Paradoxically, while also indicated for the treatment of psoriasis, anti-TNF therapy has been associated with development of psoriasiform lesions in IBD patients and can compel discontinuation of therapy. We aim to investigate IBD patient, clinical characteristics, and frequency for the development of and outcomes associated with anti-TNF induced psoriasiform rash.\n\n\nMETHODS\nWe identify IBD patients on anti-TNFs with an onset of a psoriasiform rash. Patient characteristics, duration of anti-TNF, concomitant immunosuppressants, lesion distribution, and outcomes of rash are described.\n\n\nRESULTS\nOf 1004 IBD patients with exposure to anti-TNF therapy, 27 patients (2.7%) developed psoriasiform lesions. Psoriasiform rash cases stratified by biologic use were 1.3% for infliximab, 4.1% for adalimumab, and 6.4% for certolizumab. Average time on treatment (206.3 weeks) and time on treatment until onset of psoriasiform lesions (126.9 weeks) was significantly higher in the infliximab group. The adalimumab group had the highest need for treatment discontinuation (60%). The majority (59.3%) of patients were able to maintain on anti-TNFs despite rash onset. Among patients that required discontinuation (40.7%), the majority experienced improvement with a subsequent anti-TNF (66.7%).\n\n\nCONCLUSION\n27 cases of anti-TNF associated psoriasiform lesions are reported. Discontinuation of anti-TNF treatment is unnecessary in the majority. Dermatologic improvement was achieved in the majority with a subsequent anti-TNF, suggesting anti-TNF induced psoriasiform rash is not necessarily a class effect.", "title": "" }, { "docid": "4e55d02fdd8ff4c5739cc433f4f15e9b", "text": "machine, \" a program for automatically generating syntactically correct programs (test cases) for checking compiler front ends. The notion of \" dynamic grammar \" is introduced and is used in a syntax-defining notation that provides for context-sensitivity. Examples demonstrate use of the syntax machine. \" The \" syntax machine \" discussed here automatically generates random test cases for any suitably defined programming language. The test cases it produces are syntactically valid programs. But they are not \" meaningful, \" and if an attempt is made to execute them, the results are unpredictable and uncheckable. For this reason, they are less valuable than handwritten test cases.
However, as an inexhaustible source of new test material, the syntax machine has shown itself to be a valuable tool. In the following sections, we characterize the use of this tool in testing different types of language processors, introduce the concept of \" dynamic grammar \" of a programming language, outline the structure of the system, and show what the syntax machine does by means of some examples. Test cases Test cases for a language processor are programs written following the rules of the language, as documented. The test cases, when processed, should give known results. If this does not happen, then either the processor or its documentation is in error. We can distinguish three categories of language processors and assess the usefulness of the syntax machine for testing them. For an interpreter, the syntax machine test cases are virtually useless,", "title": "" }, { "docid": "69b831bb25e5ad0f18054d533c313b53", "text": "In recent years, indoor positioning has emerged as a critical function in many end-user applications; including military, civilian, disaster relief and peacekeeping missions. In comparison with outdoor environments, sensing location information in indoor environments requires a higher precision and is a more challenging task in part because various objects reflect and disperse signals. Ultra WideBand (UWB) is an emerging technology in the field of indoor positioning that has shown better performance compared to others. In order to set the stage for this work, we provide a survey of the state-of-the-art technologies in indoor positioning, followed by a detailed comparative analysis of UWB positioning technologies. We also provide an analysis of strengths, weaknesses, opportunities, and threats (SWOT) to analyze the present state of UWB positioning technologies. While SWOT is not a quantitative approach, it helps in assessing the real status and in revealing the potential of UWB positioning to effectively address the indoor positioning problem. Unlike previous studies, this paper presents new taxonomies, reviews some major recent advances, and argues for further exploration by the research community of this challenging problem space.", "title": "" }, { "docid": "7bd7b0b85ae68f0ccd82d597667d8acb", "text": "Trust evaluation plays an important role in securing wireless sensor networks (WSNs), which is one of the most popular network technologies for the Internet of Things (IoT). The efficiency of the trust evaluation process is largely governed by the trust derivation, as it dominates the overhead in the process, and performance of WSNs is particularly sensitive to overhead due to the limited bandwidth and power. This paper proposes an energy-aware trust derivation scheme using game theoretic approach, which manages overhead while maintaining adequate security of WSNs. A risk strategy model is first presented to stimulate WSN nodes' cooperation. Then, a game theoretic approach is applied to the trust derivation process to reduce the overhead of the process. We show with the help of simulations that our trust derivation scheme can achieve both intended security and high efficiency suitable for WSN-based IoT networks.", "title": "" }, { "docid": "ca20d27b1e6bfd1f827f967473d8bbdd", "text": "We propose a simple yet effective detector for pedestrian detection. The basic idea is to incorporate common sense and everyday knowledge into the design of simple and computationally efficient features. 
As pedestrians usually appear up-right in image or video data, the problem of pedestrian detection is considerably simpler than general purpose people detection. We therefore employ a statistical model of the up-right human body where the head, the upper body, and the lower body are treated as three distinct components. Our main contribution is to systematically design a pool of rectangular templates that are tailored to this shape model. As we incorporate different kinds of low-level measurements, the resulting multi-modal & multi-channel Haar-like features represent characteristic differences between parts of the human body yet are robust against variations in clothing or environmental settings. Our approach avoids exhaustive searches over all possible configurations of rectangle features and neither relies on random sampling. It thus marks a middle ground among recently published techniques and yields efficient low-dimensional yet highly discriminative features. Experimental results on the INRIA and Caltech pedestrian datasets show that our detector reaches state-of-the-art performance at low computational costs and that our features are robust against occlusions.", "title": "" }, { "docid": "c592a75ae5b607f04bdb383a1a04ccba", "text": "Searching for influential spreaders in complex networks is an issue of great significance for applications across various domains, ranging from the epidemic control, innovation diffusion, viral marketing, social movement to idea propagation. In this paper, we first display some of the most important theoretical models that describe spreading processes, and then discuss the problem of locating both the individual and multiple influential spreaders respectively. Recent approaches in these two topics are presented. For the identification of privileged single spreaders, we summarize several widely used centralities, such as degree, betweenness centrality, PageRank, k-shell, etc. We investigate the empirical diffusion data in a large scale online social community – LiveJournal. With this extensive dataset, we find that various measures can convey very distinct information of nodes. Of all the users in LiveJournal social network, only a small fraction of them involve in spreading. For the spreading processes in LiveJournal, while degree can locate nodes participating in information diffusion with higher probability, k-shell is more effective in finding nodes with large influence. Our results should provide useful information for designing efficient spreading strategies in reality.", "title": "" }, { "docid": "0f3fc1501a5990e6219b13c906c5c9fa", "text": "Many wideband baluns have been presented in the past using coupled lines, pure magnetic coupling or slotlines. Their limitations were set whether in high frequency or low frequency performance. Due to their lumped element bandpass representation, many of them allow just certain bandwidth. The tapered coaxial coil structure allows balun operation beyond 26 GHz and down to the kHz range through partial ferrite filling. The cable losses, cable cut-off frequency, the number of windings, the permeability of the ferrite and the minimum coil diameter limit the bandwidth. The tapering allows resonance free operation through the whole band. Many microwave devices like mixers, power amplifiers, SWR-bridges, antennas, etc. can be made more broadband with this kind of balun. A stepwise approach to the proposed structure is presented and compared to previous balun implementations. 
Finally, a measurement is provided and some implementation possibilities are discussed.", "title": "" } ]
scidocsrr
334964f1a2956ea37f7d8a28d93ab9cf
Insider Threat Prediction Tool: Evaluating the probability of IT misuse
[ { "docid": "f1cb2ce5a32d09383745284cfa838e90", "text": "In the information age, as we have become increasingly dependent upon complex information systems, there has been a focus on the vulnerability of these systems to computer crime and security attacks, exemplified by the work of the President's Commission on Critical Infrastructure Protection. Because of the high-tech nature of these systems and the technological expertise required to develop and maintain them, it is not surprising that overwhelming attention has been devoted by computer security experts to technological vulnerabilities and solutions. Yet, as captured in the title of a 1993 conference sponsored by the Defense Personnel Security Research Center, 2 Computer Crime: A Peopleware Problem, it is people who designed the systems, people who attack the systems, and understanding the psychology of information systems criminals is crucial to protecting those systems. s A Management Information Systems (MIS) professional at a military facility learns she is going to be downsized. She decides to encrypt large parts of the organization's database and hold it hostage. She contacts the systems administrator responsible for the database and offers to decode the data for $10,000 in \" severance pay \" and a promise of no prosecution. He agrees to her terms before consulting with proper authorities. Prosecutors reviewing the case determine that the administrator's deal precludes them from pursuing charges. s A postcard written by an enlisted man is discovered during the arrest of several members of a well-known hacker organization by the FBI. Writing from his military base where he serves as a computer specialist, he has inquired about establishing a relationship with the group. Investigation reveals the enlisted man to be a convicted hacker and former group member who had been offered a choice between prison and enlistment. While performing computer duties for the military, he is caught breaking into local phone systems. s An engineer at an energy processing plant becomes angry with his new supervisor, a non-technical administrator. The engineer's wife is terminally ill, and he is on probation after a series of angry and disruptive episodes at work. After he is sent home, the engineering staff discovers that he has made a series of idiosyncratic modifications to plant controls and safety systems. In response to being confronted about these changes, the engineer decides to withhold the password, threatening the productivity and safety of the plant. s At the regional headquarters of an international energy company, an MIS contractor effectively \" captures …", "title": "" } ]
[ { "docid": "bfd94756f73fc7f9eb81437f5d192ac3", "text": "Technological advances in upper-limb prosthetic design offer dramatically increased possibilities for powered movement. The DEKA Arm system allows users 10 powered degrees of movement. Learning to control these movements by utilizing a set of motions that, in most instances, differ from those used to obtain the desired action prior to amputation is a challenge for users. In the Department of Veterans Affairs \"Study to Optimize the DEKA Arm,\" we attempted to facilitate motor learning by using a virtual reality environment (VRE) program. This VRE program allows users to practice controlling an avatar using the controls designed to operate the DEKA Arm in the real world. In this article, we provide highlights from our experiences implementing VRE in training amputees to use the full DEKA Arm. This article discusses the use of VRE in amputee rehabilitation, describes the VRE system used with the DEKA Arm, describes VRE training, provides qualitative data from a case study of a subject, and provides recommendations for future research and implementation of VRE in amputee rehabilitation. Our experience has led us to believe that training with VRE is particularly valuable for upper-limb amputees who must master a large number of controls and for those amputees who need a structured learning environment because of cognitive deficits.", "title": "" }, { "docid": "700c016add5f44c3fbd560d84b83b290", "text": "This paper describes a novel framework, called I<scp>n</scp>T<scp>ens</scp>L<scp>i</scp> (\"intensely\"), for producing fast single-node implementations of dense tensor-times-matrix multiply (T<scp>tm</scp>) of arbitrary dimension. Whereas conventional implementations of T<scp>tm</scp> rely on explicitly converting the input tensor operand into a matrix---in order to be able to use any available and fast general matrix-matrix multiply (G<scp>emm</scp>) implementation---our framework's strategy is to carry out the T<scp>tm</scp> <i>in-place</i>, avoiding this copy. As the resulting implementations expose tuning parameters, this paper also describes a heuristic empirical model for selecting an optimal configuration based on the T<scp>tm</scp>'s inputs. When compared to widely used single-node T<scp>tm</scp> implementations that are available in the Tensor Toolbox and Cyclops Tensor Framework (C<scp>tf</scp>), In-TensLi's in-place and input-adaptive T<scp>tm</scp> implementations achieve 4× and 13× speedups, showing Gemm-like performance on a variety of input sizes.", "title": "" }, { "docid": "7e99c34beafefdfcf11750e5acfc8ac0", "text": "Emerging technologies offer exciting new ways of using entertainment technology to create fantastic play experiences and foster interactions between players. Evaluating entertainment technology is challenging because success isn’ t defined in terms of productivity and performance, but in terms of enjoyment and interaction. Current subjective methods of evaluating entertainment technology aren’ t sufficiently robust. This paper describes two experiments designed to test the efficacy of physiological measures as evaluators of user experience with entertainment technologies. We found evidence that there is a different physiological response in the body when playing against a computer versus playing against a friend. These physiological results are mirrored in the subjective reports provided by the participants. 
In addition, we provide guidelines for collecting physiological data for user experience analysis, which were informed by our empirical investigations. This research provides an initial step towards using physiological responses to objectively evaluate a user’s experience with entertainment technology.", "title": "" }, { "docid": "4ac083b7e2900eb5cc80efd6022c76c1", "text": "We investigate the problem of reconstructing normals, albedo and lights of Lambertian surfaces in uncalibrated photometric stereo under the perspective projection model. Our analysis is based on establishing the integrability constraint. In the orthographic projection case, it is well-known that when such constraint is imposed, a solution can be identified only up to 3 parameters, the so-called generalized bas-relief (GBR) ambiguity. We show that in the perspective projection case the solution is unique. We also propose a closed-form solution which is simple, efficient and robust. We test our algorithm on synthetic data and publicly available real data. Our quantitative tests show that our method outperforms all prior work of uncalibrated photometric stereo under orthographic projection.", "title": "" }, { "docid": "d495f9ae71492df9225249147563a3d9", "text": "The control of a PWM rectifier with LCL-filter using a minimum number of sensors is analyzed. In addition to the DC-link voltage either the converter or line current is measured. Two different ways of current control are shown, analyzed and compared by simulations as well as experimental investigations. Main focus is spent on active damping of the LCL filter resonance and on robustness against line inductance variations.", "title": "" }, { "docid": "509731f3ae004c797c25add85faf6939", "text": "Based on the real data of a Chinese commercial bank’s credit card, in this paper, we classify the credit card customers into four classifications by K-means. Then we built forecasting models separately based on four data mining methods such as C5.0, neural network, chi-squared automatic interaction detector, and classification and regression tree according to the background information of the credit cards holders. Conclusively, we obtain some useful information of decision tree regulation by the best model among the four. The information is not only helpful for the bank to understand related characteristics of different customers, but also marketing representatives to find potential customers and to implement target marketing.", "title": "" }, { "docid": "6b0b505c9ec2686c775b9af353d3287b", "text": "OBJECTIVE\nTo determine the prevalence of additional injuries or bleeding disorders in a large population of young infants evaluated for abuse because of apparently isolated bruising.\n\n\nSTUDY DESIGN\nThis was a prospectively planned secondary analysis of an observational study of children<10 years (120 months) of age evaluated for possible physical abuse by 20 US child abuse teams. This analysis included infants<6 months of age with apparently isolated bruising who underwent diagnostic testing for additional injuries or bleeding disorders.\n\n\nRESULTS\nAmong 2890 children, 33.9% (980/2890) were <6 months old, and 25.9% (254/980) of these had bruises identified. Within this group, 57.5% (146/254) had apparently isolated bruises at presentation. Skeletal surveys identified new injury in 23.3% (34/146), neuroimaging identified new injury in 27.4% (40/146), and abdominal injury was identified in 2.7% (4/146). Overall, 50% (73/146) had at least one additional serious injury. 
Although testing for bleeding disorders was performed in 70.5% (103/146), no bleeding disorders were identified. Ultimately, 50% (73/146) had a high perceived likelihood of abuse.\n\n\nCONCLUSIONS\nInfants younger than 6 months of age with bruising prompting subspecialty consultation for abuse have a high risk of additional serious injuries. Routine medical evaluation for young infants with bruises and concern for physical abuse should include physical examination, skeletal survey, neuroimaging, and abdominal injury screening.", "title": "" }, { "docid": "65192c3b3e3bfe96e187bf391df049b4", "text": "This paper presents a new single-stage singleswitch (S4) high power factor correction (PFC) AC/DC converter suitable for low power applications (< 150 W) with a universal input voltage range (90–265 Vrms). The proposed topology integrates a buck-boost input current shaper followed by a buck and a buck-boost converter, respectively. As a result, the proposed converter can operate with larger duty cycles compared to the exiting S4 topologies; hence, making them suitable for extreme step-down voltage conversion applications. Several desirable features are gained when the three integrated converter cells operate in discontinuous conduction mode (DCM). These features include low semiconductor voltage stress, zero-current switch at turn-on, and simple control with a fast well-regulated output voltage. A detailed circuit analysis is performed to derive the design equations. The theoretical analysis and effectiveness of the proposed approach are confirmed by experimental results obtained from a 35-W/12-Vdc laboratory prototype.", "title": "" }, { "docid": "e46943cc1c73a56093d4194330d52d52", "text": "This paper deals with the compact modeling of an emerging technology: the carbon nanotube field-effect transistor (CNTFET). The paper proposed two design-oriented compact models, the first one for CNTFET with a classical behavior (MOSFET-like CNTFET), and the second one for CNTFET with an ambipolar behavior (Schottky-barrier CNTFET). Both models have been compared with exact numerical simulations and then implemented in VHDL-AMS", "title": "" }, { "docid": "cbe70e9372d1588f075d2037164b3077", "text": "Regularization is one of the crucial ingredients of deep learning, yet the term regularization has various definitions, and regularization methods are often studied separately from each other. In our work we present a systematic, unifying taxonomy to categorize existing methods. We distinguish methods that affect data, network architectures, error terms, regularization terms, and optimization procedures. We do not provide all details about the listed methods; instead, we present an overview of how the methods can be sorted into meaningful categories and sub-categories. This helps revealing links and fundamental similarities between them. Finally, we include practical recommendations both for users and for developers of new regularization methods.", "title": "" }, { "docid": "09fdc74a146a876e44bec1eca1bf7231", "text": "With more and more people around the world learning Chinese as a second language, the need of Chinese error correction tools is increasing. In the HSK dynamic composition corpus, word usage error (WUE) is the most common error type. In this paper, we build a neural network model that considers both target erroneous token and context to generate a correction vector and compare it against a candidate vocabulary to propose suitable corrections. 
To deal with potential alternative corrections, the top five proposed candidates are judged by native Chinese speakers. For more than 91% of the cases, our system can propose at least one acceptable correction within a list of five candidates. To the best of our knowledge, this is the first research addressing general-type Chinese WUE correction. Our system can help non-native Chinese learners revise their sentences by themselves. Title and Abstract in Chinese", "title": "" }, { "docid": "9807eace5f1f89f395fb8dff9dda13ab", "text": "This article provides a new, more comprehensive view of event-related brain dynamics founded on an information-based approach to modeling electroencephalographic (EEG) dynamics. Most EEG research focuses either on peaks 'evoked' in average event-related potentials (ERPs) or on changes 'induced' in the EEG power spectrum by experimental events. Although these measures are nearly complementary, they do not fully model the event-related dynamics in the data, and cannot isolate the signals of the contributing cortical areas. We propose that many ERPs and other EEG features are better viewed as time/frequency perturbations of underlying field potential processes. The new approach combines independent component analysis (ICA), time/frequency analysis, and trial-by-trial visualization that measures EEG source dynamics without requiring an explicit head model.", "title": "" }, { "docid": "63efc2ce1756f64a0328ecb64cb9200b", "text": "Memory analysis has gained popularity in recent years proving to be an effective technique for uncovering malware in compromised computer systems. The process of memory acquisition presents unique evidentiary challenges since many acquisition techniques require code to be run on a potential compromised system, presenting an avenue for anti-forensic subversion. In this paper, we examine a number of simple anti-forensic techniques and test a representative sample of current commercial and free memory acquisition tools. We find that current tools are not resilient to very simple anti-forensic measures. We present a novel memory acquisition technique, based on direct page table manipulation and PCI hardware introspection, without relying on operating system facilities making it more difficult to subvert. We then evaluate this technique’s further vulnerability to subversion by considering more advanced anti-forensic attacks. a 2013 Johannes Stüttgen and Michael Cohen. Published by Elsevier Ltd. All rights", "title": "" }, { "docid": "b9a84b723f946ab8c3dd17ae98b5868a", "text": "For many NLP applications such as Information Extraction and Sentiment Detection, it is of vital importance to distinguish between synonyms and antonyms. While the general assumption is that distributional models are not suitable for this task, we demonstrate that using suitable features, differences in the contexts of synonymous and antonymous German adjective pairs can be identified with a simple word space model. Experimenting with two context settings (a simple windowbased model and a ‘co-disambiguation model’ to approximate adjective sense disambiguation), our best model significantly outperforms the 50% baseline and achieves 70.6% accuracy in a synonym/antonym classification task.", "title": "" }, { "docid": "9581483f301b3522b88f6690b2668217", "text": "AI researchers employ not only the scientific method, but also methodology from mathematics and engineering. 
However, the use of the scientific method – specifically hypothesis testing – in AI is typically conducted in service of engineering objectives. Growing interest in topics such as fairness and algorithmic bias show that engineering-focused questions only comprise a subset of the important questions about AI systems. This results in the AI Knowledge Gap: the number of unique AI systems grows faster than the number of studies that characterize these systems’ behavior. To close this gap, we argue that the study of AI could benefit from the greater inclusion of researchers who are well positioned to formulate and test hypotheses about the behavior of AI systems. We examine the barriers preventing social and behavioral scientists from conducting such studies. Our diagnosis suggests that accelerating the scientific study of AI systems requires new incentives for academia and industry, mediated by new tools and institutions. To address these needs, we propose a two-sided marketplace called TuringBox. On one side, AI contributors upload existing and novel algorithms to be studied scientifically by others. On the other side, AI examiners develop and post machine intelligence tasks designed to evaluate and characterize algorithmic behavior. We discuss this market’s potential to democratize the scientific study of AI behavior, and thus narrow the AI Knowledge Gap. 1 The Many Facets of AI Research Although AI is a sub-discipline of computer science, AI researchers do not exclusively use the scientific method in their work. For example, the methods used by early AI researchers often drew from logic, a subfield of mathematics, and are distinct from the scientific method we think of today. Indeed AI has adopted many techniques and approaches over time. In this section, we distinguish and explore the history of these ∗Equal contribution. methodologies with a particular emphasis on characterizing the evolving science of AI.", "title": "" }, { "docid": "5d15118fcb25368fc662deeb80d4ef28", "text": "A5-GMR-1 is a synchronous stream cipher used to provide confidentiality for communications between satellite phones and satellites. The keystream generator may be considered as a finite state machine, with an internal state of 81 bits. The design is based on four linear feedback shift registers, three of which are irregularly clocked. The keystream generator takes a 64-bit secret key and 19-bit frame number as inputs, and produces an output keystream of length berween 28 and 210 bits.\n Analysis of the initialisation process for the keystream generator reveals serious flaws which significantly reduce the number of distinct keystreams that the generator can produce. Multiple (key, frame number) pairs produce the same keystream, and the relationship between the various pairs is easy to determine. Additionally, many of the keystream sequences produced are phase shifted versions of each other, for very small phase shifts. These features increase the effectiveness of generic time-memory tradeoff attacks on the cipher, making such attacks feasible.", "title": "" }, { "docid": "15e866c21b0739b7a2e24dc8ee5f1833", "text": "Plastics have outgrown most man-made materials and have long been under environmental scrutiny. However, robust global information, particularly about their end-of-life fate, is lacking. 
By identifying and synthesizing dispersed data on production, use, and end-of-life management of polymer resins, synthetic fibers, and additives, we present the first global analysis of all mass-produced plastics ever manufactured. We estimate that 8300 million metric tons (Mt) as of virgin plastics have been produced to date. As of 2015, approximately 6300 Mt of plastic waste had been generated, around 9% of which had been recycled, 12% was incinerated, and 79% was accumulated in landfills or the natural environment. If current production and waste management trends continue, roughly 12,000 Mt of plastic waste will be in landfills or in the natural environment by 2050.", "title": "" }, { "docid": "b5831795da97befd3241b9d7d085a20f", "text": "Want to learn more about the background and concepts of Internet congestion control? This indispensable text draws a sketch of the future in an easily comprehensible fashion. Special attention is placed on explaining the how and why of congestion control mechanisms complex issues so far hardly understood outside the congestion control research community. A chapter on Internet Traffic Management from the perspective of an Internet Service Provider demonstrates how the theory of congestion control impacts on the practicalities of service delivery.", "title": "" }, { "docid": "3357bcf236fdb8077a6848423a334b45", "text": "According to the latest investigation, there are 1.7 million active social network users in Taiwan. Previous researches indicated social network posts have a great impact on users, and mostly, the negative impact is from the rising demands of social support, which further lead to heavier social overload. In this study, we propose social overloaded posts detection model (SODM) by deploying the latest text mining and deep learning techniques to detect the social overloaded posts and, then with the developed social overload prevention system (SOS), the social overload posts and non-social overload ones are rearranged with different sorting methods to prevent readers from excessive demands of social support or social overload. The empirical results show that our SOS helps readers to alleviate social overload when reading via social media.", "title": "" }, { "docid": "58b825902e652cc2ae0bfd867bd4f5d9", "text": "Considers present and future practical applications of cross-reality. From tools to build new 3D virtual worlds to the products of those tools, cross-reality is becoming a staple of our everyday reality. Practical applications of cross-reality include the ability to virtually visit a factory to manage and maintain resources from the comfort of your laptop or desktop PC as well as sentient visors that augment reality with additional information so that users can make more informed choices. Tools and projects considered are:Project Wonderland for multiuser mixed reality;ClearWorlds: mixed- reality presence through virtual clearboards; VICI (Visualization of Immersive and Contextual Information) for ubiquitous augmented reality based on a tangible user interface; Mirror World Chocolate Factory; and sentient visors for browsing the world.", "title": "" } ]
scidocsrr
54d666b4b04de6cb9f79d5cd8fbffff5
"What happens if..." Learning to Predict the Effect of Forces in Images
[ { "docid": "503ccd79172e5b8b3cc3a26cf0d1b485", "text": "The field-of-view of standard cameras is very small, which is one of the main reasons that contextual information is not as useful as it should be for object detection. To overcome this limitation, we advocate the use of 360° full-view panoramas in scene understanding, and propose a whole-room context model in 3D. For an input panorama, our method outputs 3D bounding boxes of the room and all major objects inside, together with their semantic categories. Our method generates 3D hypotheses based on contextual constraints and ranks the hypotheses holistically, combining both bottom-up and top-down context information. To train our model, we construct an annotated panorama dataset and reconstruct the 3D model from single-view using manual annotation. Experiments show that solely based on 3D context without any image-based object detector, we can achieve a comparable performance with the state-of-the-art object detector. This demonstrates that when the FOV is large, context is as powerful as object appearance. All data and source code are available online.", "title": "" } ]
[ { "docid": "9e0de5990eb093698628b8f625a5be6b", "text": "A team of RAND Corporation researchers projected in 2005 that rapid adoption of health information technology (IT) could save the United States more than $81 billion annually. Seven years later the empirical data on the technology's impact on health care efficiency and safety are mixed, and annual health care expenditures in the United States have grown by $800 billion. In our view, the disappointing performance of health IT to date can be largely attributed to several factors: sluggish adoption of health IT systems, coupled with the choice of systems that are neither interoperable nor easy to use; and the failure of health care providers and institutions to reengineer care processes to reap the full benefits of health IT. We believe that the original promise of health IT can be met if the systems are redesigned to address these flaws by creating more-standardized systems that are easier to use, are truly interoperable, and afford patients more access to and control over their health data. Providers must do their part by reengineering care processes to take full advantage of efficiencies offered by health IT, in the context of redesigned payment models that favor value over volume.", "title": "" }, { "docid": "08dbe11a42f7018966c9ca2db5c1fa98", "text": "Person re-identification has important applications in video surveillance. It is particularly challenging because observed pedestrians undergo significant variations across camera views, and there are a large number of pedestrians to be distinguished given small pedestrian images from surveillance videos. This chapter discusses different approaches of improving the key components of a person reidentification system, including feature design, feature learning and metric learning, as well as their strength and weakness. It provides an overview of various person reidentification systems and their evaluation on benchmark datasets. Mutliple benchmark datasets for person re-identification are summarized and discussed. The performance of some state-of-the-art person identification approaches on benchmark datasets is compared and analyzed. It also discusses a few future research directions on improving benchmark datasets, evaluation methodology and system desgin.", "title": "" }, { "docid": "b0b024072e7cde0b404a9be5862ecdd1", "text": "Recent studies have led to the recognition of the epidermal growth factor receptor HER3 as a key player in cancer, and consequently this receptor has gained increased interest as a target for cancer therapy. We have previously generated several Affibody molecules with subnanomolar affinity for the HER3 receptor. Here, we investigate the effects of two of these HER3-specific Affibody molecules, Z05416 and Z05417, on different HER3-overexpressing cancer cell lines. Using flow cytometry and confocal microscopy, the Affibody molecules were shown to bind to HER3 on three different cell lines. Furthermore, the receptor binding of the natural ligand heregulin (HRG) was blocked by addition of Affibody molecules. In addition, both molecules suppressed HRG-induced HER3 and HER2 phosphorylation in MCF-7 cells, as well as HER3 phosphorylation in constantly HER2-activated SKBR-3 cells. Importantly, Western blot analysis also revealed that HRG-induced downstream signalling through the Ras-MAPK pathway as well as the PI3K-Akt pathway was blocked by the Affibody molecules. 
Finally, in an in vitro proliferation assay, the two Affibody molecules demonstrated complete inhibition of HRG-induced cancer cell growth. Taken together, our findings demonstrate that Z05416 and Z05417 exert an anti-proliferative effect on two breast cancer cell lines by inhibiting HRG-induced phosphorylation of HER3, suggesting that the Affibody molecules are promising candidates for future HER3-targeted cancer therapy.", "title": "" }, { "docid": "0f58d491e74620f43df12ba0ec19cda8", "text": "Latent Dirichlet allocation (LDA) (Blei, Ng, Jordan 2003) is a fully generative statistical language model on the content and topics of a corpus of documents. In this paper we apply a modification of LDA, the novel multi-corpus LDA technique for web spam classification. We create a bag-of-words document for every Web site and run LDA both on the corpus of sites labeled as spam and as non-spam. In this way collections of spam and non-spam topics are created in the training phase. In the test phase we take the union of these collections, and an unseen site is deemed spam if its total spam topic probability is above a threshold. As far as we know, this is the first web retrieval application of LDA. We test this method on the UK2007-WEBSPAM corpus, and reach a relative improvement of 11% in F-measure by a logistic regression based combination with strong link and content baseline classifiers.", "title": "" }, { "docid": "570d08da0139a6910423e4a41e76d8b1", "text": "One of the most important application areas of signal processing (SP) is, without a doubt, the software-defined radio (SDR) field [1]-[3]. Although their introduction dates back to the 1980s, SDRs are now becoming the dominant technology in radio communications, thanks to the dramatic development of SP-optimized programmable hardware, such as field-programmable gate arrays (FPGAs) and digital signal processors (DSPs). Today, the computational throughput of these devices is such that sophisticated SP tasks can be efficiently handled, so that both the baseband and intermediate frequency (IF) sections of current communication systems are usually implemented, according to the SDR paradigm, by the FPGA's reconfigurable circuitry (e.g., [4]-[6]), or by the software running on DSPs.", "title": "" }, { "docid": "6970acb72318375a5af6aa03ad634f7e", "text": "BACKGROUND\nMyopia is an important public health problem because it is common and is associated with increased risk for chorioretinal degeneration, retinal detachment, and other vision- threatening abnormalities. In animals, ocular elongation and myopia progression can be lessened with atropine treatment. This study provides information about progression of myopia and atropine therapy for myopia in humans.\n\n\nMETHODS\nA total of 214 residents of Olmsted County, Minnesota (118 girls and 96 boys, median age, 11 years; range 6 to 15 years) received atropine for myopia from 1967 through 1974. Control subjects were matched by age, sex, refractive error, and date of baseline examination to 194 of those receiving atropine. Duration of treatment with atropine ranged from 18 weeks to 11.5 years (median 3.5 years).\n\n\nRESULTS\nMedian followup from initial to last refraction in the atropine group (11.7 years) was similar to that in the control group (12.4 years). Photophobia and blurred vision were frequently reported, but no serious adverse effects were associated with atropine therapy. 
Mean myopia progression during atropine treatment adjusted for age and refractive error (0.05 diopters per year) was significantly less than that among control subjects (0.36 diopters per year)(P<.001). Final refractions standardized to the age of 20 years showed a greater mean level of myopia in the control group (3.78 diopters) than in the atropine group (2.79 diopters) (P<.001).\n\n\nCONCLUSIONS\nThe data support the view that atropine therapy is associated with decreased progression of myopia and that beneficial effects remain after treatment has been discontinued.", "title": "" }, { "docid": "b8c5aa7628cf52fac71b31bb77ccfac0", "text": "Continual learning is the problem of learning new tasks or knowledge while protecting old knowledge and ideally generalizing from old experience to learn new tasks faster. Neural networks trained by stochastic gradient descent often degrade on old tasks when trained successively on new tasks with different data distributions. This phenomenon, referred to as catastrophic forgetting, is considered a major hurdle to learning with non-stationary data or sequences of new tasks, and prevents networks from continually accumulating knowledge and skills. We examine this issue in the context of reinforcement learning, in a setting where an agent is exposed to tasks in a sequence. Unlike most other work, we do not provide an explicit indication to the model of task boundaries, which is the most general circumstance for a learning agent exposed to continuous experience. While various methods to counteract catastrophic forgetting have recently been proposed, we explore a straightforward, general, and seemingly overlooked solution – that of using experience replay buffers for all past events – with a mixture of onand off-policy learning, leveraging behavioral cloning. We show that this strategy can still learn new tasks quickly yet can substantially reduce catastrophic forgetting in both Atari and DMLab domains, even matching the performance of methods that require task identities. When buffer storage is constrained, we confirm that a simple mechanism for randomly discarding data allows a limited size buffer to perform almost as well as an unbounded one.", "title": "" }, { "docid": "3fe30c4d898ec34b83a36efbba8019ff", "text": "Find the secret to improve the quality of life by reading this introduction to pattern recognition statistical structural neural and fuzzy logic approaches. This is a kind of book that you need now. Besides, it can be your favorite book to read after having this book. Do you ask why? Well, this is a book that has different characteristic with others. You may not need to know who the author is, how well-known the work is. As wise word, never judge the words from who speaks, but make the words as your good value to your life.", "title": "" }, { "docid": "5868ec5c17bf7349166ccd0600cc6b07", "text": "Secure devices are often subject to attacks and behavioural analysis in order to inject faults on them and/or extract otherwise secret information. Glitch attacks, sudden changes on the power supply rails, are a common technique used to inject faults on electronic devices. Detectors are designed to catch these attacks. As the detectors become more efficient, new glitches that are harder to detect arise. Common glitch detection approaches, such as directly monitoring the power rails, can potentially find it hard to detect fast glitches, as these become harder to differentiate from noise. 
This paper proposes a design which, instead of monitoring the power rails, monitors the effect of a glitch on a sensitive circuit, hence reducing the risk of detecting noise as glitches.", "title": "" }, { "docid": "644d262f1d2f64805392c15506764558", "text": "In this paper, we present a comprehensive survey of Markov Random Fields (MRFs) in computer vision and image understanding, with respect to the modeling, the inference and the learning. While MRFs were introduced into the computer vision eld about two decades ago, they started to become a ubiquitous tool for solving visual perception problems around the turn of the millennium following the emergence of efficient inference methods. During the past decade, a variety of MRF models as well as inference and learning methods have been developed for addressing numerous low, mid and high-level vision problems. While most of the literature concerns pairwise MRFs, in recent years we have also witnessed signi cant progress in higher-order MRFs, which substantially enhances the expressiveness of graph-based models and expands the domain of solvable problems. This survey provides a compact and informative summary of the major literature in this research topic.", "title": "" }, { "docid": "7aded3885476c7d37228855916255d79", "text": "The web is a rich resource of structured data. There has been an increasing interest in using web structured data for many applications such as data integration, web search and question answering. In this paper, we present DEXTER, a system to find product sites on the web, and detect and extract product specifications from them. Since product specifications exist in multiple product sites, our focused crawler relies on search queries and backlinks to discover product sites. To perform the detection, and handle the high diversity of specifications in terms of content, size and format, our system uses supervised learning to classify HTML fragments (e.g., tables and lists) present in web pages as specifications or not. To perform large-scale extraction of the attribute-value pairs from the HTML fragments identified by the specification detector, DEXTER adopts two lightweight strategies: a domain-independent and unsupervised wrapper method, which relies on the observation that these HTML fragments have very similar structure; and a combination of this strategy with a previous approach, which infers extraction patterns by annotations generated by automatic but noisy annotators. The results show that our crawler strategy to locate product specification pages is effective: (1) it discovered 1.46M product specification pages from 3, 005 sites and 9 different categories; (2) the specification detector obtains high values of F-measure (close to 0.9) over a heterogeneous set of product specifications; and (3) our efficient wrapper methods for attribute-value extraction get very high values of precision (0.92) and recall (0.95) and obtain better results than a state-of-the-art, supervised rule-based wrapper.", "title": "" }, { "docid": "b6ea053b02ebdb3519effdd55a4acf16", "text": "The naive Bayes classifier is an efficient classification model that is easy to learn and has a high accuracy in many domains. However, it has two main drawbacks: (i) its classification accuracy decreases when the attributes are not independent, and (ii) it can not deal with nonparametric continuous attributes. In this work we propose a method that deals with both problems, and learns an optimal naive Bayes classifier. 
The method includes two phases, discretization and structural improvement, which are repeated alternately until the classification accuracy can not be improved. Discretization is based on the minimum description length principle. To deal with dependent and irrelevant attributes, we apply a structural improvement method that eliminates and/or joins attributes, based on mutual and conditional information measures. The method has been tested in two different domains with good results", "title": "" }, { "docid": "eb4f7427eb73ac0a0486e8ecb2172b52", "text": "In this work we propose the use of a modified version of the correlation coefficient as a performance criterion for the image alignment problem. The proposed modification has the desirable characteristic of being invariant with respect to photometric distortions. Since the resulting similarity measure is a nonlinear function of the warp parameters, we develop two iterative schemes for its maximization, one based on the forward additive approach and the second on the inverse compositional method. As it is customary in iterative optimization, in each iteration the nonlinear objective function is approximated by an alternative expression for which the corresponding optimization is simple. In our case we propose an efficient approximation that leads to a closed form solution (per iteration) which is of low computational complexity, the latter property being particularly strong in our inverse version. The proposed schemes are tested against the forward additive Lucas-Kanade and the simultaneous inverse compositional algorithm through simulations. Under noisy conditions and photometric distortions our forward version achieves more accurate alignments and exhibits faster convergence whereas our inverse version has similar performance as the simultaneous inverse compositional algorithm but at a lower computational complexity.", "title": "" }, { "docid": "3d4a112bd166027a526e57f4969b3bd6", "text": "Two acid phosphatases isolated from culturedIpomoea (moring glory) cells were separated by column chromatography on DEAE-cellulose. The two acid phosphatases have different pH optima (pH 4.8–5.0 and 6.0) and do not require the presence of divalent ions. The enzymes possess high activity toward pyrophosphate,p-nitrophenylphosphate, nucleoside di- and triphosphates, and much less activity toward nucleoside monophosphates and sugar esters. The two phosphatases differ from each other in Michaelis constants, in the degree of inhibition by arsenate, fluoride and phosphate and have quantitative differences of substrate specificity. In addition, they also differ in their response to various ions.", "title": "" }, { "docid": "c7e3fc9562a02818bba80d250241511d", "text": "Convolutional networks trained on large supervised dataset produce visual features which form the basis for the state-of-the-art in many computer-vision problems. Further improvements of these visual features will likely require even larger manually labeled data sets, which severely limits the pace at which progress can be made. In this paper, we explore the potential of leveraging massive, weaklylabeled image collections for learning good visual features. We train convolutional networks on a dataset of 100 million Flickr photos and captions, and show that these networks produce features that perform well in a range of vision problems. 
We also show that the networks appropriately capture word similarity, and learn correspondences between different languages.", "title": "" }, { "docid": "db93b1e7b56f0d37c69fce9094b72bc3", "text": "The Man-In-The-Middle (MITM) attack is one of the most well known attacks in computer security, representing one of the biggest concerns for security professionals. MITM targets the actual data that flows between endpoints, and the confidentiality and integrity of the data itself. In this paper, we extensively review the literature on MITM to analyse and categorize the scope of MITM attacks, considering both a reference model, such as the open systems interconnection (OSI) model, as well as two specific widely used network technologies, i.e., GSM and UMTS. In particular, we classify MITM attacks based on several parameters, like location of an attacker in the network, nature of a communication channel, and impersonation techniques. Based on an impersonation techniques classification, we then provide execution steps for each MITM class. We survey existing countermeasures and discuss the comparison among them. Finally, based on our analysis, we propose a categorisation of MITM prevention mechanisms, and we identify some possible directions for future research.", "title": "" }, { "docid": "cab97e23b7aa291709ecf18e29f580cf", "text": "Recent findings show that coding genes are not the only targets that miRNAs interact with. In fact, there is a pool of different RNAs competing with each other to attract miRNAs for interactions, thus acting as competing endogenous RNAs (ceRNAs). The ceRNAs indirectly regulate each other via the titration mechanism, i.e. the increasing concentration of a ceRNA will decrease the number of miRNAs that are available for interacting with other targets. The cross-talks between ceRNAs, i.e. their interactions mediated by miRNAs, have been identified as the drivers in many disease conditions, including cancers. In recent years, some computational methods have emerged for identifying ceRNA-ceRNA interactions. However, there remain great challenges and opportunities for developing computational methods to provide new insights into ceRNA regulatory mechanisms.In this paper, we review the publically available databases of ceRNA-ceRNA interactions and the computational methods for identifying ceRNA-ceRNA interactions (also known as miRNA sponge interactions). We also conduct a comparison study of the methods with a breast cancer dataset. Our aim is to provide a current snapshot of the advances of the computational methods in identifying miRNA sponge interactions and to discuss the remaining challenges.", "title": "" }, { "docid": "e4a1f577cb232f6f76fba149a69db58f", "text": "During software development, the activities of requirements analysis, functional specification, and architectural design all require a team of developers to converge on a common vision of what they are developing. There have been remarkably few studies of conceptual design during real projects. In this paper, we describe a detailed field study of a large industrial software project. We observed the development team's conceptual design activities for three months with follow-up observations and discussions over the following eight months. In this paper, we emphasize the organization of the project and how patterns of collaboration affected the team's convergence on a common vision. 
Three observations stand out: First, convergence on a common vision was not only painfully slow but was punctuated by several reorientations of direction; second, the design process seemed to be inherently forgetful, involving repeated resurfacing of previously discussed issues; finally, a conflict of values persisted between team members responsible for system development and those responsible for overseeing the development process. These findings have clear implications for collaborative support tools and process interventions.", "title": "" }, { "docid": "5b131fbca259f07bd1d84d4f61761903", "text": "We aimed to identify a blood flow restriction (BFR) endurance exercise protocol that would both maximize cardiopulmonary and metabolic strain, and minimize the perception of effort. Twelve healthy males (23 ± 2 years, 75 ± 7 kg) performed five different exercise protocols in randomized order: HI, high-intensity exercise starting at 105% of the incremental peak power (P peak); I-BFR30, intermittent BFR at 30% P peak; C-BFR30, continuous BFR at 30% P peak; CON30, control exercise without BFR at 30% P peak; I-BFR0, intermittent BFR during unloaded exercise. Cardiopulmonary, gastrocnemius oxygenation (StO2), capillary lactate ([La]), and perceived exertion (RPE) were measured. V̇O2, ventilation (V̇ E), heart rate (HR), [La] and RPE were greater in HI than all other protocols. However, muscle StO2 was not different between HI (set1—57.8 ± 5.8; set2—58.1 ± 7.2%) and I-BRF30 (set1—59.4 ± 4.1; set2—60.5 ± 6.6%, p < 0.05). While physiologic responses were mostly similar between I-BFR30 and C-BFR30, [La] was greater in I-BFR30 (4.2 ± 1.1 vs. 2.6 ± 1.1 mmol L−1, p = 0.014) and RPE was less (5.6 ± 2.1 and 7.4 ± 2.6; p = 0.014). I-BFR30 showed similar reduced muscle StO2 compared with HI, and increased blood lactate compared to C-BFR30 exercise. Therefore, this study demonstrate that endurance cycling with intermittent BFR promotes muscle deoxygenation and metabolic strain, which may translate into increased endurance training adaptations while minimizing power output and RPE.", "title": "" } ]
scidocsrr
784d969d17bd5e7794110c0b97f47661
Knowledge Base Population using Stacked Ensembles of Information Extractors
[ { "docid": "3f2312e385fc1c9aafc6f9f08e2e2d4f", "text": "Entity relation detection is a form of information extraction that finds predefined relations between pairs of entities in text. This paper describes a relation detection approach that combines clues from different levels of syntactic processing using kernel methods. Information from three different levels of processing is considered: tokenization, sentence parsing and deep dependency analysis. Each source of information is represented by kernel functions. Then composite kernels are developed to integrate and extend individual kernels so that processing errors occurring at one level can be overcome by information from other levels. We present an evaluation of these methods on the 2004 ACE relation detection task, using Support Vector Machines, and show that each level of syntactic processing contributes useful information for this task. When evaluated on the official test data, our approach produced very competitive ACE value scores. We also compare the SVM with KNN on different kernels.", "title": "" } ]
[ { "docid": "e5ed312b0c3aaa26240a9f3aaa2bd36e", "text": "This paper presents PDF-TREX, an heuristic approach for table recognition and extraction from PDF documents.The heuristics starts from an initial set of basic content elements and aligns and groups them, in bottom-up way by considering only their spatial features, in order to identify tabular arrangements of information. The scope of the approach is to recognize tables contained in PDF documents as a 2-dimensional grid on a Cartesian plane and extract them as a set of cells equipped by 2-dimensional coordinates. Experiments, carried out on a dataset composed of tables contained in documents coming from different domains, shows that the approach is well performing in recognizing table cells.The approach aims at improving PDF document annotation and information extraction by providing an output that can be further processed for understanding table and document contents.", "title": "" }, { "docid": "ba149d78edb0835702fe1584947118d1", "text": "Studies on the adoption of business-to-consumer e-commerce have not simultaneously considered trust and risk as important determinants of adoption behavior. Further, trust in information technology has not been addressed to a great extent in the context of e-commerce. This research explicitly encompasses the electronic channel and the firm as objects to be trusted in e-commerce. Our conceptual model leads us to believe that trust in the electronic channel and perceived risks of e-commerce are the major determinants of the adoption behavior. Based on the social network theory and the trust theory, determinants of trust in the electronic channel are included in the research model. This research is expected to provide both theoretical explanations and empirical validation on the adoption of e-commerce. We will also be able to offer specific recommendations on marketing strategies for practitioners, regarding the adoption of Internet banking.", "title": "" }, { "docid": "ca9a0e293accd5e3c40d9ae58785ce77", "text": "Structural and incremental validity of the Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV; Wechsler, 2008a) was examined with a sample of 300 individuals referred for evaluation at a university-based clinic. Confirmatory factor analysis indicated that the WAIS-IV structure was best represented by 4 first-order factors as well as a general intelligence factor in a direct hierarchical model. The general intelligence factor accounted for the most common and total variance among the subtests. Incremental validity analyses indicated that the Full Scale IQ (FSIQ) generally accounted for medium to large portions of academic achievement variance. For all measures of academic achievement, the first-order factors combined accounted for significant achievement variance beyond that accounted for by the FSIQ, but individual factor index scores contributed trivial amounts of achievement variance. Implications for interpreting WAIS-IV results are discussed.", "title": "" }, { "docid": "00c19e68020aff7fd86aa7e514cc0668", "text": "Network forensic techniques help in tracking different types of cyber attack by monitoring and inspecting network traffic. However, with the high speed and large sizes of current networks, and the sophisticated philosophy of attackers, in particular mimicking normal behaviour and/or erasing traces to avoid detection, investigating such crimes demands intelligent network forensic techniques. 
This paper suggests a real-time collaborative network Forensic scheme (RCNF) that can monitor and investigate cyber intrusions. The scheme includes three components of capturing and storing network data, selecting important network features using chi-square method and investigating abnormal events using a new technique called correntropy-variation. We provide a case study using the UNSW-NB15 dataset for evaluating the scheme, showing its high performance in terms of accuracy and false alarm rate compared with three recent state-of-the-art mechanisms.", "title": "" }, { "docid": "30e22be2c7383e90a6fd16becc34a586", "text": "SUMMARY\nThe etiology of age-related facial changes has many layers. Multiple theories have been presented over the past 50-100 years with an evolution of understanding regarding facial changes related to skin, soft tissue, muscle, and bone. This special topic will provide an overview of the current literature and evidence and theories of facial changes of the skeleton, soft tissues, and skin over time.", "title": "" }, { "docid": "115ed03ccee62fafc1606e6f6fdba1ce", "text": "High voltage SF6 circuit breaker must meet the breaking requirement for large short-circuit current, and ensure absence of breakdown after breaking small current. A 126kV high voltage SF6 circuit breaker was used as the research object in this paper. Based on the calculation results of non-equilibrium arc plasma material parameters, the distribution of pressure, temperature and density were calculated during the breaking progress. The electric field distribution was calculated in the course of flow movement, considering the influence of space charge on dielectric voltage. The change rule of the dielectric recovery progress was given based on the stream theory. The dynamic breakdown test circuit was built to measure the values of breakdown voltage under different open distance. The simulation results and experimental data are analyzed and the results show that: 1) Dielectric recovery speed (175kV/ms) is significantly faster than the voltage recovery rate (37.7kV/ms) during the arc extinguishing process. 2) The shorter the small current arcing time, the smaller the breakdown margin, so it is necessary to keep the arcing time longer than 0.5ms to ensure a large breakdown margin. 3) The calculated results are in good agreement with the experimental results. Since the breakdown voltage is less than the TRV in some test points, restrike may occur within 0.5ms after breaking, so arc extinguishment should be avoid in this time range.", "title": "" }, { "docid": "d9240bad8516bea63f9340bcde366ee4", "text": "This paper describes a novel feature selection algorithm for unsupervised clustering, that combines the clustering ensembles method and the population based incremental learning algorithm. The main idea of the proposed unsupervised feature selection algorithm is to search for a subset of all features such that the clustering algorithm trained on this feature subset can achieve the most similar clustering solution to the one obtained by an ensemble learning algorithm. In particular, a clustering solution is firstly achieved by a clustering ensembles method, then the population based incremental learning algorithm is adopted to find the feature subset that best fits the obtained clustering solution. One advantage of the proposed unsupervised feature selection algorithm is that it is dimensionality-unbiased. 
In addition, the proposed unsupervised feature selection algorithm leverages the consensus across multiple clustering solutions. Experimental results on several real data sets demonstrate that the proposed unsupervised feature selection algorithm is often able to obtain a better feature subset when compared with other existing unsupervised feature selection algorithms.", "title": "" }, { "docid": "6b40e5722506915a18d47e7ab8365059", "text": "Review history is widely used by recommender systems to infer users’ preferences and help find the potential interests from the huge volumes of data, whereas it also brings in great concerns on the sparsity and cold-start problems due to its inadequacy. Psychology and sociology research has shown that emotion information is a strong indicator for users’ preferences. Meanwhile, with the fast development of online services, users are willing to express their emotion on others’ reviews, which makes the emotion information pervasively available. Besides, recent research shows that the number of emotion on reviews is always much larger than the number of reviews. Therefore incorporating emotion on reviews may help to alleviate the data sparsity and cold-start problems for recommender systems. In this paper, we provide a principled and mathematical way to exploit both positive and negative emotion on reviews, and propose a novel framework MIRROR, exploiting eMotIon on Reviews for RecOmmendeR systems from both global and local perspectives. Empirical results on real-world datasets demonstrate the effectiveness of our proposed framework and further experiments are conducted to understand how emotion on reviews works for the proposed framework.", "title": "" }, { "docid": "d6e5f280fc760c2791b80fecd8da2447", "text": "The increased importance of lowering power in memory design has produced a trend of operating memories at lower supply voltages. Recent explorations into sub-threshold operation for logic show that minimum energy operation is possible in this region. These two trends suggest a meeting point for energy-constrained applications in which SRAM operates at sub-threshold voltages compatible with the logic. Since sub-threshold voltages leave less room for large static noise margin (SNM), a thorough understanding of the impact of various design decisions and other parameters becomes critical. This paper analyzes SNM for sub-threshold bitcells in a 65-nm process for its dependency on sizing, VDD, temperature, and local and global threshold variation. The VT variation has the greatest impact on SNM, so we provide a model that allows estimation of the SNM along the worst-case tail of the distribution", "title": "" }, { "docid": "d57119797b3719664eaf1cb29595c693", "text": "Nowadays, CEOs from most of the industries investigate digitalization opportunities. Customer preferences and behavior are driving enterprise technology choices, and internal organizational change is essential to maintain focus on the customer in today's digital world. To the enterprise architect, such initiatives sound like a perfect application of Enterprise Architecture (EA). Despite the ongoing research in academia, the benefits and the role of EA management in digital context are still a topic of lively discussions, and there is a gap in research on how to leverage EA for digital transformation. 
This paper explores the concepts of digital transformation and EA based on selected publications drawn from the literature and discusses the potential role of EA in supporting digital transformation. We also sketch an approach based on the TOGAF framework to position EA as a problem-solving tool for digital transformation initiatives.", "title": "" }, { "docid": "a3ace9ac6ae3f3d2dd7e02bd158a5981", "text": "The problem of combining preferences arises in several applications, such as combining the results of different search engines. This work describes an efficient algorithm for combining multiple preferences. We first give a formal framework for the problem. We then describe and analyze a new boosting algorithm for combining preferences called RankBoost. We also describe an efficient implementation of the algorithm for certain natural cases. We discuss two experiments we carried out to assess the performance of RankBoost. In the first experiment, we used the algorithm to combine different WWW search strategies, each of which is a query expansion for a given domain. For this task, we compare the performance of RankBoost to the individual search strategies. The second experiment is a collaborative-filtering task for making movie recommendations. Here, we present results comparing RankBoost to nearest-neighbor and regression algorithms. Thesis Supervisor: David R. Karger Title: Associate Professor", "title": "" }, { "docid": "10d69148c3a419e4ffe3bf1ca4c7c9d7", "text": "Discovering object classes from images in a fully unsupervised way is an intrinsically ambiguous task; saliency detection approaches however ease the burden on unsupervised learning. We develop an algorithm for simultaneously localizing objects and discovering object classes via bottom-up (saliency-guided) multiple class learning (bMCL), and make the following contributions: (1) saliency detection is adopted to convert unsupervised learning into multiple instance learning, formulated as bottom-up multiple class learning (bMCL); (2) we utilize the Discriminative EM (DiscEM) to solve our bMCL problem and show DiscEM's connection to the MIL-Boost method[34]; (3) localizing objects, discovering object classes, and training object detectors are performed simultaneously in an integrated framework; (4) significant improvements over the existing methods for multi-class object discovery are observed. In addition, we show single class localization as a special case in our bMCL framework and we also demonstrate the advantage of bMCL over purely data-driven saliency methods.", "title": "" }, { "docid": "107c816554054e77748cb62c3a990846", "text": "The focus of this paper is on entrepreneurial development and analysis of Interventionist Agencies in Nigeria. It examines the critical stages or sphere of development required of the entrepreneur in order to enable him perform his strategic functions in the organization and in the context of organizational strategic management in Nigeria. In pursuit of the focus of this paper, it treats numerous issues (an overview inclusive). It also examines the entrepreneurial roles and factors affecting its strategic management importance. Furthermore it x-rays in detail the three-skill approach to entrepreneurial development. These include technical, human and conceptual skills. It analyzes some government interventionist institutions and agencies established to encourage entrepreneurial development in Nigeria. 
The paper posits that though there is a widespread knowledge of the efficacy of entrepreneurial development mix, integrated entrepreneurial development efforts indicates that several of the institutions established by government concentrated on a partial approach to entrepreneurial development programme. Finally, it concludes and recommends four priorities agenda to enhance the entrepreneurial development in Nigeria.", "title": "" }, { "docid": "a6fd8b8506a933a7cc0530c6ccda03a8", "text": "Native ecosystems are continuously being transformed mostly into agricultural lands. Simultaneously, a large proportion of fields are abandoned after some years of use. Without any intervention, altered landscapes usually show a slow reversion to native ecosystems, or to novel ecosystems. One of the main barriers to vegetation regeneration is poor propagule supply. Many restoration programs have already implemented the use of artificial perches in order to increase seed availability in open areas where bird dispersal is limited by the lack of trees. To evaluate the effectiveness of this practice, we performed a series of meta-analyses comparing the use of artificial perches versus control sites without perches. We found that setting-up artificial perches increases the abundance and richness of seeds that arrive in altered areas surrounding native ecosystems. Moreover, density of seedlings is also higher in open areas with artificial perches than in control sites without perches. Taken together, our results support the use of artificial perches to overcome the problem of poor seed availability in degraded fields, promoting and/or accelerating the restoration of vegetation in concordance with the surrounding landscape.", "title": "" }, { "docid": "068e9a6840e365df38c2e66896d2889c", "text": "We investigate security of key exchange protocols supporting so-called zero round-trip time (0-RTT), enabling a client to establish a fresh provisional key without interaction, based only on cryptographic material obtained in previous connections. This key can then be already used to protect early application data, transmitted to the server before both parties interact further to switch to fully secure keys. Two recent prominent examples supporting such 0-RTT modes are Google's QUIC protocol and the latest drafts for the upcoming TLS version 1.3. We are especially interested in the question how replay attacks, enabled through the lack of contribution from the server, affect security in the 0-RTT case. Whereas the first proposal of QUIC uses state on the server side to thwart such attacks, the latest version of QUIC and TLS 1.3 rather accept them as inevitable. We analyze what this means for the key secrecy of both the preshared-key-based 0-RTT handshake in draft-14 of TLS 1.3 as well as the Diffie-Hellman-based 0-RTT handshake in TLS 1.3 draft-12. As part of this we extend previous security models to capture such cases, also shedding light on the limitations and options for 0-RTT security under replay attacks.", "title": "" }, { "docid": "c3ee32ebe664e325ee29d0cee9130847", "text": "Many real-world brain–computer interface (BCI) applications rely on single-trial classification of event-related potentials (ERPs) in EEG signals. However, because different subjects have different neural responses to even the same stimulus, it is very difficult to build a generic ERP classifier whose parameters fit all subjects. The classifier needs to be calibrated for each individual subject, using some labeled subject-specific data. 
This paper proposes both online and offline weighted adaptation regularization (wAR) algorithms to reduce this calibration effort, i.e., to minimize the amount of labeled subject-specific EEG data required in BCI calibration, and hence to increase the utility of the BCI system. We demonstrate using a visually evoked potential oddball task and three different EEG headsets that both online and offline wAR algorithms significantly outperform several other algorithms. Moreover, through source domain selection, we can reduce their computational cost by about $\\text{50}\\%$, making them more suitable for real-time applications.", "title": "" }, { "docid": "8c32fef40bce45bcd84726895732fe1a", "text": "ScratchJr is a graphical programming language based on Scratch and redesigned for the unique developmental and learning needs of children in kindergarten to second grade. The creation of ScratchJr addresses the relative lack of powerful technologies for digital creation and computer programming in early childhood education. ScratchJr will provide software for children to create interactive, animated stories as well as curricula and online resources to support adoption by educators. This paper describes the goals and challenges of creating a developmentally appropriate programming tool for children ages 5-7 and presents the path from guiding principles and studies with young children to current ScratchJr designs and plans for future work.", "title": "" }, { "docid": "ac6d474171bfe6bc2457bfb3674cc5a6", "text": "The energy consumption problem in the mobile industry has become crucial. For the sustainable growth of the mobile industry, energy efficiency (EE) of wireless systems has to be significantly improved. Plenty of efforts have been invested in achieving green wireless communications. This article provides an overview of network energy saving studies currently conducted in the 3GPP LTE standard body. The aim is to gain a better understanding of energy consumption and identify key EE research problems in wireless access networks. Classifying network energy saving technologies into the time, frequency, and spatial domains, the main solutions in each domain are described briefly. As presently the attention is mainly focused on solutions involving a single radio base station, we believe network solutions involving multiple networks/systems will be the most promising technologies toward green wireless access networks.", "title": "" }, { "docid": "8da9e8193d4fead65bd38d62a22998a1", "text": "Cloud computing has been considered as a solution for solving enterprise application distribution and configuration challenges in the traditional software sales model. Migrating from traditional software to Cloud enables on-going revenue for software providers. However, in order to deliver hosted services to customers, SaaS companies have to either maintain their own hardware or rent it from infrastructure providers. This requirement means that SaaS providers will incur extra costs. In order to minimize the cost of resources, it is also important to satisfy a minimum service level to customers. Therefore, this paper proposes resource allocation algorithms for SaaS providers who want to minimize infrastructure cost and SLA violations. Our proposed algorithms are designed in a way to ensure that Saas providers are able to manage the dynamic change of customers, mapping customer requests to infrastructure level parameters and handling heterogeneity of Virtual Machines. 
We take into account the customers' Quality of Service parameters such as response time, and infrastructure level parameters such as service initiation time. This paper also presents an extensive evaluation study to analyze and demonstrate that our proposed algorithms minimize the SaaS provider's cost and the number of SLA violations in a dynamic resource sharing Cloud environment.", "title": "" }, { "docid": "3199d1e0000458a81d9b66cd99225c37", "text": "There has been little attempt to summarise and synthesise qualitative studies concerning the experience and perception of living with Parkinson's disease. Bringing this information together would provide a background to understand the importance of an individual's social identity on their well-being and hope. Three primary aims were identified (a) understanding the importance of social identity and meaningful activities on individuals' well-being, (b) identifying factors and strategies that influence well-being and hope, and (c) establishing a model that relates to an individual's hope and well-being. Three stages were undertaken including a traditional electronic search, a critical appraisal of articles, and a synthesis of studies. Qualitative articles were included that considered the experience of living with Parkinson's disease. Thirty seven articles were located and included in the review. Five themes were identified and the themes were used to inform development of a new model of hope enablement. The current review furthered understanding of how physical symptoms and the experience of Parkinson's disease affect the individual's well-being and hope. Social identity was established as a key factor that influenced an individual's well-being. Being able to maintain, retain, or develop social identities was essential for the well-being and hope of individuals with Parkinson's disease. Understanding the factors which prevent or can facilitate this is essential.", "title": "" } ]
scidocsrr
5d0465cc9f84e8cae399f688848ecd67
Video super-resolution via dynamic texture synthesis
[ { "docid": "784dc5ac8e639e3ba4103b4b8653b1ff", "text": "Super-resolution reconstruction produces one or a set of high-resolution images from a set of low-resolution images. In the last two decades, a variety of super-resolution methods have been proposed. These methods are usually very sensitive to their assumed model of data and noise, which limits their utility. This paper reviews some of these methods and addresses their shortcomings. We propose an alternate approach using L/sub 1/ norm minimization and robust regularization based on a bilateral prior to deal with different data and noise models. This computationally inexpensive method is robust to errors in motion and blur estimation and results in images with sharp edges. Simulation results confirm the effectiveness of our method and demonstrate its superiority to other super-resolution methods.", "title": "" } ]
[ { "docid": "529c514971f88b433f594c8c6e825d76", "text": "Permanent-magnet motors with rare-earth magnets are among the best candidates for high-performance applications such as automotive applications. However, due to their cost and risks relating to the security of supply, alternative solutions such as ferrite magnets have recently become popular. In this paper, the two major design challenges of using ferrite magnets for a high-torque-density and high-speed application, i.e., their low remanent flux density and low coercivity, are addressed. It is shown that a spoke-type design utilizing a distributed winding may overcome the torque density challenge due to a simultaneous flux concentration and a reluctance torque possibility. Furthermore, the demagnetization challenge can be overcome through the careful optimization of the rotor structure, with the inclusion of nonmagnetic voids on the top and bottom of the magnets. To meet the challenges of a high-speed operation, an extensive rotor structural analysis has been undertaken, during which electromagnetics and manufacturing tolerances are taken into account. Electromagnetic studies are validated through the testing of a prototype, which is custom built for static torque and demagnetization evaluation. The disclosed motor design surpasses the state-of-the-art performance and cost, merging the theories into a multidisciplinary product.", "title": "" }, { "docid": "7d2f5505b2a60fb113524903aa5acc7d", "text": "Robust object recognition is a crucial skill for robots operating autonomously in real world environments. Range sensors such as LiDAR and RGBD cameras are increasingly found in modern robotic systems, providing a rich source of 3D information that can aid in this task. However, many current systems do not fully utilize this information and have trouble efficiently dealing with large amounts of point cloud data. In this paper, we propose VoxNet, an architecture to tackle this problem by integrating a volumetric Occupancy Grid representation with a supervised 3D Convolutional Neural Network (3D CNN). We evaluate our approach on publicly available benchmarks using LiDAR, RGBD, and CAD data. VoxNet achieves accuracy beyond the state of the art while labeling hundreds of instances per second.", "title": "" }, { "docid": "60718ad958d65eb60a520d516f1dd4ea", "text": "With the advent of the Internet, more and more public universities in Malaysia are putting in effort to introduce e-learning in their respective universities. Using a structured questionnaire derived from the literature, data was collected from 250 undergraduate students from a public university in Penang, Malaysia. Data was analyzed using AMOS version 16. The results of the structural equation model indicated that service quality (β = 0.20, p < 0.01), information quality (β = 0.37, p < 0.01) and system quality (β = 0.20, p < 0.01) were positively related to user satisfaction explaining a total of 45% variance. The second regression analysis was to examine the impact of user satisfaction on continuance intention. The results showed that satisfaction (β = 0.31, p < 0.01), system quality (β = 0.18, p < 0.01) and service quality (β = 0.30, p < 0.01) were positively related to continuance intention explaining 44% of the variance. 
Implications from these findings to e-learning system developers and implementers were further elaborated.", "title": "" }, { "docid": "45cbfbe0a0bcf70910a6d6486fb858f0", "text": "Grid cells in the entorhinal cortex of freely moving rats provide a strikingly periodic representation of self-location which is indicative of very specific computational mechanisms. However, the existence of grid cells in humans and their distribution throughout the brain are unknown. Here we show that the preferred firing directions of directionally modulated grid cells in rat entorhinal cortex are aligned with the grids, and that the spatial organization of grid-cell firing is more strongly apparent at faster than slower running speeds. Because the grids are also aligned with each other, we predicted a macroscopic signal visible to functional magnetic resonance imaging (fMRI) in humans. We then looked for this signal as participants explored a virtual reality environment, mimicking the rats’ foraging task: fMRI activation and adaptation showing a speed-modulated six-fold rotational symmetry in running direction. The signal was found in a network of entorhinal/subicular, posterior and medial parietal, lateral temporal and medial prefrontal areas. The effect was strongest in right entorhinal cortex, and the coherence of the directional signal across entorhinal cortex correlated with spatial memory performance. Our study illustrates the potential power of combining single-unit electrophysiology with fMRI in systems neuroscience. Our results provide evidence for grid-cell-like representations in humans, and implicate a specific type of neural representation in a network of regions which supports spatial cognition and also autobiographical memory.", "title": "" }, { "docid": "024e9600707203ffcf35ca96dff42a87", "text": "The blockchain technology is gaining momentum because of its possible application to other systems than the cryptocurrency one. Indeed, blockchain, as a de-centralized system based on a distributed digital ledger, can be utilized to securely manage any kind of assets, constructing a system that is independent of any authorization entity. In this paper, we briefly present blockchain and our work in progress, the VMOA blockchain, to secure virtual machine orchestration operations for cloud computing and network functions virtualization systems. Using tutorial examples, we describe our design choices and draw implementation plans.", "title": "" }, { "docid": "e121891a063a2a05a83c369a54b0ecea", "text": "The number of vulnerabilities in open source libraries is increasing rapidly. However, the majority of them do not go through public disclosure. These unidentified vulnerabilities put developers' products at risk of being hacked since they are increasingly relying on open source libraries to assemble and build software quickly. To find unidentified vulnerabilities in open source libraries and secure modern software development, we describe an efficient automatic vulnerability identification system geared towards tracking large-scale projects in real time using natural language processing and machine learning techniques. Built upon the latent information underlying commit messages and bug reports in open source projects using GitHub, JIRA, and Bugzilla, our K-fold stacking classifier achieves promising results on vulnerability identification. 
Compared to the state of the art SVM-based classifier in prior work on vulnerability identification in commit messages, we improve precision by 54.55% while maintaining the same recall rate. For bug reports, we achieve a much higher precision of 0.70 and recall rate of 0.71 compared to existing work. Moreover, observations from running the trained model at SourceClear in production for over 3 months has shown 0.83 precision, 0.74 recall rate, and detected 349 hidden vulnerabilities, proving the effectiveness and generality of the proposed approach.", "title": "" }, { "docid": "abe5bdf6a17cf05b49ac578347a3ca5d", "text": "To realize the broad vision of pervasive computing, underpinned by the “Internet of Things” (IoT), it is essential to break down application and technology-based silos and support broad connectivity and data sharing; the cloud being a natural enabler. Work in IoT tends toward the subsystem, often focusing on particular technical concerns or application domains, before offloading data to the cloud. As such, there has been little regard given to the security, privacy, and personal safety risks that arise beyond these subsystems; i.e., from the wide-scale, cross-platform openness that cloud services bring to IoT. In this paper, we focus on security considerations for IoT from the perspectives of cloud tenants, end-users, and cloud providers, in the context of wide-scale IoT proliferation, working across the range of IoT technologies (be they things or entire IoT subsystems). Our contribution is to analyze the current state of cloud-supported IoT to make explicit the security considerations that require further work.", "title": "" }, { "docid": "ea029be1081beef8f2faf7e61787ae57", "text": "Discriminative learning machines often need a large set of labeled samples for training. Active learning (AL) settings assume that the learner has the freedom to ask an oracle to label its desired samples. Traditional AL algorithms heuristically choose query samples about which the current learner is uncertain. This strategy does not make good use of the structure of the dataset at hand and is prone to be misguided by outliers. To alleviate this problem, we propose to distill the structural information into a probabilistic generative model which acts as a teacher in our model. The active learner uses this information effectively at each cycle of active learning. The proposed method is generic and does not depend on the type of learner and teacher. We then suggest a query criterion for active learning that is aware of distribution of data and is more robust against outliers. Our method can be combined readily with several other query criteria for active learning. We provide the formulation and empirically show our idea via toy and real examples.", "title": "" }, { "docid": "3bc48489d80e824efb7e3512eafc6f30", "text": "GPS-equipped taxis can be regarded as mobile sensors probing traffic flows on road surfaces, and taxi drivers are usually experienced in finding the fastest (quickest) route to a destination based on their knowledge. In this paper, we mine smart driving directions from the historical GPS trajectories of a large number of taxis, and provide a user with the practically fastest route to a given destination at a given departure time. In our approach, we propose a time-dependent landmark graph, where a node (landmark) is a road segment frequently traversed by taxis, to model the intelligence of taxi drivers and the properties of dynamic road networks. 
Then, a Variance-Entropy-Based Clustering approach is devised to estimate the distribution of travel time between two landmarks in different time slots. Based on this graph, we design a two-stage routing algorithm to compute the practically fastest route. We build our system based on a real-world trajectory dataset generated by over 33,000 taxis in a period of 3 months, and evaluate the system by conducting both synthetic experiments and in-the-field evaluations. As a result, 60-70% of the routes suggested by our method are faster than the competing methods, and 20% of the routes share the same results. On average, 50% of our routes are at least 20% faster than the competing approaches.", "title": "" }, { "docid": "050c60c23b15c92da6c2cec6213b68e3", "text": "In this paper, the human brainstorming process is modeled, based on which two versions of Brain Storm Optimization (BSO) algorithm are introduced. Simulation results show that both BSO algorithms perform reasonably well on ten benchmark functions, which validates the effectiveness and usefulness of the proposed BSO algorithms. Simulation results also show that one of the BSO algorithms, BSO-II, performs better than the other BSO algorithm, BSO-I, in general. Furthermore, average inter-cluster distance Dc and inter-cluster diversity De are defined, which can be used to measure and monitor the distribution of cluster centroids and information entropy of the population over iterations. Simulation results illustrate that further improvement could be achieved by taking advantage of information revealed by Dc and or De, which points at one direction for future research on BSO algorithms. DOI: 10.4018/jsir.2011100103 36 International Journal of Swarm Intelligence Research, 2(4), 35-62, October-December 2011 Copyright © 2011, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited. a lot of evolutionary algorithms out there in the literature. The most popular evolutionary algorithms are evolutionary programming (Fogel, 1962), genetic algorithm (Holland, 1975), evolution strategy (Rechenberg, 1973), and genetic programming (Koza, 1992), which were inspired by biological evolution. In evolutionary algorithms, population of individuals survives into the next iteration. Which individual has higher probability to survive is proportional to its fitness value according to some evaluation function. The survived individuals are then updated by utilizing evolutionary operators such as crossover operator and mutation operator, etc. In evolutionary programming and evolution strategy, only the mutation operation is employed, while in genetic algorithms and genetic programming, both the mutation operation and crossover operation are employed. The optimization problems to be optimized by evolutionary algorithms do not need to be mathematically represented as continuous and differentiable functions, they can be represented in any form. Only requirement for representing optimization problems is that each individual can be evaluated as a value called fitness value. Therefore, evolutionary algorithms can be applied to solve more general optimization problems, especially those that are very difficult, if not impossible, for traditional hill-climbing algorithms to solve. Recently, another kind of algorithms, called swarm intelligence algorithms, is attracting more and more attentions from researchers. 
Swarm intelligence algorithms are usually nature-inspired optimization algorithms instead of evolution-inspired optimization algorithms such as evolutionary algorithms. Similar to evolutionary algorithms, a swarm intelligence algorithm is also a population-based optimization algorithm. Different from the evolutionary algorithms, each individual in a swarm intelligence algorithm represents a simple object such as ant, bird, fish, etc. So far, a lot of swarm intelligence algorithms have been proposed and studied. Among them are particle swarm optimization(PSO) (Eberhart & Shi, 2007; Shi & Eberhart, 1998), ant colony optimization algorithm(ACO) (Dorigo, Maniezzo, & Colorni, 1996), bacterial forging optimization algorithm(BFO) (Passino, 2010), firefly optimization algorithm (FFO) (Yang, 2008), bee colony optimization algorithm (BCO) (Tovey, 2004), artificial immune system (AIS) (de Castro & Von Zuben, 1999), fish school search optimization algorithm(FSO) (Bastos-Filho, De Lima Neto, Lins, Nascimento, & Lima, 2008), shuffled frog-leaping algorithm (SFL) (Eusuff & Lansey, 2006), intelligent water drops algorithm (IWD) (Shah-Hosseini, 2009), to just name a few. In a swarm intelligence algorithm, an individual represents a simple object such as birds in PSO, ants in ACO, bacteria in BFO, etc. These simple objects cooperate and compete among themselves to have a high tendency to move toward better and better search areas. As a consequence, it is the collective behavior of all individuals that makes a swarm intelligence algorithm to be effective in problem optimization. For example, in PSO, each particle (individual) is associated with a velocity. The velocity of each particle is dynamically updated according to its own historical best performance and its companions’ historical best performance. All the particles in the PSO population fly through the solution space in the hope that particles will fly towards better and better search areas with high probability. Mathematically, the updating process of the population of individuals over iterations can be looked as a mapping process from one population of individuals to another population of individuals from one iteration to the next iteration, which can be represented as Pt+1 = f(Pt), where Pt is the population of individuals at the iteration t, f() is the mapping function. Different evolutionary algorithm or swarm intelligence algorithm has a different mapping function. Through the mapping function, we expect the population of individuals will update to better and better solutions over iterations. Therefore mapping functions should possess the property of convergence.", "title": "" }, { "docid": "a1757ee58eb48598d3cd6e257b53cd10", "text": "This paper examines the issues of puzzle design in the context of collaborative gaming. The qualitative research approach involves both the conceptual analysis of key terminology and a case study of a collaborative game called eScape. The case study is a design experiment, involving both the process of designing a game environment and an empirical study, where data is collected using multiple methods. The findings and conclusions emerging from the analysis provide insight into the area of multiplayer puzzle design. 
The analysis and reflections answer questions on how to create meaningful puzzles requiring collaboration and how far game developers can go with collaboration design. The multiplayer puzzle design introduces a new challenge for game designers. Group dynamics, social roles and an increased level of interaction require changes in the traditional conceptual understanding of a single-player puzzle.", "title": "" }, { "docid": "d8badd23313c7ea4baa0231ff1b44e32", "text": "Current state-of-the-art solutions for motion capture from a single camera are optimization driven: they optimize the parameters of a 3D human model so that its re-projection matches measurements in the video (e.g. person segmentation, optical flow, keypoint detections etc.). Optimization models are susceptible to local minima. This has been the bottleneck that forced using clean green-screen like backgrounds at capture time, manual initialization, or switching to multiple cameras as input resource. In this work, we propose a learning based motion capture model for single camera input. Instead of optimizing mesh and skeleton parameters directly, our model optimizes neural network weights that predict 3D shape and skeleton configurations given a monocular RGB video. Our model is trained using a combination of strong supervision from synthetic data, and self-supervision from differentiable rendering of (a) skeletal keypoints, (b) dense 3D mesh motion, and (c) human-background segmentation, in an end-to-end framework. Empirically we show our model combines the best of both worlds of supervised learning and test-time optimization: supervised learning initializes the model parameters in the right regime, ensuring good pose and surface initialization at test time, without manual effort. Self-supervision by back-propagating through differentiable rendering allows (unsupervised) adaptation of the model to the test data, and offers much tighter fit than a pretrained fixed model. We show that the proposed model improves with experience and converges to low-error solutions where previous optimization methods fail.", "title": "" }, { "docid": "5c8923335dd4ee4c2123b5b3245fb595", "text": "Virtualization is a key enabler of Cloud computing. Due to the numerous vulnerabilities in current implementations of virtualization, security is the major concern of Cloud computing. In this paper, we propose an enhanced security framework to detect intrusions at the virtual network layer of Cloud. It combines signature and anomaly based techniques to detect possible attacks. It uses different classifiers viz; naive bayes, decision tree, random forest, extra trees and linear discriminant analysis for an efficient and effective detection of intrusions. To detect distributed attacks at each cluster and at whole Cloud, it collects intrusion evidences from each region of Cloud and applies Dempster-Shafer theory (DST) for final decision making. We analyze the proposed security framework in terms of Cloud IDS requirements through offline simulation using different intrusion datasets.", "title": "" }, { "docid": "bc865fab6ac19d9f64155c6d87e1af2f", "text": "This study examined the \"Unified Theory of Acceptance and Use of Technology\" (UTAUT) in the context of tablet devices across multiple generations. 
We tested the four UTAUT determinants, performance expectancy, effort expectancy, social influence, and facilitating conditions, to determine their contributions for predicting behavioral intention to use tablets with age, gender, and user experience as moderators. 899 respondents aged 19-99 completed the survey. We found consistent generational differences in UTAUT determinants, most frequently between the oldest and youngest generations. Effort expectancy and facilitating conditions were the only determinants that positively predicted tablet use intentions after controlling for age, gender, and tablet use. We also discuss the implications of ageism and gender discrimination of technology adoption. Finally, we argue that our findings can be extended to create effective training programs for the teaching, learning, and adoption of new technologies in a variety of organizational settings.", "title": "" }, { "docid": "d1c46994c5cfd59bdd8d52e7d4a6aa83", "text": "Current software attacks often build on exploits that subvert machine-code execution. The enforcement of a basic safety property, Control-Flow Integrity (CFI), can prevent such attacks from arbitrarily controlling program behavior. CFI enforcement is simple, and its guarantees can be established formally even with respect to powerful adversaries. Moreover, CFI enforcement is practical: it is compatible with existing software and can be done efficiently using software rewriting in commodity systems. Finally, CFI provides a useful foundation for enforcing further security policies, as we demonstrate with efficient software implementations of a protected shadow call stack and of access control for memory regions.", "title": "" }, { "docid": "87ed7ebdf8528df1491936000649761b", "text": "Internet of Vehicles (IoV) is an important constituent of next generation smart cities that enables city wide connectivity of vehicles for traffic management applications. A secure and reliable communications is an important ingredient of safety applications in IoV. While the use of a more robust security algorithm makes communications for safety applications secure, it could reduce application QoS due to increased packet overhead and security processing delays. Particularly, in high density scenarios where vehicles receive large number of safety packets from neighborhood, timely signature verification of these packets could not be guaranteed. As a result, critical safety packets remain unverified resulting in cryptographic loss. In this paper, we propose two security mechanisms that aim to reduce cryptographic loss rate. The first mechanism is random transmitter security level section whereas the second one is adaptive scheme that iteratively selects the best possible security level at the transmitter depending on the current cryptographic loss rate. Simulation results show the effectiveness of the proposed mechanisms in comparison with the static security technique recommended by the ETSI standard.", "title": "" }, { "docid": "bccae9fdf10b1d0a16eab20278aeef3b", "text": "This paper is related to the development of an innovative multimodal biometric identification system. Unimodal biometric systems often face significant limitations due to sensitivity to noise intraclass variability and other factors. Multimodal biometric identification systems aim to fuse two or more physical or behavioral traits to provide optimal False Acceptance Rate (FAR) and False Rejection Rate (FRR), thus improving system accuracy and dependability. 
In greater detail, a Multimodal Biometric Identification System Based On Iris & Fingerprint. Both biometric traits (Iris & Fingerprint) are processed individually through all steps like segmentation ,feature extraction & matching. The multimodal system is fused using match level fusion at the verification stage on obtained matching score of iris and fingerprint. The performance of the biometric system shows improvement in the False Acceptance Rate (FAR) and False Reject Rate (FRR) .The proposed multimodal system achieves interesting results with several commonly used databases.", "title": "" }, { "docid": "a816ad26a49e0cf90dadc4db6dcba6d4", "text": "Despite the recent advances of deep reinforcement learning (DRL), agents trained by DRL tend to be brittle and sensitive to the training environment, especially in the multi-agent scenarios. In the multi-agent setting, a DRL agent’s policy can easily get stuck in a poor local optima w.r.t. its training partners – the learned policy may be only locally optimal to other agents’ current policies. In this paper, we focus on the problem of training robust DRL agents with continuous actions in the multi-agent learning setting so that the trained agents can still generalize when its opponents’ policies alter. To tackle this problem, we proposed a new algorithm, MiniMax Multi-agent Deep Deterministic Policy Gradient (M3DDPG) with the following contributions: (1) we introduce a minimax extension of the popular multi-agent deep deterministic policy gradient algorithm (MADDPG), for robust policy learning; (2) since the continuous action space leads to computational intractability in our minimax learning objective, we propose Multi-Agent Adversarial Learning (MAAL) to efficiently solve our proposed formulation. We empirically evaluate our M3DDPG algorithm in four mixed cooperative and competitive multi-agent environments and the agents trained by our method significantly outperforms existing baselines.", "title": "" }, { "docid": "8cb6a2a3014bd3a7f945abd4cb2ffe88", "text": "In order to identify and explore the strength and weaknesses of particular organizational designs, a wide range of maturity models have been developed by both, practitioners and academics over the past years. However, a systematization and generalization of the procedure on how to design maturity models as well as a synthesis of design science research with the rather behavioural field of organization theory is still lacking. Trying to combine the best of both fields, a first design proposition of a situational maturity model is presented in this paper. The proposed maturity model design is illustrated with the help of an instantiation for the healthcare domain.", "title": "" } ]
scidocsrr
99af9f34d289ab1fdaa3d552df78ba7b
Text Localization in Natural Scene Images Based on Conditional Random Field
[ { "docid": "482fb0c3b5ead028180c57466f3a092e", "text": "Separating text lines in handwritten documents remains a challenge because the text lines are often ununiformly skewed and curved. In this paper, we propose a novel text line segmentation algorithm based on Minimal Spanning Tree (MST) clustering with distance metric learning. Given a distance metric, the connected components of document image are grouped into a tree structure. Text lines are extracted by dynamically cutting the edges of the tree using a new objective function. For avoiding artificial parameters and improving the segmentation accuracy, we design the distance metric by supervised learning. Experiments on handwritten Chinese documents demonstrate the superiority of the approach.", "title": "" } ]
[ { "docid": "32cf79c1c2871f6a13b844ee9a6e414f", "text": "Abnormalities of diastolic function are common to virtually all forms of cardiac failure. However, their underlying mechanisms, precise role in the generation and phenotypic expression of heart failure, and value as specific therapeutic targets remain poorly understood. A growing proportion of heart failure patients, particularly among the elderly, have apparently preserved systolic function, and this is fueling interest for better understanding and treating diastolic abnormalities. Much of the attention in clinical and experimental studies has focused on relaxation and filling abnormalities of the heart, whereas chamber stiffness has been less well studied, particularly in humans. Nonetheless, new insights from basic and clinical research are helping define the regulators of diastolic dysfunction and illuminate novel targets for treatment. This review puts these developments into perspective with the major aim of highlighting current knowledge gaps and controversies.", "title": "" }, { "docid": "6f9823214b87abf5ed64c7dd25a0dda7", "text": "Unstructured data still makes up an important portion of the Web. One key task towards transforming this unstructured data into structured data is named entity recognition. We demo FOX, the Federated knOwledge eXtraction framework, a highly accurate open-source framework that implements RESTful web services for named entity recognition. Our framework achieves a higher Fmeasure than state-of-the-art named entity recognition frameworks by combining the results of several approaches through ensemble learning. Moreover, it disambiguates and links named entities against DBpedia by relying on the AGDISTIS framework. As a result, FOX provides users with accurately disambiguated and linked named entities in several RDF serialization formats. We demonstrate the different interfaces implemented by FOX within use cases pertaining to extracting entities from news texts.", "title": "" }, { "docid": "4d3aea1bd30234f58013a1136d1f834b", "text": "Predicting user response is one of the core machine learning tasks in computational advertising. Field-aware Factorization Machines (FFM) have recently been established as a state-of-the-art method for that problem and in particular won two Kaggle challenges. This paper presents some results from implementing this method in a production system that predicts click-through and conversion rates for display advertising and shows that this method it is not only effective to win challenges but is also valuable in a real-world prediction system. We also discuss some specific challenges and solutions to reduce the training time, namely the use of an innovative seeding algorithm and a distributed learning mechanism.", "title": "" }, { "docid": "4e42997b2411b90eeb0e3f8be967a09b", "text": "Is there a universal dimension of “Meditation Depth” valid to every individual and way of meditation? Or is there only a very subjective feeling of depth, that differs from person to person and from tradition to tradition? 45 authorized meditation teachers were asked about the depth of different experiences verbalized in 30 items. The agreement is highly significant. Five clusters, interpreted as depth structures, could be found through cluster analysis: “hindrances”, “relaxation”, “personal self”, “transpersonal qualities” and “transpersonal self”. An itemand factor-analysis supports a uni-dimensional Depth – factor in a group of 122 meditators. 
On this basis the Meditation Depth Questionnaire was constructed. First investigations support the reliability and validity of the instrument.", "title": "" }, { "docid": "931c75847fdfec787ad6a31a6568d9e3", "text": "This paper introduces concepts and algorithms of feature selection, surveys existing feature selection algorithms for classification and clustering, groups and compares different algorithms with a categorizing framework based on search strategies, evaluation criteria, and data mining tasks, reveals unattempted combinations, and provides guidelines in selecting feature selection algorithms. With the categorizing framework, we continue our efforts toward building an integrated system for intelligent feature selection. A unifying platform is proposed as an intermediate step. An illustrative example is presented to show how existing feature selection algorithms can be integrated into a meta algorithm that can take advantage of individual algorithms. An added advantage of doing so is to help a user employ a suitable algorithm without knowing details of each algorithm. Some real-world applications are included to demonstrate the use of feature selection in data mining. We conclude this work by identifying trends and challenges of feature selection research and development.", "title": "" }, { "docid": "0f1fab536992282dd1027d542c2c20e5", "text": "Traditional barcode localization methods based on image analysis are sensitive to the types of the target symbols and the environment they are applied to. To develop an intelligent barcode reading system used in industry, a real-time region based barcode segmentation approach which is available for various types of linear and two-dimensional symbols is proposed. The two-stage approach consists of the target region connection part by orientation detection and morphological operation, and the target region detection part by efficient contour based connected component labeling. The result of target location, which is represented by the coordinates of the upper left corner and the lower right corner of its bounding box, is robust to the barcode orientation, noise, and uneven environment illumination. The segmentation method is proved to work well in the real-time barcode reading system. In the experiments, 100% of the DATAMATRIX codes, 99.3% of the Code39 symbols and 98.7% PDF417 codes are correctly segmented.", "title": "" }, { "docid": "f9cc9e1ddc0d1db56f362a1ef409274d", "text": "Phishing is increasing dramatically with the development of modern technologies and the global worldwide computer networks. This results in the loss of customer’s confidence in e-commerce and online banking, financial damages, and identity theft. Phishing is a fraudulent effort that aims to acquire sensitive information from users such as credit card credentials, and social security number. In this article, we propose a model for predicting phishing attacks based on Artificial Neural Network (ANN). A Feed Forward Neural Network trained by Back Propagation algorithm is developed to classify websites as phishing or legitimate. The suggested model shows high acceptance ability for noisy data, fault tolerance and high prediction accuracy with respect to false positive and false negative rates.", "title": "" }, { "docid": "eaae33cb97b799eff093a7a527143346", "text": "RGB Video now is one of the major data sources of traffic surveillance applications. 
In order to detect the possible traffic events in the video, traffic-related objects, such as vehicles and pedestrians, should be first detected and recognized. However, due to the 2D nature of the RGB videos, there are technical difficulties in efficiently detecting and recognizing traffic-related objects from them. For instance, the traffic-related objects cannot be efficiently detected in separation while parts of them overlap, and complex background will influence the accuracy of the object detection. In this paper, we propose a robust RGB-D data based traffic scene understanding algorithm. By integrating depth information, we can calculate more discriminative object features and spatial information can be used to separate the objects in the scene efficiently. Experimental results show that integrating depth data can improve the accuracy of object detection and recognition. We also show that the analyzed object information plus depth data facilitate two important traffic event detection applications: overtaking warning and collision", "title": "" }, { "docid": "a45294bcd622c526be47975abe4e6d66", "text": "Identification of gene locations in a DNA sequence is one of the important problems in the area of genomics. Nucleotides in exons of a DNA sequence show f = 1/3 periodicity. The period-3 property in exons of eukaryotic gene sequences enables signal processing based time-domain and frequency-domain methods to predict these regions. Identification of the period-3 regions helps in predicting the gene locations within the billions long DNA sequence of eukaryotic cells. Existing non-parametric filtering techniques are less effective in detecting small exons. This paper presents a harmonic suppression filter and parametric minimum variance spectrum estimation technique for gene prediction. We show that both the filtering techniques are able to detect smaller exon regions and adaptive MV filter minimizes the power in introns (non-coding regions) giving more suppression to the intron regions. Furthermore, 2-simplex mapping is used to reduce the computational complexity.", "title": "" }, { "docid": "1430c03448096953c6798a0b6151f0b2", "text": "This case study analyzes the impact of theory-based factors on the implementation of different blockchain technologies in use cases from the energy sector. We construct an integrated research model based on the Diffusion of Innovations theory, institutional economics and the Technology-Organization-Environment framework. Using qualitative data from in-depth interviews, we link constructs to theory and assess their impact on each use case. Doing so we can depict the dynamic relations between different blockchain technologies and the energy sector. The study provides insights for decision makers in electric utilities, and government administrations.", "title": "" }, { "docid": "0bf5a87d971ff2dca4c8dfa176316663", "text": "A crucial privacy-driven issue nowadays is re-identifying anonymized social networks by mapping them to correlated cross-domain auxiliary networks. Prior works are typically based on modeling social networks as random graphs representing users and their relations, and subsequently quantify the quality of mappings through cost functions that are proposed without sufficient rationale. Also, it remains unknown how to algorithmically meet the demand of such quantifications, i.e., to find the minimizer of the cost functions. 
We address those concerns in a more realistic social network modeling parameterized by community structures that can be leveraged as side information for de-anonymization. By Maximum A Posteriori (MAP) estimation, our first contribution is new and well justified cost functions, which, when minimized, enjoy superiority to previous ones in finding the correct mapping with the highest probability. The feasibility of the cost functions is then for the first time algorithmically characterized. While proving the general multiplicative inapproximability, we are able to propose two algorithms, which, respectively, enjoy an -additive approximation and a conditional optimality in carrying out successful user re-identification. Our theoretical findings are empirically validated, with a notable dataset extracted from rare true cross-domain networks that reproduce genuine social network de-anonymization. Both theoretical and empirical observations also manifest the importance of community information in enhancing privacy inferencing.", "title": "" }, { "docid": "648f4e6997fe289e56f4b2729c2ecb80", "text": "A substantial thread of recent work on latent tree learning has attempted to develop neural network models with parse-valued latent variables and train them on non-parsing tasks, in the hope of having them discover interpretable tree structure. In a recent paper, Shen et al. (2018) introduce such a model and report nearstate-of-the-art results on the target task of language modeling, and the first strong latent tree learning result on constituency parsing. In an attempt to reproduce these results, we discover issues that make the original results hard to trust, including tuning and even training on what is effectively the test set. Here, we attempt to reproduce these results in a fair experiment and to extend them to two new datasets. We find that the results of this work are robust: All variants of the model under study outperform all latent tree learning baselines, and perform competitively with symbolic grammar induction systems. We find that this model represents the first empirical success for latent tree learning, and that neural network language modeling warrants further study as a setting for grammar induction.", "title": "" }, { "docid": "c81bf639d65789ff488eb2188c310db0", "text": "Speechreading is a notoriously difficult task for humans to perform. In this paper we present an end-to-end model based on a convolutional neural network (CNN) for generating an intelligible acoustic speech signal from silent video frames of a speaking person. The proposed CNN generates sound features for each frame based on its neighboring frames. Waveforms are then synthesized from the learned speech features to produce intelligible speech. We show that by leveraging the automatic feature learning capabilities of a CNN, we can obtain state-of-the-art word intelligibility on the GRID dataset, and show promising results for learning out-of-vocabulary (OOV) words.", "title": "" }, { "docid": "53445e289a7472c52e0bccae9f255d8d", "text": "This paper analyses a ZVS isolated active clamp Sepic converter for high power LED applications. Due to the recent advancement in the light emitting diode technology, high brightness, high efficient LEDs becomes achievable in residential, industry and commercial applications to replace the incandescent bulbs, halogen bulbs, and even compact fluorescent light bulbs. 
Generally in these devices, the lumen is proportional to the current, so the converter has to control the LED string current, and for high power applications (greater than 100W), is preferable to have a galvanic isolation between the bus and the output; among different isolated topologies and taking into account the large input voltage variation in the application this paper is targeting, a ZVS active clamp Sepic converter has been adopted. Due to its circuit configuration, it can step up or down the input voltage, allowing a universal use, with lamps with different voltages and powers. A 300W, 5A, 48V input voltage prototype has been developed, and a peak efficiency of 91% has been reached without synchronous rectification.", "title": "" }, { "docid": "58042f8c83e5cc4aa41e136bb4e0dc1f", "text": "In this paper, we propose wire-free integrated sensors that monitor pulse wave velocity (PWV) and respiration, both non-electrical vital signs, by using an all-electrical method. The key techniques that we employ to obtain all-electrical and wire-free measurement are bio-impedance (BI) and analog-modulated body-channel communication (BCC), respectively. For PWV, time difference between ECG signal from the heart and BI signal from the wrist is measured. To remove wires and avoid sampling rate mismatch between ECG and BI sensors, ECG signal is sent to the BI sensor via analog BCC without any sampling. For respiration measurement, BI sensor is located at the abdomen to detect volume change during inhalation and exhalation. A prototype chip fabricated in 0.11 μm CMOS process consists of ECG, BI sensor and BCC transceiver. Measurement results show that heart rate and PWV are both within their normal physiological range. The chip consumes 1.28 mW at 1.2 V supply while occupying 5 mm×2.5 mm of area.", "title": "" }, { "docid": "c0283c87e2a8305ba43ce87bf74a56a6", "text": "Real-world deployments of accelerometer-based human activity recognition systems need to be carefully configured regarding the sampling rate used for measuring acceleration. Whilst a low sampling rate saves considerable energy, as well as transmission bandwidth and storage capacity, it is also prone to omitting relevant signal details that are of interest for contemporary analysis tasks. In this paper we present a pragmatic approach to optimising sampling rates of accelerometers that effectively tailors recognition systems to particular scenarios, thereby only relying on unlabelled sample data from the domain. Employing statistical tests we analyse the properties of accelerometer data and determine optimal sampling rates through similarity analysis. We demonstrate the effectiveness of our method in experiments on 5 benchmark datasets where we determine optimal sampling rates that are each substantially below those originally used whilst maintaining the accuracy of reference recognition systems. c © 2016 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "c4b0d93105e434d4d407575157a005a4", "text": "Online Judge is widespread for the undergraduate to study programming. The users usually feel confused while locating the problems they prefer from the massive ones. This paper proposes a specialized recommendation model for the online judge systems in order to present the alternative problems to the users which they may be interested in potentially. In this model, a three-level collaborative filtering recommendation method is referred to and redesigned catering for the specific interaction mode of Online Judge. 
This method is described in detail in this paper and implemented in our demo system which demonstrates its availability.", "title": "" }, { "docid": "d225b334a1feff4326e7a5779b50267f", "text": "We compare the fast training and decoding speed of RETURNN of attention models for translation, due to fast CUDA LSTM kernels, and a fast pure TensorFlow beam search decoder. We show that a layer-wise pretraining scheme for recurrent attention models gives over 1% BLEU improvement absolute and it allows to train deeper recurrent encoder networks. Promising preliminary results on max. expected BLEU training are presented. We obtain state-of-the-art models trained on the WMT 2017 German↔English translation task. We also present end-to-end model results for speech recognition on the Switchboard task. The flexibility of RETURNN allows a fast research feedback loop to experiment with alternative architectures, and its generality allows to use it on a wide range of applications.", "title": "" }, { "docid": "0847b2b9270bc39a1273edfdfa022345", "text": "This paper presents the analysis, design and measurement of novel, low-profile, small-footprint folded monopoles employing planar metamaterial phase-shifting lines. These lines are composed of fully-printed spiral elements, that are inductively coupled and hence exhibit an effective high- mu property. An equivalent circuit for the proposed structure is presented, validating the operating principles of the antenna and the metamaterial line. The impact of the antenna profile and the ground plane size on the antenna performance is investigated using accurate full-wave simulations. A lambda/9 antenna prototype, designed to operate at 2.36 GHz, is fabricated and tested on both electrically large and small ground planes, achieving on average 80% radiation efficiency, 5% (110 MHz) and 2.5% (55 MHz) -10 dB measured bandwidths, respectively, and fully omnidirectional, vertically polarized, monopole-type radiation patterns.", "title": "" }, { "docid": "08bc408d8afcc587814316724f5a6e83", "text": "It is generally agreed that the main challenges in designing a frequency modulated continuous wave (FMCW) radar are; (i) frequency sweep linearisation and (ii) controlling leakage of transmitter phase noise into the receiver. This paper addresses the latter and focuses on the seldom mentioned and often neglected problem of reflected noise from large, distant targets causing an increase in the receiver noise floor. Work has been presented since the early 1960s on techniques for reducing the effect of transmitter phase noise leakage desensitising the radar receiver of CW and FMCW radars (O'hara, 1963),(Beasly, 1990). All of this work has concentrated on the problems of direct, short path length leakage around the transmit/receive antenna", "title": "" } ]
scidocsrr
9c0a90468db57f87322ff584de436219
Understanding the Linux Kernel
[ { "docid": "4304d7ef3caaaf874ad0168ce8001678", "text": "In a path-breaking paper last year Pat and Betty O’Neil and Gerhard Weikum pro posed a self-tuning improvement to the Least Recently Used (LRU) buffer management algorithm[l5]. Their improvement is called LRU/k and advocates giving priority to buffer pages baaed on the kth most recent access. (The standard LRU algorithm is denoted LRU/l according to this terminology.) If Pl’s kth most recent access is more more recent than P2’s, then Pl will be replaced after P2. Intuitively, LRU/k for k > 1 is a good strategy, because it gives low priority to pages that have been scanned or to pages that belong to a big randomly accessed file (e.g., the account file in TPC/A). They found that LRU/S achieves most of the advantage of their method. The one problem of LRU/S is the processor *Supported by U.S. Office of Naval Research #N00014-91-E 1472 and #N99914-92-J-1719, U.S. National Science Foundation grants #CC%9103953 and IFlI-9224691, and USBA #5555-19. Part of this work was performed while Theodore Johnson was a 1993 ASEE Summer Faculty Fellow at the National Space Science Data Center of NASA Goddard Space Flight Center. t Authors’ e-mail addresses : ted@cis.ufi.edu and", "title": "" } ]
[ { "docid": "266f636d13f406ecbacf8ed8443b2b5c", "text": "This review examines the most frequently cited sociological theories of crime and delinquency. The major theoretical perspectives are presented, beginning with anomie theory and the theories associated with the Chicago School of Sociology. They are followed by theories of strain, social control, opportunity, conflict, and developmental life course. The review concludes with a conceptual map featuring the inter-relationships and contexts of the major theoretical perspectives.", "title": "" }, { "docid": "21d22dd1ae61539e6885654e95d541ee", "text": "Reducing noise from the medical images, a satellite image etc. is a challenge for the researchers in digital image processing. Several approaches are there for noise reduction. Generally speckle noise is commonly found in synthetic aperture radar images, satellite images and medical images. This paper proposes filtering techniques for the removal of speckle noise from the digital images. Quantitative measures are done by using signal to noise ration and noise level is measured by the standard deviation.", "title": "" }, { "docid": "a8122b8139b88ad5bff074d527b76272", "text": "Salt is a natural component of the Australian landscape to which a number of biota inhabiting rivers and wetlands are adapted. Under natural flow conditions periods of low flow have resulted in the concentration of salts in wetlands and riverine pools. The organisms of these systems survive these salinities by tolerance or avoidance. Freshwater ecosystems in Australia are now becoming increasingly threatened by salinity because of rising saline groundwater and modification of the water regime reducing the frequency of high-flow (flushing) events, resulting in an accumulation of salt. Available data suggest that aquatic biota will be adversely affected as salinity exceeds 1000 mg L (1500 EC) but there is limited information on how increasing salinity will affect the various life stages of the biota. Salinisation can lead to changes in the physical environment that will affect ecosystem processes. However, we know little about how salinity interacts with the way nutrients and carbon are processed within an ecosystem. This paper updates the knowledge base on how salinity affects the physical and biotic components of aquatic ecosystems and explores the needs for information on how structure and function of aquatic ecosystems change with increasing salinity. BT0215 Ef ect of s al ini ty on f r eshwat er ecosys t em s in A us t rali a D. L. Niel e etal", "title": "" }, { "docid": "26162f0e3f6c8752a5dbf7174d2e5e44", "text": "Literature on the combination of qualitative and quantitative research components at the primary empirical study level has recently accumulated exponentially. However, this combination is only rarely discussed and applied at the research synthesis level. The purpose of this paper is to explore the possible contribution of mixed methods research to the integration of qualitative and quantitative research at the synthesis level. In order to contribute to the methodology and utilization of mixed methods at the synthesis level, we present a framework to perform mixed methods research syntheses (MMRS). The presented classification framework can help to inform researchers intending to carry out MMRS, and to provide ideas for conceptualizing and developing those syntheses. 
We illustrate the use of this framework by applying it to the planning of MMRS on effectiveness studies concerning interventions for challenging behavior in persons with intellectual disabilities, presenting two hypothetical examples. Finally, we discuss possible strengths of MMRS and note some remaining challenges concerning the implementation of these syntheses.", "title": "" }, { "docid": "d40e565a2ed22af998ae60f670210f57", "text": "Research on human infants has begun to shed light on early-develpping processes for segmenting perceptual arrays into objects. Infants appear to perceive objects by analyzing three-dimensional surface arrangements and motions. Their perception does not accord with a general tendency to maximize figural goodness or to attend-to nonaccidental geometric relations in visual arrays. Object perception does accord with principles governing the motions of material bodies: Infants divide perceptual arrays into units that move as connected wholes, that move separately from one another, that tend to maintain their size and shape over motion, and that tend to act upon each other only on contact. These findings suggest that o general representation of object unity and boundaries is interposed between representations of surfaces and representations of obiects of familiar kinds. The processes that construct this representation may be related to processes of physical reasoning. This article is animated by two proposals about perception and perceptual development. One proposal is substantive: In situations where perception develops through experience, but without instruction or deliberate reflection , development tends to enrich perceptual abilities but not to change them fundamentally. The second proposal is methodological: In the above situations , studies of the origins and early development of perception can shed light on perception in its mature state. These proposals will arise from a discussion of the early development of one perceptual ability: the ability to organize arrays of surfaces into unitary, bounded, and persisting objects. PERCEIVING OBJECTS In recent years, my colleagues and I have been studying young infants' perception of objects in complex displays in which objects are adjacent to other objects, objects are partly hidden behind other objects, of objects move fully", "title": "" }, { "docid": "74a233279ecfd8a66d24d283002051ab", "text": "This paper proposes a communication-assisted protection strategy implementable by commercially available microprocessor-based relays for the protection of medium-voltage microgrids. Even though the developed protection strategy benefits from communications, it offers a backup protection strategy to manage communication network failures. The paper also introduces the structure of a relay that enables the proposed protection strategy. Comprehensive simulation studies are carried out to verify the effectiveness of the proposed protection strategy under different fault scenarios, in the PSCAD/EMTDC software environment.", "title": "" }, { "docid": "b72c8a92e8d0952970a258bb43f5d1da", "text": "Neural networks excel in detecting regular patterns but are less successful in representing and manipulating complex data structures, possibly due to the lack of an external memory. This has led to the recent development of a new line of architectures known as Memory-Augmented Neural Networks (MANNs), each of which consists of a neural network that interacts with an external memory matrix. 
However, this RAM-like memory matrix is unstructured and thus does not naturally encode structured objects. Here we design a new MANN dubbed Relational Dynamic Memory Network (RDMN) to bridge the gap. Like existing MANNs, RDMN has a neural controller but its memory is structured as multi-relational graphs. RDMN uses the memory to represent and manipulate graph-structured data in response to query; and as a neural network, RDMN is trainable from labeled data. Thus RDMN learns to answer queries about a set of graph-structured objects without explicit programming. We evaluate the capability of RDMN on several important prediction problems, including software vulnerability, molecular bioactivity and chemical-chemical interaction. Results demonstrate the efficacy of the proposed model.", "title": "" }, { "docid": "a0a618a4c5e81dce26d095daea7668e2", "text": "We study the efficiency of deblocking algorithms for improving visual signals degraded by blocking artifacts from compression. Rather than using only the perceptually questionable PSNR, we instead propose a block-sensitive index, named PSNR-B, that produces objective judgments that accord with observations. The PSNR-B modifies PSNR by including a blocking effect factor. We also use the perceptually significant SSIM index, which produces results largely in agreement with PSNR-B. Simulation results show that the PSNR-B results in better performance for quality assessment of deblocked images than PSNR and a well-known blockiness-specific index.", "title": "" }, { "docid": "72535e221c8d0a274ed7b025a17c8a7c", "text": "Along with increasing demand on improving power quality, the most popular technique that has been used is Active Power Filter (APF); this is because APF can easily eliminate unwanted harmonics, improve power factor and overcome voltage sags. This paper will discuss and analyze the simulation result for a three-phase shunt active power filter using MATLAB/Simulink program. This simulation will implement a non-linear load and compensate line current harmonics under balance and unbalance load. As a result of the simulation, it is found that an active power filter is the better way to reduce the total harmonic distortion (THD) which is required by quality standards IEEE-519.", "title": "" }, { "docid": "3fec27391057a4c14f2df5933c4847d8", "text": "This article explains how entrepreneurship can help resolve the environmental problems of global socio-economic systems. Environmental economics concludes that environmental degradation results from the failure of markets, whereas the entrepreneurship literature argues that opportunities are inherent in market failure. A synthesis of these literatures suggests that environmentally relevant market failures represent opportunities for achieving profitability while simultaneously reducing environmentally degrading economic behaviors. It also implies conceptualizations of sustainable and environmental entrepreneurship which detail how entrepreneurs seize the opportunities that are inherent in environmentally relevant market failures. Finally, the article examines the ability of the proposed theoretical framework to transcend its environmental context and provide insight into expanding the domain of the study of entrepreneurship. D 2005 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "08c6bd4aae8995a2291e22ccfcf026f2", "text": "This paper presents an example-based method for calculating skeleton-driven body deformations. 
Our example data consists of range scans of a human body in a variety of poses. Using markers captured during range scanning, we construct a kinematic skeleton and identify the pose of each scan. We then construct a mutually consistent parameterization of all the scans using a posable subdivision surface template. The detail deformations are represented as displacements from this surface, and holes are filled smoothly within the displacement maps. Finally, we combine the range scans using k-nearest neighbor interpolation in pose space. We demonstrate results for a human upper body with controllable pose, kinematics, and underlying surface shape.", "title": "" }, { "docid": "6e72c4401bfeedaffd92d5261face2c6", "text": "OBJECTIVE\nTo examine the association between television advertising exposure and adults' consumption of fast foods.\n\n\nDESIGN\nCross-sectional telephone survey. Questions included measures of frequency of fast-food consumption at different meal times and average daily hours spent watching commercial television.\n\n\nSUBJECTS/SETTING\nSubjects comprised 1495 adults (41 % response rate) aged >or=18 years from Victoria, Australia.\n\n\nRESULTS\nTwenty-three per cent of respondents usually ate fast food for dinner at least once weekly, while 17 % consumed fast food for lunch on a weekly basis. The majority of respondents reported never eating fast food for breakfast (73 %) or snacks (65 %). Forty-one per cent of respondents estimated watching commercial television for <or=1 h/d (low viewers); 29 % watched for 2 h/d (moderate viewers); 30 % watched for >or=3 h/d (high viewers). After adjusting for demographic variables, high viewers were more likely to eat fast food for dinner at least once weekly compared with low viewers (OR = 1.45; 95 % CI 1.04, 2.03). Both moderate viewers (OR = 1.53; 95 % CI 1.01, 2.31) and high viewers (OR = 1.81; 95 % CI 1.20, 2.72) were more likely to eat fast food for snacks at least once weekly compared with low viewers. Commercial television viewing was not significantly related (P > 0.05) to fast-food consumption at breakfast or lunch.\n\n\nCONCLUSIONS\nThe results of the present study provide evidence to suggest that cumulative exposure to television food advertising is linked to adults' fast-food consumption. Additional research that systematically assesses adults' behavioural responses to fast-food advertisements is needed to gain a greater understanding of the mechanisms driving this association.", "title": "" }, { "docid": "d9cdbc7dd4d8ae34a3d5c1765eb48072", "text": "Beanstalk is an educational game for children ages 6-10 teaching balance-fulcrum principles while folding in scientific inquiry and socio-emotional learning. This paper explores the incorporation of these additional dimensions using intrinsic motivation and a framing narrative. Four versions of the game are detailed, along with preliminary player data in a 2×2 pilot test with 64 children shaping the modifications of Beanstalk for much broader testing.", "title": "" }, { "docid": "f4c1a8b19248e0cb8e2791210715e7b7", "text": "The translation of proper names is one of the most challenging activities every translator faces. While working on children’s literature, the translation is especially complicated since proper names usually have various allusions indicating sex, age, geographical belonging, history, specific meaning, playfulness of language and cultural connotations. 
The goal of this article is to draw attention to strategic choices for the translation of proper names in children’s literature. First, the article presents the theoretical considerations that deal with different aspects of proper names in literary works and the issue of their translation. Second, the translation strategies provided by the translation theorist Eirlys E. Davies used for this research are explained. In addition, the principles of adaptation of proper names provided the State Commission of the Lithuanian Language are presented. Then, the discussion proceeds to the quantitative analysis of the translated proper names with an emphasis on providing and explaining numerous examples. The research has been carried out on four popular fantasy books translated from English and German by three Lithuanian translators. After analyzing the strategies of preservation, localization, transformation and creation, the strategy of localization has proved to be the most frequent one in all translations.", "title": "" }, { "docid": "b52fb324287ec47860e189062f961ad8", "text": "In this paper we reexamine the place and role of stable model semantics in logic programming and contrast it with a least Herbrand model approach to Horn programs. We demonstrate that inherent features of stable model semantics naturally lead to a logic programming system that offers an interesting alternative to more traditional logic programming styles of Horn logic programming, stratified logic programming and logic programming with well-founded semantics. The proposed approach is based on the interpretation of program clauses as constraints. In this setting programs do not describe a single intended model, but a family of stable models. These stable models encode solutions to the constraint satisfaction problem described by the program. Our approach imposes restrictions on the syntax of logic programs. In particular, function symbols are eliminated from the language. We argue that the resulting logic programming system is well-attuned to problems in the class NP, has a well-defined domain of applications, and an emerging methodology of programming. We point out that what makes the whole approach viable is recent progress in implementations of algorithms to compute stable models of propositional logic programs.", "title": "" }, { "docid": "0222814440107fe89c13a790a6a3833e", "text": "This paper presents a third method of generation and detection of a single-sideband signal. The method is basically different from either the conventional filter or phasing method in that no sharp cutoff filters or wide-band 90° phase-difference networks are needed. This system is especially suited to keeping the signal energy confined to the desired bandwidth. Any unwanted sideband occupies the same band as the desired sideband, and the unwanted sideband in the usual sense is not present.", "title": "" }, { "docid": "73d58bbe0550fb58efc49ae5f84a1c7b", "text": "In this study, we will present the novel application of Type-2 (T2) fuzzy control into the popular video game called flappy bird. To the best of our knowledge, our work is the first deployment of the T2 fuzzy control into the computer games research area. We will propose a novel T2 fuzzified flappy bird control system that transforms the obstacle avoidance problem of the game logic into the reference tracking control problem. 
The presented T2 fuzzy control structure is composed of two important blocks which are the reference generator and Single Input Interval T2 Fuzzy Logic Controller (SIT2-FLC). The reference generator is the mechanism which uses the bird's position and the pipes' positions to generate an appropriate reference signal to be tracked. Thus, a conventional fuzzy feedback control system can be defined. The generated reference signal is tracked via the presented SIT2-FLC that can be easily tuned while also provides a certain degree of robustness to system. We will investigate the performance of the proposed T2 fuzzified flappy bird control system by providing comparative simulation results and also experimental results performed in the game environment. It will be shown that the proposed T2 fuzzified flappy bird control system results with a satisfactory performance both in the framework of fuzzy control and computer games. We believe that this first attempt of the employment of T2-FLCs in games will be an important step for a wider deployment of T2-FLCs in the research area of computer games.", "title": "" }, { "docid": "0d6a276770da5e7e544f66256084ba75", "text": "ARC AND PATH CONSISTENCY REVISITED' Roger Mohr and Thomas C. Henderson 2 CRIN BP 239 54506 Vandoeuvre (France)", "title": "" }, { "docid": "d131cda62d8ac73b209d092d8e36037e", "text": "The problem of packing congruent spheres (i.e., copies of the same sph ere) in a bounded domain arises in many applications. In this paper, we present a new pack-and-shake scheme for packing congruent spheres in various bounded 2-D domains. Our packing scheme is based on a number of interesting ideas, such as a trimming and packing approach, optimal lattice packing under translation and/or rotation, shaking procedures, etc. Our packing algorithms have fairly low time complexities. In certain cases, they even run in nearly linear time. Our techniques can be easily generalized to congruent packing of other shapes of objects, and are readily extended to higher dimensional spaces. Applications of our packing algorithms to treatment planning of radiosurgery are discussed. Experimental results suggest that our algorithms produce reasonably dense packings.", "title": "" }, { "docid": "1c9c30e3e007c2d11c6f5ebd0092050b", "text": "Fatty acids are essential components of the dynamic lipid metabolism in cells. Fatty acids can also signal to intracellular pathways to trigger a broad range of cellular responses. Oleic acid is an abundant monounsaturated omega-9 fatty acid that impinges on different biological processes, but the mechanisms of action are not completely understood. Here, we report that oleic acid stimulates the cAMP/protein kinase A pathway and activates the SIRT1-PGC1α transcriptional complex to modulate rates of fatty acid oxidation. In skeletal muscle cells, oleic acid treatment increased intracellular levels of cyclic adenosine monophosphate (cAMP) that turned on protein kinase A activity. This resulted in SIRT1 phosphorylation at Ser-434 and elevation of its catalytic deacetylase activity. A direct SIRT1 substrate is the transcriptional coactivator peroxisome proliferator-activated receptor γ coactivator 1-α (PGC1α), which became deacetylated and hyperactive after oleic acid treatment. Importantly, oleic acid, but not other long chain fatty acids such as palmitate, increased the expression of genes linked to fatty acid oxidation pathway in a SIRT1-PGC1α-dependent mechanism. 
As a result, oleic acid potently accelerated rates of complete fatty acid oxidation in skeletal muscle cells. These results illustrate how a single long chain fatty acid specifically controls lipid oxidation through a signaling/transcriptional pathway. Pharmacological manipulation of this lipid signaling pathway might provide therapeutic possibilities to treat metabolic diseases associated with lipid dysregulation.", "title": "" } ]
scidocsrr
b3f6de5f4bc5fec4e0baa480369a7b53
Tahoe: the least-authority filesystem
[ { "docid": "8608ccbb61cbfbf3aae7e832ad4be0aa", "text": "Part A: Fundamentals and Cryptography Chapter 1: A Framework for System Security Chapter 1 aims to describe a conceptual framework for the design and analysis of secure systems with the goal of defining a common language to express “concepts”. Since it is designed both for theoreticians and for practitioners, there are two kinds of applicability. On the one hand a meta-model is proposed to theoreticians, enabling them to express arbitrary axioms of other security models in this special framework. On the other hand the framework provides a language for describing the requirements, designs, and evaluations of secure systems. This information is given to the reader in the introduction and as a consequence he wants to get the specification of the framework. Unfortunately, the framework itself is not described! However, the contents cover first some surrounding concepts like “systems, owners, security and functionality”. These are described sometimes in a confusing way, so that it remains unclear, what the author really wants to focus on. The following comparison of “Qualitative and Quantitative Security” is done 1For example: if the reader is told, that “every system has an owner, and every owner is a system”, there obviously seems to be no difference between these entities (cp. p. 4).", "title": "" } ]
[ { "docid": "75961ecd0eadf854ad9f7d0d76f7e9c8", "text": "This paper presents the design of a microstrip-CPW transition where the CPW line propagates close to slotline mode. This design allows the solution to be determined entirely though analytical techniques. In addition, a planar via-less microwave crossover using this technique is proposed. The experimental results at 5 GHz show that the crossover has a minimum isolation of 32 dB. It also has low in-band insertion loss and return loss of 1.2 dB and 18 dB respectively over more than 44 % of bandwidth.", "title": "" }, { "docid": "fb4837a619a6b9e49ca2de944ec2314e", "text": "Inverse reinforcement learning addresses the general problem of recovering a reward function from samples of a policy provided by an expert/demonstrator. In this paper, we introduce active learning for inverse reinforcement learning. We propose an algorithm that allows the agent to query the demonstrator for samples at specific states, instead of relying only on samples provided at “arbitrary” states. The purpose of our algorithm is to estimate the reward function with similar accuracy as other methods from the literature while reducing the amount of policy samples required from the expert. We also discuss the use of our algorithm in higher dimensional problems, using both Monte Carlo and gradient methods. We present illustrative results of our algorithm in several simulated examples of different complexities.", "title": "" }, { "docid": "141c28bfbeb5e71dc68d20b6220794c3", "text": "The development of topical cosmetic anti-aging products is becoming increasingly sophisticated. This is demonstrated by the benefit agents selected and the scientific approaches used to identify them, treatment protocols that increasingly incorporate multi-product regimens, and the level of rigor in the clinical testing used to demonstrate efficacy. Consistent with these principles, a new cosmetic anti-aging regimen was recently developed. The key product ingredients were identified based on an understanding of the key mechanistic themes associated with aging at the genomic level coupled with appropriate in vitro testing. The products were designed to provide optimum benefits when used in combination in a regimen format. This cosmetic regimen was then tested for efficacy against the appearance of facial wrinkles in a 24-week clinical trial compared with 0.02% tretinoin, a recognized benchmark prescription treatment for facial wrinkling. The cosmetic regimen significantly improved wrinkle appearance after 8 weeks relative to tretinoin and was better tolerated. Wrinkle appearance benefits from the two treatments in cohorts of subjects who continued treatment through 24 weeks were also comparable.", "title": "" }, { "docid": "f57ae277024583565575474dcd32de03", "text": "The Publication Manual of the American Psychological Association (American Psychological Association, 2001, American Psychological Association, 2010) calls for the reporting of effect sizes and their confidence intervals. Estimates of effect size are useful for determining the practical or theoretical importance of an effect, the relative contributions of factors, and the power of an analysis. We surveyed articles published in 2009 and 2010 in the Journal of Experimental Psychology: General, noting the statistical analyses reported and the associated reporting of effect size estimates. Effect sizes were reported for fewer than half of the analyses; no article reported a confidence interval for an effect size. 
The most often reported analysis was analysis of variance, and almost half of these reports were not accompanied by effect sizes. Partial η2 was the most commonly reported effect size estimate for analysis of variance. For t tests, 2/3 of the articles did not report an associated effect size estimate; Cohen's d was the most often reported. We provide a straightforward guide to understanding, selecting, calculating, and interpreting effect sizes for many types of data and to methods for calculating effect size confidence intervals and power analysis.", "title": "" }, { "docid": "2c3c227a8fd9f2a96e61549b962d3741", "text": "Developmental dyslexia is an unexplained inability to acquire accurate or fluent reading that affects approximately 5-17% of children. Dyslexia is associated with structural and functional alterations in various brain regions that support reading. Neuroimaging studies in infants and pre-reading children suggest that these alterations predate reading instruction and reading failure, supporting the hypothesis that variant function in dyslexia susceptibility genes lead to atypical neural migration and/or axonal growth during early, most likely in utero, brain development. Yet, dyslexia is typically not diagnosed until a child has failed to learn to read as expected (usually in second grade or later). There is emerging evidence that neuroimaging measures, when combined with key behavioral measures, can enhance the accuracy of identification of dyslexia risk in pre-reading children but its sensitivity, specificity, and cost-efficiency is still unclear. Early identification of dyslexia risk carries important implications for dyslexia remediation and the amelioration of the psychosocial consequences commonly associated with reading failure.", "title": "" }, { "docid": "a7656eb3b0443ef88ef4bb134a4f3a55", "text": "A simple methodology is described – the multi-turbine power curve approach – a methodology to generate a qualified estimate of the time series of the aggregated power generation from planned wind turbine units distributed in an area where limited wind time series are available. This is often the situation in a planning phase where you want to simulate planned expansions in a power system with wind power. The methodology is described in a stepby-step guideline.", "title": "" }, { "docid": "dd145aafe2f80b132e02c05eab2df870", "text": "By performing a systematic study of the Hénon map, we find low-period sinks for parameter values extremely close to the classical ones. This raises the question whether or not the well-known Hénon attractor-the attractor of the Hénon map existing for the classical parameter values-is a strange attractor, or simply a stable periodic orbit. Using results from our study, we conclude that even if the latter were true, it would be practically impossible to establish this by computing trajectories of the map.", "title": "" }, { "docid": "ac37ca6b8bb12305ac6e880e6e7c336a", "text": "In this paper, we are interested in learning the underlying graph structure behind training data. Solving this basic problem is essential to carry out any graph signal processing or machine learning task. To realize this, we assume that the data is smooth with respect to the graph topology, and we parameterize the graph topology using an edge sampling function. That is, the graph Laplacian is expressed in terms of a sparse edge selection vector, which provides an explicit handle to control the sparsity level of the graph. 
We solve the sparse graph learning problem given some training data in both the noiseless and noisy settings. Given the true smooth data, the posed sparse graph learning problem can be solved optimally and is based on simple rank ordering. Given the noisy data, we show that the joint sparse graph learning and denoising problem can be simplified to designing only the sparse edge selection vector, which can be solved using convex optimization.", "title": "" }, { "docid": "0907539385c59f9bd476b2d1fb723a38", "text": "We present a real-time method for synthesizing highly complex human motions using a novel training regime we call the auto-conditioned Recurrent Neural Network (acRNN). Recently, researchers have attempted to synthesize new motion by using autoregressive techniques, but existing methods tend to freeze or diverge after a couple of seconds due to an accumulation of errors that are fed back into the network. Furthermore, such methods have only been shown to be reliable for relatively simple human motions, such as walking or running. In contrast, our approach can synthesize arbitrary motions with highly complex styles, including dances or martial arts in addition to locomotion. The acRNN is able to accomplish this by explicitly accommodating for autoregressive noise accumulation during training. Our work is the first to our knowledge that demonstrates the ability to generate over 18,000 continuous frames (300 seconds) of new complex human motion w.r.t. different styles.", "title": "" }, { "docid": "304315feeb6e21149a9c7a3c7c7c372e", "text": "In the future battlefields, communication between commanders and soldiers will be a decisive factor to complete an assigned mission. In such military tactical scenarios, network topology is constrained by the dynamics of dismounted soldiers in the battlefield. In the battlefield area, soldiers may be divided into a number of squads and fire teams with each one having its own mission, especially in some critical situation (e.g., a military response to an enemy attack or a sweep operation of houses). This situation may cause an unpredictable behavior in terms of wireless network topology state, thus increasing the susceptibility of network topology to decomposition in multiple components. This paper presents a Group Mobility Model simulating realistic battlefield behaviors and movement techniques. We also analyze wireless communication between dismounted soldiers and their squad leader deployed in a mobile ad hoc network (MANET) under different packet sending rate and perturbation factor modeled as a standard deviation parameter which may affect soldiers' mobility. A discussion of results follows, using several performance metrics according to network behavior (such as throughput, relaying rate of unrelated packets and path length).", "title": "" }, { "docid": "9b11423260c2d3d175892f846cecced3", "text": "Disturbances in fluid and electrolytes are among the most common clinical problems encountered in the intensive care unit (ICU). Recent studies have reported that fluid and electrolyte imbalances are associated with increased morbidity and mortality among critically ill patients. To provide optimal care, health care providers should be familiar with the principles and practice of fluid and electrolyte physiology and pathophysiology. Fluid resuscitation should be aimed at restoration of normal hemodynamics and tissue perfusion. Early goal-directed therapy has been shown to be effective in patients with severe sepsis or septic shock. 
On the other hand, liberal fluid administration is associated with adverse outcomes such as prolonged stay in the ICU, higher cost of care, and increased mortality. Development of hyponatremia in critically ill patients is associated with disturbances in the renal mechanism of urinary dilution. Removal of nonosmotic stimuli for vasopressin secretion, judicious use of hypertonic saline, and close monitoring of plasma and urine electrolytes are essential components of therapy. Hypernatremia is associated with cellular dehydration and central nervous system damage. Water deficit should be corrected with hypotonic fluid, and ongoing water loss should be taken into account. Cardiac manifestations should be identified and treated before initiating stepwise diagnostic evaluation of dyskalemias. Divalent ion deficiencies such as hypocalcemia, hypomagnesemia and hypophosphatemia should be identified and corrected, since they are associated with increased adverse events among critically ill patients.", "title": "" }, { "docid": "7bf0b158d9fa4e62b38b6757887c13ed", "text": "Examinations are the most crucial section of any educational system. They are intended to measure student's knowledge, skills and aptitude. At any institute, a great deal of manual effort is required to plan and arrange examination. It includes making seating arrangement for students as well as supervision duty chart for invigilators. Many institutes performs this task manually using excel sheets. This results in excessive wastage of time and manpower. Automating the entire system can help solve the stated problem efficiently saving a lot of time. This paper presents the automatic exam seating allocation. It works in two modules First as, Students Seating Arrangement (SSA) and second as, Supervision Duties Allocation (SDA). It assigns the classrooms and the duties to the teachers in any institution. An input-output data is obtained from the real system which is found out manually by the organizers who set up the seating arrangement and chalk out the supervision duties. The results obtained using the real system and these two models are compared. The application shows that the modules are highly efficient, low-cost, and can be widely used in various colleges and universities.", "title": "" }, { "docid": "007791833b15bd3367c11bb17b7abf82", "text": "When speakers talk, they gesture. The goal of this review is to investigate the contribution that these gestures make to how we communicate and think. Gesture can play a role in communication and thought at many timespans. We explore, in turn, gesture's contribution to how language is produced and understood in the moment; its contribution to how we learn language and other cognitive skills; and its contribution to how language is created over generations, over childhood, and on the spot. We find that the gestures speakers produce when they talk are integral to communication and can be harnessed in a number of ways. (a) Gesture reflects speakers' thoughts, often their unspoken thoughts, and thus can serve as a window onto cognition. Encouraging speakers to gesture can thus provide another route for teachers, clinicians, interviewers, etc., to better understand their communication partners. (b) Gesture can change speakers' thoughts. Encouraging gesture thus has the potential to change how students, patients, witnesses, etc., think about a problem and, as a result, alter the course of learning, therapy, or an interchange. 
(c) Gesture provides building blocks that can be used to construct a language. By watching how children and adults who do not already have a language put those blocks together, we can observe the process of language creation. Our hands are with us at all times and thus provide researchers and learners with an ever-present tool for understanding how we talk and think.", "title": "" }, { "docid": "0de38657b70acdaead3226d6ebd2f7ff", "text": "We present the results of a parametric study devised to allow us to optimally design a patch fed planar dielectric slab waveguide extended hemi-elliptical lens antenna. The lens antenna, 11lambda times 13lambda in the lens plane and 0.6lambda thick, constructed from polystyrene and weighing only 90 g is fabricated and characterized at 28.5 GHz for both single and multiple operating configurations. The lens when optimized for single beam operation achieves 18.5 dB measured gain (85% aperture efficiency), 40deg and 4.1deg half power beam width for E plane and H plane respectively and 10% impedance bandwidth for -10 dB return loss. While for optimized for multiple beam operation it is shown that the lens can accommodate up to 9 feeds and that beam symmetry can be maintained over a scan angle of 27deg with a gain of 14.9 to 17.7 dB, and first side lobe levels of -11 to -7 dB respectively. Over the frequency range 26 to 30 GHz the lens maintains a worst case return loss of -10 dB and port to port feed isolation of better than -25 dB. Further it is shown that residual leaked energy from the structure is less than -48 dBm at 1 cm, thus making a low profile enclosure possible. We also show that by simultaneous excitation of two adjacent ports we can obtain difference patterns with null depths of up to -36 dB.", "title": "" }, { "docid": "5843909545307a1d59e4da2c258748df", "text": "OBJECTIVE\nTo perform a systematic review of the global prevalence of low back pain, and to examine the influence that case definition, prevalence period, and other variables have on prevalence.\n\n\nMETHODS\nWe conducted a new systematic review of the global prevalence of low back pain that included general population studies published between 1980 and 2009. A total of 165 studies from 54 countries were identified. Of these, 64% had been published since the last comparable review.\n\n\nRESULTS\nLow back pain was shown to be a major problem throughout the world, with the highest prevalence among female individuals and those aged 40-80 years. After adjusting for methodologic variation, the mean ± SEM point prevalence was estimated to be 11.9 ± 2.0%, and the 1-month prevalence was estimated to be 23.2 ± 2.9%.\n\n\nCONCLUSION\nAs the population ages, the global number of individuals with low back pain is likely to increase substantially over the coming decades. Investigators are encouraged to adopt recent recommendations for a standard definition of low back pain and to consult a recently developed tool for assessing the risk of bias of prevalence studies.", "title": "" }, { "docid": "3fb2879369216d47d5462db09be970a8", "text": "Automatic synthesis of digital circuits has played a key role in obtaining high-performance designs. While considerable work has been done in the past, emerging device technologies call for a need to re-examine the synthesis approaches, so that better circuits that harness the true power of these technologies can be developed. This paper presents a methodology for synthesis applicable to devices that support ternary logic. 
We present an algorithm for synthesis that combines a geometrical representation with unary operators of multivalued logic. The geometric representation facilitates scanning appropriately to obtain simple sum-of-products expressions in terms of unary operators. An implementation based on Python is described. The power of the approach lies in its applicability to a wide variety of circuits. The proposed approach leads to the savings of 26% and 22% in transistor-count, respectively, for a ternary full-adder and a ternary content-addressable memory (TCAM) over the best existing designs. Furthermore, the proposed approach requires, on an average, less than 10% of the number of the transistors in comparison with a recent decoder-based design for various ternary benchmark circuits. Extensive HSPICE simulation results show roughly 92% reduction in power-delay product (PDP) for a $12\\times 12$ TCAM and 60% reduction in PDP for a 24-ternary digit barrel shifter over recent designs.", "title": "" }, { "docid": "4493a071f0dbdf7464d7ad299fec97d3", "text": "Drawing upon self-determination theory, this study tested different types of behavioral regulation as parallel mediators of the association between the job’s motivating potential, autonomy-supportive leadership, and understanding the organization’s strategy, on the one hand, and job satisfaction, turnover intention, and two types of organizational citizenship behaviors (OCB), on the other hand. In particular, intrinsic motivation and identified regulation were contrasted as idiosyncratic motivational processes. Analyses were based on data from 201 employees in the Swiss insurance industry. Results supported both types of self-determined motivation as mediators of specific antecedent-outcome relationships. Identified regulation, for example, particularly mediated the impact of contextual antecedents on both civic virtue and altruism OCB. Overall, controlled types of behavioral regulation showed comparatively weak relations to antecedents or consequences. The unique characteristics of motivational processes and potential explanations for the weak associations of controlled motivation are discussed.", "title": "" }, { "docid": "dc445d234bafaf115495ce1838163463", "text": "In this paper, a novel camera tamper detection algorithm is proposed to detect three types of tamper attacks: covered, moved and defocused. The edge disappearance rate is defined in order to measure the amount of edge pixels that disappear in the current frame from the background frame while excluding edges in the foreground. Tamper attacks are detected if the difference between the edge disappearance rate and its temporal average is larger than an adaptive threshold reflecting the environmental conditions of the cameras. The performance of the proposed algorithm is evaluated for short video sequences with three types of tamper attacks and for 24-h video sequences without tamper attacks; the algorithm is shown to achieve acceptable levels of detection and false alarm rates for all types of tamper attacks in real environments.", "title": "" }, { "docid": "e4da3b7fbbce2345d7772b0674a318d5", "text": "5", "title": "" }, { "docid": "25b250495fd4989ce1a365d5ddaa526e", "text": "Supervised automation of selected subtasks in Robot-Assisted Minimally Invasive Surgery (RMIS) has potential to reduce surgeon fatigue, operating time, and facilitate tele-surgery. 
Tumor resection is a multi-step multilateral surgical procedure to localize, expose, and debride (remove) a subcutaneous tumor, then seal the resulting wound with surgical adhesive. We developed a finite state machine using the novel devices to autonomously perform the tumor resection. The first device is an interchangeable instrument mount which uses the jaws and wrist of a standard RMIS gripping tool to securely hold and manipulate a variety of end-effectors. The second device is a fluid injection system that can facilitate precision delivery of material such as chemotherapy, stem cells, and surgical adhesives to specific targets using a single-use needle attached using the interchangeable instrument mount. Fluid flow through the needle is controlled via an externallymounted automated lead screw. Initial experiments suggest that an automated Intuitive Surgical dVRK system which uses these devices combined with a palpation probe and sensing model described in a previous paper can successfully complete the entire procedure in five of ten trials. We also show the most common failure phase, debridement, can be improved with visual feedback. Design details and video are available at: http://berkeleyautomation.github.io/surgical-tools.", "title": "" } ]
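As an aside to the tumor-resection passage above: the finite state machine it mentions can be pictured as an ordered list of phases, each with an action and a completion check. Everything concrete in the sketch below (phase names, stubbed actions, completion flags) is an assumption made for illustration; it does not reproduce the authors' actual controller, devices, or sensing.

```python
# Illustrative finite-state-machine skeleton for a multi-step procedure of the
# kind described above (localize -> expose -> debride -> seal). Each phase has
# an action callable (issue commands) and a completion predicate (check sensors).
class ProcedureFSM:
    def __init__(self, phases):
        self.phases = phases   # ordered list of (name, action, is_done) tuples
        self.index = 0

    def step(self):
        """Run one control-loop iteration; return False once all phases finish."""
        if self.index >= len(self.phases):
            return False
        name, action, is_done = self.phases[self.index]
        action()                       # e.g. command the robot for this phase
        if is_done():                  # e.g. sensor-based completion check
            print(f"phase '{name}' complete")
            self.index += 1
        return True

# Hypothetical usage with stubbed actions and checks:
done_flags = iter([True, True, True, True])
fsm = ProcedureFSM([
    ("localize", lambda: None, lambda: next(done_flags)),
    ("expose",   lambda: None, lambda: next(done_flags)),
    ("debride",  lambda: None, lambda: next(done_flags)),
    ("seal",     lambda: None, lambda: next(done_flags)),
])
while fsm.step():
    pass
```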
scidocsrr
f6c5620afa78588d3bfef71f6690a2fc
Automatic Video Summarization by Graph Modeling
[ { "docid": "e5261ee5ea2df8bae7cc82cb4841dea0", "text": "Automatic generation of video summarization is one of the key techniques in video management and browsing. In this paper, we present a generic framework of video summarization based on the modeling of viewer's attention. Without fully semantic understanding of video content, this framework takes advantage of understanding of video content, this framework takes advantage of computational attention models and eliminates the needs of complex heuristic rules in video summarization. A set of methods of audio-visual attention model features are proposed and presented. The experimental evaluations indicate that the computational attention based approach is an effective alternative to video semantic analysis for video summarization.", "title": "" }, { "docid": "aea474fcacb8af1d820413b5f842056f", "text": ".4 video sequence can be reprmented as a trajectory curve in a high dmensiond feature space. This video curve can be an~yzed by took Mar to those devdoped for planar cnrv=. h partidar, the classic biiary curve sphtting algorithm has been fonnd to be a nseti tool for video analysis. With a spEtting condition that checks the dimension&@ of the curve szgrnent being spht, the video curve can be recursivdy sirnpMed and repr~ented as a tree stmcture, and the framm that are fomtd to be junctions betieen curve segments at Merent, lev& of the tree can be used as ke-fiarn~s to summarize the tideo sequences at Merent levds of det ti. The-e keyframes can be combmed in various spatial and tempord configurations for browsing purposes. We describe a simple video player that displays the ke.fiarn~ seqnentifly and lets the user change the summarization level on the fly tith an additiond shder. 1.1 Sgrrlficance of the Problem Recent advances in digitd technology have promoted video as a vdnable information resource. I$le can now XCaS Se lected &ps from archives of thousands of hours of video footage host instantly. This new resource is e~citing, yet the sheer volume of data makes any retried task o~emhehning and its dcient. nsage impowible. Brow= ing tools that wodd flow the user to qnitiy get an idea of the content of video footage are SW important ti~~ ing components in these video database syst-Fortunately, the devdopment of browsing took is a very active area of research [13, 16, 17], and pow~ solutions are in the horizon. Browsers use as balding blocks subsets of fiarnes c~ed ke.frames, sdected because they smnmarize the video content better than their neighbors. Obviously, sdecting one keytiarne per shot does not adeqnatdy surnPermisslonlo rna~edigitalorhardcopi= of aftorpartof this v:ork for personalor classroomuse is granted v;IIhouIfee providedlhat copies are nol made or distributed for profitor commercial advantage, andthat copiesbear!hrsnoticeandihe full citationon ihe first page.To copyoxhem,se,IOrepublishtopostonservers or lo redistribute10 lists, requiresprior specific pzrrnisston znt’or a fe~ AChl hlultimedia’9S. BnsIol.UK @ 199sAchi 1-5s11>036s!9s/000s S.oo 211 marize the complex information content of long shots in which camera pan and zoom as we~ as object motion pr~ gr=sivdy unvd entirely new situations. Shots shotid be sampled by a higher or lower density of keyfrarnes according to their activity level. 
Sampbg techniques that would attempt to detect sigficant information changes simply by looking at pairs of frames or even several consecutive frames are bound to lack robustness in presence of noise, such as jitter occurring during camera motion or sudden ~urnination changes due to fluorescent Eght ticker, glare and photographic flash. kterestin~y, methods devdoped to detect perceptually signi$mnt points and &continuities on noisy 2D curves have succes~y addressed this type of problem, and can be extended to the mdtidimensiond curves that represent video sequences. h this paper, we describe an algorithm that can de compose a curve origin~y defined in a high dmensiond space into curve segments of low dimension. In partictiar, a video sequence can be mapped to a high dimensional polygonal trajectory curve by mapping each frame to a time dependent feature usctor, and representing these feature vectors as points. We can apply this algorithm to segment the curve of the video sequence into low ditnensiond curve segments or even fine segments. Th=e segments correspond to video footage where activity is low and frames are redundant. The idea is to detect the constituent segments of the video curoe rather than attempt to lomte the jtmctions between these segments directly. In such a dud aPProach, the curve is decomposed into segments \\vhich exkibit hearity or low dirnensiontity. Curvature discontinuiti~ are then assigned to the junctions between these segments. Detecting generrd stmcture in the video curves to derive frame locations of features such as cuts and shot transitions, rather than attempting to locate the features thernsdv~ by Iocrd analysis of frame changes, ensures that the detected positions of these features are more stable in the presence of noise which is effectively faltered out. h addition, the proposed technique butids a binary tree representation of a video sequence where branches cent tin frarn= corresponding to more dettied representations of the sequence. The user can view the video sequence at coarse or fine lev& of detds, zooming in by displaying keyfrantes corresponding to the leaves of the tree, or zooming out by displaying keyframes near the root of the tree. ●", "title": "" } ]
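As an aside to the curve-simplification passage above: its recursive structure can be sketched in a few lines. The splitting test below uses the maximum deviation from the chord between segment endpoints rather than the paper's dimensionality check, and the per-frame feature vectors are random placeholders, so treat this only as the shape of the algorithm under those assumptions, not as the authors' method.

```python
# Minimal sketch of recursive binary curve splitting for keyframe selection.
# Frames become points on a polygonal trajectory in feature space; a segment
# is kept if every point lies close to the chord joining its endpoints,
# otherwise it is split at the farthest point and both halves are recursed on.
import numpy as np

def chord_deviation(points):
    """Index and size of the largest deviation from the endpoint chord."""
    a, b = points[0], points[-1]
    chord = b - a
    norm = np.linalg.norm(chord)
    if norm == 0:
        d = np.linalg.norm(points - a, axis=1)
    else:
        u = chord / norm
        proj = (points - a) @ u
        d = np.linalg.norm(points - a - np.outer(proj, u), axis=1)
    return int(np.argmax(d)), float(np.max(d))

def split_curve(features, lo, hi, tol, keyframes, depth=0):
    """Recursively split features[lo:hi+1]; record junction frames as keyframes."""
    if hi - lo < 2:
        return
    idx, dev = chord_deviation(features[lo:hi + 1])
    if dev > tol:
        mid = lo + idx
        keyframes.append((depth, mid))       # junction frame at this tree level
        split_curve(features, lo, mid, tol, keyframes, depth + 1)
        split_curve(features, mid, hi, tol, keyframes, depth + 1)

# Usage with placeholder data: features is an (n_frames, n_dims) array.
# Junctions recorded at small depths give a coarse summary; deeper ones refine it.
features = np.random.rand(300, 16)
keyframes = [(0, 0), (0, len(features) - 1)]
split_curve(features, 0, len(features) - 1, tol=0.5, keyframes=keyframes)
summary = sorted(i for d, i in keyframes if d <= 2)
```

Keeping the depth alongside each junction index is what gives the tree-structured, zoomable summary the passage describes: filtering by a depth threshold plays the role of the slider in the simple video player mentioned above.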
[ { "docid": "298d3280deb3bb326314a7324d135911", "text": "BACKGROUND\nUterine leiomyomas are rarely seen in adolescent and to date nine leiomyoma cases have been reported under age 17. Eight of these have been treated surgically via laparotomic myomectomy.\n\n\nCASE\nA 16-year-old girl presented with a painless, lobulated necrotic mass protruding through the introitus. The mass originated from posterior uterine wall resected using hysteroscopy. Final pathology report revealed a submucous uterine leiomyoma.\n\n\nSUMMARY AND CONCLUSION\nSubmucous uterine leiomyomas may present as a vaginal mass in adolescents and can be safely treated using hysteroscopy.", "title": "" }, { "docid": "8dc9f29e305d66590948896de2e0a672", "text": "Affective events are events that impact people in positive or negative ways. When people discuss an event, people understand not only the affective polarity but also the reason for the event being positive or negative. In this paper, we aim to categorize affective events based on the reasons why events are affective. We propose that an event is affective to people often because the event describes or indicates the satisfaction or violation of certain kind of human needs. For example, the event “I broke my leg” affects people negatively because the need to be physically healthy is violated. “I play computer games” has a positive affect on people because the need to have fun is probably satisfied. To categorize affective events in narrative human language, we define seven common human need categories and introduce a new data set of randomly sampled affective events with manual human need annotations. In addition, we explored two types of methods: a LIWC lexicon based method and supervised classifiers to automatically categorize affective event expressions with respect to human needs. Experiments show that these methods achieved moderate performance on this task.", "title": "" }, { "docid": "77d0786af4c5eee510a64790af497e25", "text": "Mobile computing is a revolutionary technology, born as a result of remarkable advances in computer hardware and wireless communication. Mobile applications have become increasingly popular in recent years. Today, it is not uncommon to see people playing games or reading mails on handphones. With the rapid advances in mobile computing technology, there is an increasing demand for processing realtime transactions in a mobile environment. Hence there is a strong need for efficient transaction management, data access modes and data management, consistency control and other mobile data management issues. This survey paper will cover issues related to concurrency control in mobile database. This paper studies concurrency control problem in mobile database systems, we analyze the features of mobile database and concurrency control techniques. With the increasing number of mobile hosts there are many new solutions and algorithms for concurrency control being proposed and implemented. We wish that our paper has served as a survey of the important solutions in the fields of concurrency control in mobile database. 
Keywords-component; Distributed Real-time Databases, Mobile Real-time Databases, Concurrency Control, Data Similarity, and Transaction Scheduling.", "title": "" }, { "docid": "3cceb3792d55bd14adb579bb9e3932ec", "text": "BACKGROUND\nTrastuzumab, a monoclonal antibody against human epidermal growth factor receptor 2 (HER2; also known as ERBB2), was investigated in combination with chemotherapy for first-line treatment of HER2-positive advanced gastric or gastro-oesophageal junction cancer.\n\n\nMETHODS\nToGA (Trastuzumab for Gastric Cancer) was an open-label, international, phase 3, randomised controlled trial undertaken in 122 centres in 24 countries. Patients with gastric or gastro-oesophageal junction cancer were eligible for inclusion if their tumours showed overexpression of HER2 protein by immunohistochemistry or gene amplification by fluorescence in-situ hybridisation. Participants were randomly assigned in a 1:1 ratio to receive a chemotherapy regimen consisting of capecitabine plus cisplatin or fluorouracil plus cisplatin given every 3 weeks for six cycles or chemotherapy in combination with intravenous trastuzumab. Allocation was by block randomisation stratified by Eastern Cooperative Oncology Group performance status, chemotherapy regimen, extent of disease, primary cancer site, and measurability of disease, implemented with a central interactive voice recognition system. The primary endpoint was overall survival in all randomised patients who received study medication at least once. This trial is registered with ClinicalTrials.gov, number NCT01041404.\n\n\nFINDINGS\n594 patients were randomly assigned to study treatment (trastuzumab plus chemotherapy, n=298; chemotherapy alone, n=296), of whom 584 were included in the primary analysis (n=294; n=290). Median follow-up was 18.6 months (IQR 11-25) in the trastuzumab plus chemotherapy group and 17.1 months (9-25) in the chemotherapy alone group. Median overall survival was 13.8 months (95% CI 12-16) in those assigned to trastuzumab plus chemotherapy compared with 11.1 months (10-13) in those assigned to chemotherapy alone (hazard ratio 0.74; 95% CI 0.60-0.91; p=0.0046). The most common adverse events in both groups were nausea (trastuzumab plus chemotherapy, 197 [67%] vs chemotherapy alone, 184 [63%]), vomiting (147 [50%] vs 134 [46%]), and neutropenia (157 [53%] vs 165 [57%]). Rates of overall grade 3 or 4 adverse events (201 [68%] vs 198 [68%]) and cardiac adverse events (17 [6%] vs 18 [6%]) did not differ between groups.\n\n\nINTERPRETATION\nTrastuzumab in combination with chemotherapy can be considered as a new standard option for patients with HER2-positive advanced gastric or gastro-oesophageal junction cancer.\n\n\nFUNDING\nF Hoffmann-La Roche.", "title": "" }, { "docid": "59932c6e6b406a41d814e651d32da9b2", "text": "The purpose of this study was to examine the effects of virtual reality simulation (VRS) on learning outcomes and retention of disaster training. The study used a longitudinal experimental design using two groups and repeated measures. A convenience sample of associate degree nursing students enrolled in a disaster course was randomized into two groups; both groups completed web-based modules; the treatment group also completed a virtually simulated disaster experience. Learning was measured using a 20-question multiple-choice knowledge assessment pre/post and at 2 months following training. Results were analyzed using the generalized linear model. 
Independent and paired t tests were used to examine the between- and within-participant differences. The main effect of the virtual simulation was strongly significant (p < .0001). The VRS effect demonstrated stability over time. In this preliminary examination, VRS is an instructional method that reinforces learning and improves learning retention.", "title": "" }, { "docid": "a6872c1cab2577547c9a7643a6acd03e", "text": "Current theories and models of leadership seek to explain the influence of the hierarchical superior upon the satisfaction and performance of subordinates. While disagreeing with one another in important respects, these theories and models share an implicit assumption that while the style of leadership likely to be effective may vary according to the situation, some leadership style will be effective regardless of the situation. It has been found, however, that certain individual, task, and organizational variables act as \"substitutes for leadership,\" negating the hierarchical superior's ability to exert either positive or negative influence over subordinate attitudes and effectiveness. This paper identifies a number of such substitutes for leadership, presents scales of questionnaire items for their measurement, and reports some preliminary tests.", "title": "" }, { "docid": "7dead097d1055a713bb56f9369eb1f98", "text": "Web applications vulnerabilities allow attackers to perform malicious actions that range from gaining unauthorized account access to obtaining sensitive data. The number of web application vulnerabilities in last decade is growing constantly. Improper input validation and sanitization are reasons for most of them. The most important of these vulnerabilities based on improper input validation and sanitization is SQL injection (SQLI) vulnerability. The primary focus of our research was to develop a reliable black-box vulnerability scanner for detecting SQLI vulnerability - SQLIVDT (SQL Injection Vulnerability Detection Tool). The black-box approach is based on simulation of SQLI attacks against web applications. Thus, the scope of analysis is limited to HTTP responses and HTML pages received from the application server. In order to achieve efficient SQLI vulnerability detection, an efficient algorithm for HTML page similarity detection is used. The proposed tool showed promising results as compared to six well-known web application scanners.", "title": "" }, { "docid": "edd9795ce024f8fed8057992cf3f4279", "text": "INTRODUCTION\nIdiopathic talipes equinovarus is the most common congenital defect characterized by the presence of a congenital dysplasia of all musculoskeletal tissues distal to the knee. For many years, the treatment has been based on extensive surgery after manipulation and cast trial. Owing to poor surgical results, Ponseti developed a new treatment protocol consisting of manipulation with cast and an Achilles tenotomy. The new technique requires 4 years of orthotic management to guarantee good results. The most recent studies have emphasized how difficult it is to comply with the orthotic posttreatment protocol. Poor compliance has been attributed to parent's low educational and low income level. 
The purpose of the study is to evaluate if poor compliance is due to the complexity of the orthotic use or if it is related to family education, cultural, or income factors.\n\n\nMETHOD\nFifty-three patients with 73 idiopathic talipes equinovarus feet were treated with the Ponseti technique and followed for 48 months after completing the cast treatment. There was a male predominance (72%). The mean age at presentation was 1 month (range: 1 wk to 7 mo). Twenty patients (38%) had bilateral involvement, 17 patients (32%) had right side affected, and 16 patients (30%) had the left side involved. The mean time of manipulation and casting treatment was 6 weeks (range: 4 to 10 wk). Thirty-eight patients (72%) required Achilles tenotomy as stipulated by the protocol. Recurrence was considered if there was a deterioration of the Dimeglio severity score requiring remanipulation and casting.\n\n\nRESULTS\nTwenty-four out of 73 feet treated by our service showed the evidence of recurrence (33%). Sex, age at presentation, cast treatment duration, unilateral or bilateral, severity score, the necessity of Achilles tenotomy, family educational, or income level did not reveal any significant correlation with the recurrence risk. Noncompliance with the orthotic use showed a significant correlation with the recurrence rate. The noncompliance rate did not show any correlation with the patient demographic data or parent's education level, insurance, or cultural factors as proposed previously.\n\n\nCONCLUSION\nThe use of the brace is extremely relevant with the Ponseti technique outcome (recurrence) in the treatment of idiopathic talipes equinovarus. Noncompliance is not related to family education, cultural, or income level. The Ponseti postcasting orthotic protocol needs to be reevaluated to a less demanding option to improve outcome and brace compliance.", "title": "" }, { "docid": "7db00719532ab0d9b408d692171d908f", "text": "The real-time monitoring of human movement can provide valuable information regarding an individual's degree of functional ability and general level of activity. This paper presents the implementation of a real-time classification system for the types of human movement associated with the data acquired from a single, waist-mounted triaxial accelerometer unit. The major advance proposed by the system is to perform the vast majority of signal processing onboard the wearable unit using embedded intelligence. In this way, the system distinguishes between periods of activity and rest, recognizes the postural orientation of the wearer, detects events such as walking and falls, and provides an estimation of metabolic energy expenditure. A laboratory-based trial involving six subjects was undertaken, with results indicating an overall accuracy of 90.8% across a series of 12 tasks (283 tests) involving a variety of movements related to normal daily activities. Distinction between activity and rest was performed without error; recognition of postural orientation was carried out with 94.1% accuracy, classification of walking was achieved with less certainty (83.3% accuracy), and detection of possible falls was made with 95.6% accuracy. Results demonstrate the feasibility of implementing an accelerometry-based, real-time movement classifier using embedded intelligence", "title": "" }, { "docid": "a2842352924cbd1deff52976425a0bd6", "text": "Content-based music information retrieval tasks have traditionally been solved using engineered features and shallow processing architectures. 
In recent years, there has been increasing interest in using feature learning and deep architectures instead, thus reducing the required engineering effort and the need for prior knowledge. However, this new approach typically still relies on mid-level representations of music audio, e.g. spectrograms, instead of raw audio signals. In this paper, we investigate whether it is possible to apply feature learning directly to raw audio signals. We train convolutional neural networks using both approaches and compare their performance on an automatic tagging task. Although they do not outperform a spectrogram-based approach, the networks are able to autonomously discover frequency decompositions from raw audio, as well as phase-and translation-invariant feature representations.", "title": "" }, { "docid": "cdcdbb6dca02bdafdf9f5d636acb8b3d", "text": "BACKGROUND\nExpertise has been extensively studied in several sports over recent years. The specificities of how excellence is achieved in Association Football, a sport practiced worldwide, are being repeatedly investigated by many researchers through a variety of approaches and scientific disciplines.\n\n\nOBJECTIVE\nThe aim of this review was to identify and synthesise the most significant literature addressing talent identification and development in football. We identified the most frequently researched topics and characterised their methodologies.\n\n\nMETHODS\nA systematic review of Web of Science™ Core Collection and Scopus databases was performed according to PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-analyses) guidelines. The following keywords were used: \"football\" and \"soccer\". Each word was associated with the terms \"talent\", \"expert*\", \"elite\", \"elite athlete\", \"identification\", \"career transition\" or \"career progression\". The selection was for the original articles in English containing relevant data about talent development/identification on male footballers.\n\n\nRESULTS\nThe search returned 2944 records. After screening against set criteria, a total of 70 manuscripts were fully reviewed. The quality of the evidence reviewed was generally excellent. The most common topics of analysis were (1) task constraints: (a) specificity and volume of practice; (2) performers' constraints: (a) psychological factors; (b) technical and tactical skills; (c) anthropometric and physiological factors; (3) environmental constraints: (a) relative age effect; (b) socio-cultural influences; and (4) multidimensional analysis. Results indicate that the most successful players present technical, tactical, anthropometric, physiological and psychological advantages that change non-linearly with age, maturational status and playing positions. These findings should be carefully considered by those involved in the identification and development of football players.\n\n\nCONCLUSION\nThis review highlights the need for coaches and scouts to consider the players' technical and tactical skills combined with their anthropometric and physiological characteristics scaled to age. Moreover, research addressing the psychological and environmental aspects that influence talent identification and development in football is currently lacking. 
The limitations detected in the reviewed studies suggest that future research should include the best performers and adopt a longitudinal and multidimensional perspective.", "title": "" }, { "docid": "8ed2fa021e5b812de90795251b5c2b64", "text": "A new implicit surface fitting method for surface reconstruction from scattered point data is proposed. The method combines an adaptive partition of unity approximation with least-squares RBF fitting and is capable of generating a high quality surface reconstruction. Given a set of points scattered over a smooth surface, first a sparse set of overlapped local approximations is constructed. The partition of unity generated from these local approximants already gives a faithful surface reconstruction. The final reconstruction is obtained by adding compactly supported RBFs. The main feature of the developed approach consists of using various regularization schemes which lead to economical, yet accurate surface reconstruction.", "title": "" }, { "docid": "99fdab0b77428f98e9486d1cc7430757", "text": "Self-organizing maps (SOMs) are a well-known unsupervised neural network approach used for clustering, and they are very efficient in handling large, high dimensional datasets. Because SOMs can be applied to large, complex sets, they can be used to detect credit card fraud. Online banking and e-commerce have been experiencing rapid growth over the past years and will show tremendous growth in the future, so it is very necessary to keep an eye on fraudsters and find ways to reduce the rate of fraud. This paper focuses on real-time credit card fraud detection and presents a new and innovative approach to detecting fraud with the help of a SOM. Keywords— Self-Organizing Map, Unsupervised Learning, Transaction. Introduction: The rapid growth in credit card issuers, online merchants and card users has made all of them very conscious of online fraud. Card users want to make safe transactions while purchasing their goods and, on the other hand, banks want to distinguish legitimate from fraudulent users. The merchant is the party most affected: merchants do not have any kind of evidence such as a digital signature, want to sell their goods only to legitimate users to make a profit, and need a secure system that protects them from large losses. Our self-organizing map approach can work on large, complex datasets and can cluster even unknown datasets. It is an unsupervised neural network that works even in the absence of an external teacher and provides fruitful results in detecting credit card fraud. It is interesting to note that credit card fraud affects the card owner the least and the merchant the most. The existing legislation, card holder protection policies and insurance schemes affect the merchant the most and the customer the least. The card issuing bank also has to pay administrative and infrastructure costs. Studies show that the average time lag between the fraudulent transaction date and the chargeback notification can be as high as 72 days, thereby giving the fraudster sufficient time to cause severe damage. In this paper we first present a brief survey of different approaches to credit card fraud detection. In Section 2 we explain the design and architecture of the SOM used to detect credit card fraud. Section 3 presents results. Finally, conclusions are presented in Section 4. A Survey of Credit Card Fraud Detection: Fraud detection systems work by trying to identify anomalies in an environment [1]. At an early stage, the research focus lay in rule-based expert systems, where the model's rules are constructed from the input of many fraud experts within the bank [2]. In practice, however, their output was poor, because a rule-based expert system relies entirely on prior information about the data set, which is generally not easily available in the case of credit card fraud. Since then, Artificial Neural Networks (ANNs) have mostly been used and have solved very complex problems in a very efficient way [3]. Some believe that unsupervised methods are best for detecting credit card fraud because these methods work well even in the absence of an external teacher, while supervised methods are based on prior data knowledge and necessarily require an external teacher. Unsupervised methods are used in [4] [5] to detect anomalies such as fraud. They do not cluster the data, but provide a ranking over the list of all segments; through this ranking they indicate how anomalous a segment is compared to the whole data set or to other segments [6]. Dempster-Shafer theory [1] is able to detect anomalous data; an experiment used D-S theory to detect infected e-mails, which is also relevant here because new card information is commonly sent through e-mails by banks. Various other approaches have also been used to detect credit card fraud, one of which is the ID3 pre-pruning method, in which a decision tree is formed to detect anomalous data [7]. Artificial neural networks are another efficient and intelligent method for detecting credit card fraud; a compound method based on rule-based systems and ANNs is used to detect credit card fraud by Brause et al. [8]. Our work is based on the self-organizing map, an unsupervised approach to detecting credit card fraud. We focus on detecting anomalous data by forming clusters so that legitimate and fraudulent transactions can be differentiated. The collection of data and its pre-processing are also explained with an example in fraud detection. SYSTEM DESIGN ARCHITECTURE: The SOM works well in detecting credit card fraud, and its interesting properties have already been discussed. Here we provide a detailed prototype and the working of the SOM in fraud detection. Our Approach to Detect Credit Card Fraud Using SOM: Our approach to real-time credit card fraud detection is modelled as a prototype. It is a multilayered approach: 1. Initial selection of the data set. 2. Conversion of data from symbolic to numerical form. 3. Implementation of the SOM. 4. A layer of further review and decision making. This multilayered approach works well in the detection of credit card fraud. Because the approach is based on the SOM, it finally clusters the data into fraudulent and genuine sets. By further review, the sets can be analyzed and a proper decision can be taken based on those results. The algorithm implemented to detect credit card fraud using the self-organizing map is represented in Figure 1: 1. Initially choose all neurons (weight vectors wi) randomly. 2. For each input vector Ii { 2.1) Convert all the symbolic input to numerical input by applying mean and standard deviation formulas. 2.2) Perform the initial authentication process, such as verification of PIN, address, expiry date, etc. } 3. Choose the learning rate parameter randomly, e.g. 0.5. 4. Initially update all neurons for each input vector Ii. 5. Apply the unsupervised approach to separate the transactions into fraudulent and non-fraudulent clusters. 5.1) Iterate until a specific cluster is formed for an input vector. 6. By applying the SOM we can divide the transactions into a fraudulent set (Fk) and a genuine set (Gk). 7. Perform a manual review decision. 8. Obtain the optimized result. Figure 1: Algorithm to detect credit card fraud. Initial Selection of Data Set: Input vectors are generally high dimensional real-world quantities which will be fed to a neuron matrix. These quantities are generally divided as follows [9] (Figure 2: Division of transactions to form an input matrix): account-related quantities include, for example, the account number, currency of the account, account opening date, last date of credit or debit, available balance, etc.; customer-related quantities include the customer id and customer type (e.g. high profile, low profile); transaction-related quantities include the transaction number, location, currency, timestamp, etc. Conversion of Symbolic Data into Numeric: In credit card fraud detection, most banking transaction data is symbolic (for example location, name, customer id), so there is a need to convert that symbolic data into numeric form. The conversion uses a normal-distribution mechanism based on frequency. The data is normalized using Z = (Ni - M) / S, where Ni is the frequency of occurrence of a particular entity, M is the mean and S is the standard deviation. After this procedure we arrive at normalized values [9]. Implementation of SOM: After getting all the normalized values, we build an input vector matrix. Then a weight vector is selected at random; this is generally termed the neuron matrix, and its dimension is the same as that of the input vector matrix. A learning parameter α is also chosen at random; its value is a small positive number that can be adjusted during the process. The commonly used similarity metric is the Euclidean distance given by equation (1): j_X(p) = min_j ||X - W_j(p)|| = [ Σ_i (x_i - w_ij(p))² ]^(1/2), (1) where j = 1, 2, ..., m, W is the neuron (weight) matrix and X is the input vector. The main output of the SOM is the patterns and clusters it produces as output vectors. In credit card fraud detection the clusters take the form of a fraudulent set and a genuine set, represented as Fk and Gk respectively. Review and Decision Making: The clustering of input data into fraudulent and genuine sets shows the categories of transactions performed more frequently as well as rarely by each customer. Since the SOM uncovers relationships as well as hidden patterns, we get more accuracy in our results. If the extent of suspicious activity exceeds a certain threshold value, that transaction can be sent for review; this reduces overall processing time and complexity. Results: The numbers of transactions taken in Test1, Test2, Test3 and Test4 are 500, 1000, 1500 and 2000 respectively. When compared to the ID3 algorithm, our approach gives much more efficient results, as shown in Figure 3. Conclusion: As the results show, the SOM gives better results in detecting credit card fraud, and all parameters are verified and well represented in plots. 
The uniqueness of our approach lies in using the normalization and clustering mechanism of SOM of detecting credit card fraud. This helps in detecting hidden patterns of the transactions which cannot be identified to the other traditional method. With appropriate no of weight neurons and with help of thousands of iterations the network is trained and then result is verified to new transactions. The concept of normalization will help to normalize the values in other fraud cases and SOM will be helpful in detecting anomalies in credit card fraud cas", "title": "" }, { "docid": "f7f609ebb1a0fcf789e5e2e5fe463718", "text": "Individuals with generalized anxiety disorder (GAD) display poor emotional conflict adaptation, a cognitive control process requiring the adjustment of performance based on previous-trial conflict. It is unclear whether GAD-related conflict adaptation difficulties are present during tasks without emotionally-salient stimuli. We examined conflict adaptation using the N2 component of the event-related potential (ERP) and behavioral responses on a Flanker task from 35 individuals with GAD and 35 controls. Groups did not differ on conflict adaptation accuracy; individuals with GAD also displayed intact RT conflict adaptation. In contrast, individuals with GAD showed decreased amplitude N2 principal component for conflict adaptation. Correlations showed increased anxiety and depressive symptoms were associated with longer RT conflict adaptation effects and lower ERP amplitudes, but not when separated by group. We conclude that individuals with GAD show reduced conflict-related component processes that may be influenced by compensatory activity, even in the absence of emotionally-salient stimuli.", "title": "" }, { "docid": "e6bb946ea2984ccb54fd37833bb55585", "text": "11 Automatic Vehicles Counting and Recognizing (AVCR) is a very challenging topic in transport engineering having important implications for the modern transport policies. Implementing a computer-assisted AVCR in the most vital districts of a country provides a large amount of measurements which are statistically processed and analyzed, the purpose of which is to optimize the decision-making of traffic operation, pavement design, and transportation planning. Since the advent of computer vision technology, video-based surveillance of road vehicles has become a key component in developing autonomous intelligent transportation systems. In this context, this paper proposes a Pattern Recognition system which employs an unsupervised clustering algorithm with the objective of detecting, counting and recognizing a number of dynamic objects crossing a roadway. This strategy defines a virtual sensor, whose aim is similar to that of an inductive-loop in a traditional mechanism, i.e. to extract from the traffic video streaming a number of signals containing anarchic information about the road traffic. Then, the set of signals is filtered with the aim of conserving only motion’s significant patterns. Resulted data are subsequently processed by a statistical analysis technique so as to estimate and try to recognize a number of clusters corresponding to vehicles. 
Finite Mixture Models fitted by the EM algorithm are used to assess such clusters, which provides ∗Corresponding author Email addresses: hana.rabbouch@gmail.com (Hana RABBOUCH), foued.saadaoui@gmail.com (Foued SAÂDAOUI), rafaa_mraihi@yahoo.fr (Rafaa MRAIHI) Preprint submitted to Journal of LTEX Templates April 21, 2017", "title": "" }, { "docid": "4d84b8dbcd0d5922fa3b20287b75c449", "text": "We investigate an efficient parallelization of the most common iterative sparse tensor decomposition algorithms on distributed memory systems. A key operation in each iteration of these algorithms is the matricized tensor times Khatri-Rao product (MTTKRP). This operation amounts to element-wise vector multiplication and reduction depending on the sparsity of the tensor. We investigate a fine and a coarse-grain task definition for this operation, and propose hypergraph partitioning-based methods for these task definitions to achieve the load balance as well as reduce the communication requirements. We also design a distributed memory sparse tensor library, HyperTensor, which implements a well-known algorithm for the CANDECOMP-/PARAFAC (CP) tensor decomposition using the task definitions and the associated partitioning methods. We use this library to test the proposed implementation of MTTKRP in CP decomposition context, and report scalability results up to 1024 MPI ranks. We observed up to 194 fold speedups using 512 MPI processes on a well-known real world data, and significantly better performance results with respect to a state of the art implementation.", "title": "" }, { "docid": "6c682f3412cc98eac5ae2a2356dccef7", "text": "Since their inception, micro-size light emitting diode (µLED) arrays based on III-nitride semiconductors have emerged as a promising technology for a range of applications. This paper provides an overview on a decade progresses on realizing III-nitride µLED based high voltage single-chip AC/DC-LEDs without power converters to address the key compatibility issue between LEDs and AC power grid infrastructure; and high-resolution solid-state self-emissive microdisplays operating in an active driving scheme to address the need of high brightness, efficiency and robustness of microdisplays. These devices utilize the photonic integration approach by integrating µLED arrays on-chip. Other applications of nitride µLED arrays are also discussed.", "title": "" }, { "docid": "14fe7deaece11b3d4cd4701199a18599", "text": "\"Natively unfolded\" proteins occupy a unique niche within the protein kingdom in that they lack ordered structure under conditions of neutral pH in vitro. Analysis of amino acid sequences, based on the normalized net charge and mean hydrophobicity, has been applied to two sets of proteins: small globular folded proteins and \"natively unfolded\" ones. The results show that \"natively unfolded\" proteins are specifically localized within a unique region of charge-hydrophobicity phase space and indicate that a combination of low overall hydrophobicity and large net charge represent a unique structural feature of \"natively unfolded\" proteins.", "title": "" }, { "docid": "041772bbad50a5bf537c0097e1331bdd", "text": "As students read expository text, comprehension is improved by pausing to answer questions that reinforce the material. We describe an automatic question generator that uses semantic pattern recognition to create questions of varying depth and type for self-study or tutoring. Throughout, we explore how linguistic considerations inform system design. 
In the described system, semantic role labels of source sentences are used in a domain-independent manner to generate both questions and answers related to the source sentence. Evaluation results show a 44% reduction in the error rate relative to the best prior systems, averaging over all metrics, and up to 61% reduction in the error rate on grammaticality judgments.", "title": "" }, { "docid": "d1eed1d7875930865944c98fbab5f7e1", "text": "Optic disc (OD) and fovea locations are two important anatomical landmarks in automated analysis of retinal disease in color fundus photographs. This paper presents a new, fast, fully automatic optic disc and fovea localization algorithm developed for diabetic retinopathy (DR) screening. The optic disc localization methodology comprises of two steps. First, the OD location is identified using template matching and directional matched filter. To reduce false positives due to bright areas of pathology, we exploit vessel characteristics inside the optic disc. The location of the fovea is estimated as the point of lowest matched filter response within a search area determined by the optic disc location. Second, optic disc segmentation is performed. Based on the detected optic disc location, a fast hybrid level-set algorithm which combines the region information and edge gradient to drive the curve evolution is used to segment the optic disc boundary. Extensive evaluation was performed on 1200 images (Messidor) composed of 540 images of healthy retinas, 431 images with DR but no risk of macular edema (ME), and 229 images with DR and risk of ME. The OD location methodology obtained 98.3% success rate, while fovea location achieved 95% success rate. The average mean absolute distance (MAD) between the OD segmentation algorithm and “gold standard” is 10.5% of estimated OD radius. Qualitatively, 97% of the images achieved Excellent to Fair performance for OD segmentation. The segmentation algorithm performs well even on blurred images.", "title": "" } ]
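The self-organizing map passage above (the credit card fraud record) walks through choosing random weight vectors, finding the best-matching neuron by Euclidean distance as in its equation (1), and nudging neurons toward each input with a small learning rate. The sketch below is a generic SOM of that kind, assuming already-normalized numeric transaction vectors; it is not the authors' code, and the grid size, learning rate, decay factors and neighborhood radius are made-up defaults.

```python
import numpy as np

def train_som(data, n_neurons=16, epochs=20, lr=0.5, radius=2.0, seed=0):
    """Minimal 1-D SOM; data is an (n_samples, n_features) array of normalized vectors."""
    rng = np.random.default_rng(seed)
    weights = rng.normal(size=(n_neurons, data.shape[1]))  # step 1: random weight vectors
    for _ in range(epochs):
        for x in data:
            # Best-matching unit: j_X = argmin_j ||X - W_j||, i.e. equation (1).
            bmu = int(np.argmin(np.linalg.norm(weights - x, axis=1)))
            # Gaussian neighborhood: influence shrinks with grid distance from the BMU.
            grid_dist = np.abs(np.arange(n_neurons) - bmu)
            h = np.exp(-(grid_dist ** 2) / (2 * radius ** 2))
            weights += lr * h[:, None] * (x - weights)   # move neurons toward the input
        lr *= 0.95      # decay learning rate
        radius *= 0.95  # decay neighborhood radius
    return weights

def assign_clusters(data, weights):
    """Map each transaction vector to the index of its best-matching neuron."""
    return np.argmin(np.linalg.norm(data[:, None, :] - weights[None, :, :], axis=2), axis=1)

# Example with synthetic, already-normalized 'transaction' vectors.
rng = np.random.default_rng(1)
transactions = rng.normal(size=(300, 6))
w = train_som(transactions)
labels = assign_clusters(transactions, w)
print(np.bincount(labels, minlength=w.shape[0]))
```

Neurons that end up attracting mostly suspicious transactions can then be treated as the fraudulent cluster Fk, with the remaining neurons forming Gk, which is the role the passage assigns to the SOM output before the manual review step.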
scidocsrr
8542d6e847a4522a40e735600bd2095a
An efficient data replication and load balancing technique for fog computing environment
[ { "docid": "780f2a97da4f18fc3710fa0ca0489ef4", "text": "MapReduce has gradually become the framework of choice for \"big data\". The MapReduce model allows for efficient and swift processing of large scale data with a cluster of compute nodes. However, the efficiency here comes at a price. The performance of widely used MapReduce implementations such as Hadoop suffers in heterogeneous and load-imbalanced clusters. We show the disparity in performance between homogeneous and heterogeneous clusters in this paper to be high. Subsequently, we present MARLA, a MapReduce framework capable of performing well not only in homogeneous settings, but also when the cluster exhibits heterogeneous properties. We address the problems associated with existing MapReduce implementations affecting cluster heterogeneity, and subsequently present through MARLA the components and trade-offs necessary for better MapReduce performance in heterogeneous cluster and cloud environments. We quantify the performance gains exhibited by our approach against Apache Hadoop and MARIANE in data intensive and compute intensive applications.", "title": "" } ]
[ { "docid": "8c3ecd27a695fef2d009bbf627820a0d", "text": "This paper presents a novel attention mechanism to improve stereo-vision based object recognition systems in terms of recognition performance and computational efficiency at the same time. We utilize the Stixel World, a compact medium-level 3D representation of the local environment, as an early focus-of-attention stage for subsequent system modules. In particular, the search space of computationally expensive pattern classifiers is significantly narrowed down. We explicitly couple the 3D Stixel representation with prior knowledge about the object class of interest, i.e. 3D geometry and symmetry, to precisely focus processing on well-defined local regions that are consistent with the environment model. Experiments are conducted on large real-world datasets captured from a moving vehicle in urban traffic. In case of vehicle recognition as an experimental testbed, we demonstrate that the proposed Stixel-based attention mechanism significantly reduces false positive rates at constant sensitivity levels by up to a factor of 8 over state-of-the-art. At the same time, computational costs are reduced by more than an order of magnitude.", "title": "" }, { "docid": "2c0b3b58da77cc217e4311142c0aa196", "text": "In this paper, we show that the hinge loss can be interpreted as the neg-log-likelihood of a semi-parametric model of posterior probabilities. From this point of view, SVMs represent the parametric component of a semi-parametric model fitted by a maximum a posteriori estimation procedure. This connection enables to derive a mapping from SVM scores to estimated posterior probabilities. Unlike previous proposals, the suggested mapping is interval-valued, providing a set of posterior probabilities compatible with each SVM score. This framework offers a new way to adapt the SVM optimization problem to unbalanced classification, when decisions result in unequal (asymmetric) losses. Experiments show improvements over state-of-the-art procedures.", "title": "" }, { "docid": "9c7f9ff55b02bd53e94df004dcc615b9", "text": "Support Vector Machines (SVM) is among the most popular classification techniques in machine learning, hence designing fast primal SVM algorithms for large-scale datasets is a hot topic in recent years. This paper presents a new L2norm regularized primal SVM solver using Augmented Lagrange Multipliers, with linear computational cost for Lp-norm loss functions. The most computationally intensive steps (that determine the algorithmic complexity) of the proposed algorithm is purely and simply matrix-byvector multiplication, which can be easily parallelized on a multi-core server for parallel computing. We implement and integrate our algorithm into the interfaces and framework of the well-known LibLinear software toolbox. Experiments show that our algorithm is with stable performance and on average faster than the stateof-the-art solvers such as SVM perf , Pegasos and the LibLinear that integrates the TRON, PCD and DCD algorithms.", "title": "" }, { "docid": "5d7dced0ed875fed0f11440dc26fffd1", "text": "Different from conventional mobile networks designed to optimize the transmission efficiency of one particular service (e.g., streaming voice/ video) primarily, the industry and academia are reaching an agreement that 5G mobile networks are projected to sustain manifold wireless requirements, including higher mobility, higher data rates, and lower latency. 
For this purpose, 3GPP has launched the standardization activity for the first phase 5G system in Release 15 named New Radio (NR). To fully understand this crucial technology, this article offers a comprehensive overview of the state-of-the-art development of NR, including deployment scenarios, numerologies, frame structure, new waveform, multiple access, initial/random access procedure, and enhanced carrier aggregation (CA) for resource requests and data transmissions. The provided insights thus facilitate knowledge of design and practice for further features of NR.", "title": "" }, { "docid": "96d8e375616a7ee137276d385c14a18a", "text": "Constructivism is a theory of learning which claims that students construct knowledge rather than merely receive and store knowledge transmitted by the teacher. Constructivism has been extremely influential in science and mathematics education, but not in computer science education (CSE). This paper surveys constructivism in the context of CSE, and shows how the theory can supply a theoretical basis for debating issues and evaluating proposals.", "title": "" }, { "docid": "70f0997789d4d61a6e5d44f15a6af32a", "text": "This study reviewed the literature on cone-beam computerized tomography (CBCT) imaging of the oral and maxillofacial (OMF) region. A PUBMED search (National Library of Medicine, NCBI; revised 1 December 2007) from 1998 to December 2007 was conducted. This search revealed 375 papers, which were screened in detail. 176 papers were clinically relevant and were analyzed in detail. CBCT is used in OMF surgery and orthodontics for numerous clinical applications, particularly for its low cost, easy accessibility and low radiation compared with multi-slice computerized tomography. The results of this systematic review show that there is a lack of evidence-based data on the radiation dose for CBCT imaging. Terminology and technical device properties and settings were not consistent in the literature. An attempt was made to provide a minimal set of CBCT device-related parameters for dedicated OMF scanners as a guideline for future studies.", "title": "" }, { "docid": "4d91850baa5995bc7d5e3d5e9e11fa58", "text": "Drug risk management has many tools for minimizing risk and black-boxed warnings (BBWs) are one of those tools. Some serious adverse drug reactions (ADRs) emerge only after a drug is marketed and used in a larger population. In Thailand, additional legal warnings after drug approval, in the form of black-boxed warnings, may be applied. Review of their characteristics can assist in the development of effective risk mitigation. This study was a cross sectional review of all legal warnings imposed in Thailand after drug approval (2003-2012). Any boxed warnings for biological products and revised warnings which were not related to safety were excluded. Nine legal warnings were evaluated. Seven related to drugs classes and two to individual drugs. The warnings involved four main types of predictable ADRs: drug-disease interactions, side effects, overdose and drug-drug interactions. The average time from first ADRs reported to legal warnings implementation was 12 years. The triggers were from both safety signals in Thailand and regulatory measures in other countries outside Thailand.", "title": "" }, { "docid": "dc71b53847d33e82c53f0b288da89bfa", "text": "We explore the use of convolutional neural networks for the semantic classification of remote sensing scenes. 
Two recently proposed architectures, CaffeNet and GoogLeNet, are adopted, with three different learning modalities. Besides conventional training from scratch, we resort to pre-trained networks that are only fine-tuned on the target data, so as to avoid overfitting problems and reduce design time. Experiments on two remote sensing datasets, with markedly different characteristics, testify on the effectiveness and wide applicability of the proposed solution, which guarantees a significant performance improvement over all state-of-the-art references.", "title": "" }, { "docid": "5e0921d158f0fa7b299fffba52f724d5", "text": "Space syntax derives from a set of analytic measures of configuration that have been shown to correlate well with how people move through and use buildings and urban environments. Space syntax represents the open space of an environment in terms of the intervisibility of points in space. The measures are thus purely configurational, and take no account of attractors, nor do they make any assumptions about origins and destinations or path planning. Space syntax has found that, despite many proposed higher-level cognitive models, there appears to be a fundamental process that informs human and social usage of an environment. In this paper we describe an exosomatic visual architecture, based on space syntax visibility graphs, giving many agents simultaneous access to the same pre-processed information about the configuration of a space layout. Results of experiments in a simulated retail environment show that a surprisingly simple ‘random next step’ based rule outperforms a more complex ‘destination based’ rule in reproducing observed human movement behaviour. We conclude that the effects of spatial configuration on movement patterns that space syntax studies have found are consistent with a model of individual decision behaviour based on the spatial affordances offered by the morphology of the local visual field.", "title": "" }, { "docid": "5910bcdd2dcacb42d47194a70679edb1", "text": "Developing effective suspicious activity detection methods has become an increasingly critical problem for governments and financial institutions in their efforts to fight money laundering. Previous anti-money laundering (AML) systems were mostly rule-based systems which suffered from low efficiency and could can be easily learned and evaded by money launders. Recently researchers have begun to use machine learning methods to solve the suspicious activity detection problem. However nearly all these methods focus on detecting suspicious activities on accounts or individual level. In this paper we propose a sequence matching based algorithm to identify suspicious sequences in transactions. Our method aims to pick out suspicious transaction sequences using two kinds of information as reference sequences: 1) individual account’s transaction history and 2) transaction information from other accounts in a peer group. By introducing the reference sequences, we can combat those who want to evade regulations by simply learning and adapting reporting criteria, and easily detect suspicious patterns. The initial results show that our approach is highly accurate.", "title": "" }, { "docid": "a0eb1b462d2169f5e7fa67690169591f", "text": "In this paper, we present 3 different neural network-based methods to perform variable selection. 
OCD Optimal Cell Damage is a pruning method, which evaluates the usefulness of a variable and prunes the least useful ones (it is related to the Optimal Brain Damage method of J_.e Cun et al.). Regularization theory proposes to constrain estimators by adding a term to the cost function used to train a neural network. In the Bayesian framework, this additional term can be interpreted as the log prior to the weights distribution. We propose to use two priors (a Gaussian and a Gaussian mixture) and show that this regularization approach allows to select efficient subsets of variables. Our methods are compared to conventional statistical selection procedures and are shown to significantly improve on that.", "title": "" }, { "docid": "6d3dbbf788255dfc137b1324e491fd9d", "text": "Nowadays, a great number of healthcare data are generated every day from both medical institutions and individuals. Healthcare information exchange (HIE) has been proved to benefit the medical industry remarkably. To store and share such large amount of healthcare data is important while challenging. In this paper, we propose BlocHIE, a Blockchain-based platform for healthcare information exchange. First, we analyze the different requirements for sharing healthcare data from different sources. Based on the analysis, we employ two loosely-coupled Blockchains to handle different kinds of healthcare data. Second, we combine off-chain storage and on-chain verification to satisfy the requirements of both privacy and authenticability. Third, we propose two fairness-based packing algorithms to improve the system throughput and the fairness among users jointly. To demonstrate the practicability and effectiveness of BlocHIE, we implement BlocHIE in a minimal-viable-product way and evaluate the proposed packing algorithms extensively.", "title": "" }, { "docid": "3714dabbe309545a1926e06e82f91975", "text": "The automatic generation of anime characters offers an opportunity to bring a custom character into existence without professional skill. Besides, professionals may also take advantages of the automatic generation for inspiration on animation and game character design. however results from existing models [15, 18, 8, 22, 12] on anime image generation are blurred and distorted on an non-trivial frequency, thus generating industry-standard facial images for anime characters remains a challenge. In this paper, we propose a model that produces anime faces at high quality with promising rate of success with three-fold contributions: A clean dataset from Getchu, a suitable DRAGAN[10]-based SRResNet[11]like GAN model, and our general approach to training conditional model from image with estimated tags as conditions. We also make available a public accessible web interface.", "title": "" }, { "docid": "22bb6af742b845dea702453b6b14ef3a", "text": "Errors are prevalent in data sequences, such as GPS trajectories or sensor readings. Existing methods on cleaning sequential data employ a constraint on value changing speeds and perform constraint-based repairing. While such speed constraints are effective in identifying large spike errors, the small errors that do not significantly deviate from the truth and indeed satisfy the speed constraints can hardly be identified and repaired. To handle such small errors, in this paper, we propose a statistical based cleaning method. Rather than declaring a broad constraint of max/min speeds, we model the probability distribution of speed changes. 
The repairing problem is thus to maximize the likelihood of the sequence w.r.t. the probability of speed changes. We formalize the likelihood-based cleaning problem, show its NP-hardness, devise exact algorithms, and propose several approximate/heuristic methods to trade off effectiveness for efficiency. Experiments on real data sets (in various applications) demonstrate the superiority of our proposal.", "title": "" }, { "docid": "658c7ae98ea4b0069a7a04af1e462307", "text": "Exploiting packetspsila timing information for covert communication in the Internet has been explored by several network timing channels and watermarking schemes. Several of them embed covert information in the inter-packet delay. These channels, however, can be detected based on the perturbed traffic pattern, and their decoding accuracy could be degraded by jitter, packet loss and packet reordering events. In this paper, we propose a novel TCP-based timing channel, named TCPScript to address these shortcomings. TCPScript embeds messages in ldquonormalrdquo TCP data bursts and exploits TCPpsilas feedback and reliability service to increase the decoding accuracy. Our theoretical capacity analysis and extensive experiments have shown that TCPScript offers much higher channel capacity and decoding accuracy than an IP timing channel and JitterBug. On the countermeasure, we have proposed three new metrics to detect aggressive TCPScript channels.", "title": "" }, { "docid": "0b7ed990d65be35f445d4243d627f9cd", "text": "A middle-1x nm design rule multi-level NAND flash memory cell (M1X-NAND) has been successfully developed for the first time. 1) QSPT (Quad Spacer Patterning Technology) of ArF immersion lithography is used for patterning mid-1x nm rule wordline (WL). In order to achieve high performance and reliability, several integration technologies are adopted, such as 2) advanced WL air-gap process, 3) floating gate slimming process, and 4) optimized junction formation scheme. And also, by using 5) new N±1 WL Vpass scheme during programming, charge loss and program speed are greatly improved. As a result, mid-1x nm design rule NAND flash memories has been successfully realized.", "title": "" }, { "docid": "17ed907c630ec22cbbb5c19b5971238d", "text": "The fastest tools for network reachability queries use adhoc algorithms to compute all packets from a source S that can reach a destination D. This paper examines whether network reachability can be solved efficiently using existing verification tools. While most verification tools only compute reachability (“Can S reach D?”), we efficiently generalize them to compute all reachable packets. Using new and old benchmarks, we compare model checkers, SAT solvers and various Datalog implementations. The only existing verification method that worked competitively on all benchmarks in seconds was Datalog with a new composite Filter-Project operator and a Difference of Cubes representation. While Datalog is slightly slower than the Hassel C tool, it is far more flexible. We also present new results that more precisely characterize the computational complexity of network verification. This paper also provides a gentle introduction to program verification for the networking community.", "title": "" }, { "docid": "db8b26229ced95bab2028d0b8eb8a43f", "text": "OBJECTIVES\nThis study investigated isometric and isokinetic hip strength in individuals with and without symptomatic femoroacetabular impingement (FAI). 
The specific aims were to: (i) determine whether differences exist in isometric and isokinetic hip strength measures between groups; (ii) compare hip strength agonist/antagonist ratios between groups; and (iii) examine relationships between hip strength and self-reported measures of either hip pain or function in those with FAI.\n\n\nDESIGN\nCross-sectional.\n\n\nMETHODS\nFifteen individuals (11 males; 25±5 years) with symptomatic FAI (clinical examination and imaging (alpha angle >55° (cam FAI), and lateral centre edge angle >39° and/or positive crossover sign (combined FAI))) and 14 age- and sex-matched disease-free controls (no morphological FAI on magnetic resonance imaging) underwent strength testing. Maximal voluntary isometric contraction strength of hip muscle groups and isokinetic hip internal (IR) and external rotation (ER) strength (20°/s) were measured. Groups were compared with independent t-tests and Mann-Whitney U tests.\n\n\nRESULTS\nParticipants with FAI had 20% lower isometric abduction strength than controls (p=0.04). There were no significant differences in isometric strength for other muscle groups or peak isokinetic ER or IR strength. The ratio of isometric, but not isokinetic, ER/IR strength was significantly higher in the FAI group (p=0.01). There were no differences in ratios for other muscle groups. Angle of peak IR torque was the only feature correlated with symptoms.\n\n\nCONCLUSIONS\nIndividuals with symptomatic FAI demonstrate isometric hip abductor muscle weakness and strength imbalance in the hip rotators. Strength measurement, including agonist/antagonist ratios, may be relevant for clinical management of FAI.", "title": "" }, { "docid": "d284fff9eed5e5a332bb3cfc612a081a", "text": "This paper describes the NILC USP system that participated in SemEval-2013 Task 2: Sentiment Analysis in Twitter. Our system adopts a hybrid classification process that uses three classification approaches: rulebased, lexicon-based and machine learning approaches. We suggest a pipeline architecture that extracts the best characteristics from each classifier. Our system achieved an Fscore of 56.31% in the Twitter message-level subtask.", "title": "" } ]
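The last record above describes a hybrid sentiment classifier that pipelines rule-based, lexicon-based and machine-learning stages. The sketch below shows one plausible wiring of such a cascade, where a later stage only answers when the earlier, higher-precision stages abstain; the rules, the tiny lexicon and the trivial fallback stage are placeholders, not the system's actual resources.

```python
from typing import Optional

def rule_stage(text: str) -> Optional[str]:
    """High-precision hand-written rules; return None to abstain."""
    lowered = text.lower()
    if "not good" in lowered or "terrible" in lowered:
        return "negative"
    if "love it" in lowered:
        return "positive"
    return None

def lexicon_stage(text: str) -> Optional[str]:
    """Vote with a tiny sentiment lexicon; abstain on ties."""
    lexicon = {"good": 1, "great": 1, "happy": 1, "bad": -1, "awful": -1, "sad": -1}
    score = sum(lexicon.get(tok, 0) for tok in text.lower().split())
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return None

def ml_stage(text: str) -> str:
    """Placeholder for a trained classifier that always returns a label."""
    return "neutral"

def classify(text, stages):
    for stage in stages[:-1]:
        label = stage(text)
        if label is not None:          # earlier, more precise stages win when confident
            return label
    return stages[-1](text)            # final stage always answers

print(classify("The screen is great but the battery is awful", [rule_stage, lexicon_stage, ml_stage]))
```

Ordering the stages from highest precision to highest coverage is the design choice that lets such a cascade keep the complementary strengths of each classifier.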
scidocsrr
73692ba8a5f51af778e831da4f05c222
MEMD: A Diversity-Promoting Learning Framework for Short-Text Conversation
[ { "docid": "b712552d760c887131f012e808dca253", "text": "To the same utterance, people’s responses in everyday dialogue may be diverse largely in terms of content semantics, speaking styles, communication intentions and so on. Previous generative conversational models ignore these 1-to-n relationships between a post to its diverse responses, and tend to return high-frequency but meaningless responses. In this study we propose a mechanism-aware neural machine for dialogue response generation. It assumes that there exists some latent responding mechanisms, each of which can generate different responses for a single input post. With this assumption we model different responding mechanisms as latent embeddings, and develop a encoder-diverter-decoder framework to train its modules in an end-to-end fashion. With the learned latent mechanisms, for the first time these decomposed modules can be used to encode the input into mechanism-aware context, and decode the responses with the controlled generation styles and topics. Finally, the experiments with human judgements, intuitive examples, detailed discussions demonstrate the quality and diversity of the generated responses with 9.80% increase of acceptable ratio over the best of six baseline methods.", "title": "" } ]
[ { "docid": "f530aab20b4650bf767cfd77d6676130", "text": "Obfuscated malware has become popular because of pure benefits brought by obfuscation: low cost and readily availability of obfuscation tools accompanied with good result of evading signature based anti-virus detection as well as prevention of reverse engineer from understanding malwares' true nature. Regardless obfuscation methods, a malware must deobfuscate its core code back to clear executable machine code so that malicious portion will be executed. Thus, to analyze the obfuscation pattern before unpacking provide a chance for us to prevent malware from further execution. In this paper, we propose a heuristic detection approach that targets obfuscated windows binary files being loaded into memory - prior to execution. We perform a series of static check on binary file's PE structure for common traces of a packer or obfuscation, and gauge a binary's maliciousness with a simple risk rating mechanism. As a result, a newly created process, if flagged as possibly malicious by the static screening, will be prevented from further execution. This paper explores the foundation of this research, as well as the testing methodology and current results.", "title": "" }, { "docid": "d56563772b2c3132166d810cc150d402", "text": "PURPOSE\nFour US National Clinical Trials Network components (Southwest Oncology Group, Cancer and Leukemia Group B/Alliance, Eastern Cooperative Oncology Group, and the AIDS Malignancy Consortium) conducted a phase II Intergroup clinical trial that used early interim fluorodeoxyglucose positron emission tomography (FDG-PET) imaging to determine the utility of response-adapted therapy for stage III to IV classic Hodgkin lymphoma.\n\n\nPATIENTS AND METHODS\nThe Southwest Oncology Group S0816 (Fludeoxyglucose F 18-PET/CT Imaging and Combination Chemotherapy With or Without Additional Chemotherapy and G-CSF in Treating Patients With Stage III or Stage IV Hodgkin Lymphoma) trial enrolled 358 HIV-negative patients between July 1, 2009, and December 2, 2012. A PET scan was performed after two initial cycles of doxorubicin, bleomycin, vinblastine, and dacarbazine (ABVD) and was labeled PET2. PET2-negative patients (Deauville score 1 to 3) received an additional four cycles of ABVD, whereas PET2-positive patients (Deauville score 4 to 5) were switched to escalated bleomycin, etoposide, doxorubicin, cyclophosphamide, vincristine, procarbazine, and prednisone (eBEACOPP) for six cycles. Among 336 eligible and evaluable patients, the median age was 32 years (range, 18 to 60 years), with 52% stage III, 48% stage IV, 49% International Prognostic Score 0 to 2, and 51% score 3 to 7.\n\n\nRESULTS\nThree hundred thirty-six of the enrolled patients were evaluable. Central review of the interim PET2 scan was performed in 331 evaluable patients, with 271 (82%) PET2-negative and 60 (18%) PET2-positive. Of 60 eligible PET2-positive patients, 49 switched to eBEACOPP as planned and 11 declined. With a median follow-up of 39.7 months, the Kaplan-Meier estimate for 2-year overall survival was 98% (95% CI, 95% to 99%), and the 2-year estimate for progression-free survival (PFS) was 79% (95% CI, 74% to 83%). The 2-year estimate for PFS in the subset of patients who were PET2-positive after two cycles of ABVD was 64% (95% CI, 50% to 75%). 
Both nonhematologic and hematologic toxicities were greater in the eBEACOPP arm than in the continued ABVD arm.\n\n\nCONCLUSION\nResponse-adapted therapy based on interim PET imaging after two cycles of ABVD seems promising with a 2-year PFS of 64% for PET2-positive patients, which is much higher than the expected 2-year PFS of 15% to 30%.", "title": "" }, { "docid": "ae2f92c2e3254185a0a459d485c5f266", "text": "Automatic age estimation from facial images is challenging not only for computers, but also for humans in some cases. Therefore, coarse age groups such as children, teen age, adult and senior adult are considered in age classification, instead of evaluating specific age. In this paper, we propose an approach that provides a significant improvement in performance on benchmark databases and standard protocols for age classification. Our approach is based on deep learning techniques. We optimize the network architecture using the Deep IDentification-verification features, which are proved very efficient for face representation. After reducing the redundancy among the large number of output features, we apply different classifiers to classify the facial images to different age group with the final features. The experimental analysis shows that the proposed approach outperforms the reported state-of-the-arts on both constrained and unconstrained databases.", "title": "" }, { "docid": "db8fde03e01920c45f507f7e0e94d918", "text": "OBJECTIVE\nTo assess the relative importance of independent risk factors for peripheral intravenous catheter (PIVC) failure.\n\n\nMETHODS\nSecondary data analysis from a randomized controlled trial of PIVC dwell time. The Prentice, Williams, and Peterson statistical model was used to identify and compare risk factors for phlebitis, occlusion, and accidental removal.\n\n\nSETTING\nThree acute care hospitals in Queensland, Australia.\n\n\nPARTICIPANTS\nThe trial included 3,283 adult medical and surgical patients (5,907 catheters) with a PIVC with greater than 4 days of expected use.\n\n\nRESULTS\nModifiable risk factors for occlusion included hand, antecubital fossa, or upper arm insertion compared with forearm (hazard ratio [HR], 1.47 [95% confidence interval (CI), 1.28-1.68], 1.27 [95% CI, 1.08-1.49], and 1.25 [95% CI, 1.04-1.50], respectively); and for phlebitis, larger diameter PIVC (HR, 1.48 [95% CI, 1.08-2.03]). PIVCs inserted by the operating and radiology suite staff had lower occlusion risk than ward insertions (HR, 0.80 [95% CI, 0.67-0.94]). Modifiable risks for accidental removal included hand or antecubital fossa insertion compared with forearm (HR, 2.45 [95% CI, 1.93-3.10] and 1.65 [95% CI, 1.23-2.22], respectively), clinical staff insertion compared with intravenous service (HR, 1.69 [95% CI, 1.30-2.20]); and smaller PIVC diameter (HR, 1.29 [95% CI, 1.02-1.61]). 
Female sex was a nonmodifiable factor associated with an increased risk of both phlebitis (HR, 1.64 [95% CI, 1.28-2.09]) and occlusion (HR, 1.44 [95% CI, 1.30-1.61]).\n\n\nCONCLUSIONS\nPIVC survival is improved by preferential forearm insertion, selection of appropriate PIVC diameter, and insertion by intravenous teams and other specialists.\n\n\nTRIAL REGISTRATION\nThe original randomized controlled trial on which this secondary analysis is based is registered with the Australian New Zealand Clinical Trials Registry (http://www.anzctr.org.au; ACTRN12608000445370).", "title": "" }, { "docid": "c1918430cadc2bf8355f3fb8beef80f6", "text": "This paper presents the research results of an ongoing technology transfer project carried out in cooperation between the University of Salerno and a small software company. The project is aimed at developing and transferring migration technology to the industrial partner. The partner should be enabled to migrate monolithic multi-user COBOL legacy systems to a multi-tier Web-based architecture. The assessment of the legacy systems of the partner company revealed that these systems had a very low level of decomposability with spaghetti-like code and embedded control flow and database accesses within the user interface descriptions. For this reason, it was decided to adopt an incremental migration strategy based on the reengineering of the user interface using Web technology, on the transformation of interactive legacy programs into batch programs, and the wrapping of the legacy programs. A middleware framework links the new Web-based user interface with the Wrapped Legacy System. An Eclipse plug-in, named MELIS (migration environment for legacy information systems), was also developed to support the migration process. Both the migration strategy and the tool have been applied to two essential subsystems of the most business critical legacy system of the partner company. Copyright © 2008 John Wiley & Sons, Ltd.", "title": "" }, { "docid": "9687de8102eecaf14261e0a32318b146", "text": "This paper presents a set of algorithms for distinguishing personal names with multiple real referents in text, based on little or no supervision. The approach utilizes an unsupervised clustering technique over a rich feature space of biographic facts, which are automatically extracted via a language-independent bootstrapping process. The induced clustering of named entities are then partitioned and linked to their real referents via the automatically extracted biographic data. Performance is evaluated based on both a test set of handlabeled multi-referent personal names and via automatically generated pseudonames.", "title": "" }, { "docid": "48f0ab93c3a281b78cce57136200b05d", "text": "Many dependable systems rely on the integrity of the position of their components. In such systems, two key problems are secure localization and secure location verification of the components. Researchers proposed several solutions, which generally require expensive infrastructures of several fixed stations (anchors) with trusted positions. In this paper, we explore the approach of replacing all the fixed anchors with a single drone that flies through a sequence of waypoints. At each waypoint, the drone acts as an anchor and securely determines the positions. This approach completely eliminates the need for many expensive anchors. The main challenge becomes how to find a convenient path for the drone to do this for all the devices. 
The problem presents novel aspects, which make existing path planning algorithms unsuitable. We propose LocalizerBee, VerifierBee, and PreciseVerifierBee: three path planning algorithms that allow a drone to respectively measure, verify, and verify with a guaranteed precision a set of positions in a secure manner. They are able to securely localize all the positions in a generic deployment area, even in the presence of drone control errors. Moreover, they produce short path lengths and they run in a reasonable processing time.", "title": "" }, { "docid": "6eda7075de9d47851b2b5be026af7d84", "text": "Maintaining consistent styles across glyphs is an arduous task in typeface design. In this work we introduce FlexyFont, a flexible tool for synthesizing a complete typeface that has a consistent style with a given small set of glyphs. Motivated by a key fact that typeface designers often maintain a library of glyph parts to achieve a consistent typeface, we intend to learn part consistency between glyphs of different characters across typefaces. We take a part assembling approach by firstly decomposing the given glyphs into semantic parts and then assembling them according to learned sets of transferring rules to reconstruct the missing glyphs. To maintain style consistency, we represent the style of a font as a vector of pairwise part similarities. By learning a distribution over these feature vectors, we are able to predict the style of a novel typeface given only a few examples. We utilize a popular machine learning method as well as retrieval-based methods to quantitatively assess the performance of our feature vector, resulting in favorable results. We also present an intuitive interface that allows users to interactively create novel typefaces with ease. The synthesized fonts can be directly used in real-world design.", "title": "" }, { "docid": "9ace030a915a6ec8bf8f35b918c8c8aa", "text": "Why are boys at risk? To address this question, I use the perspective of regulation theory to offer a model of the deeper psychoneurobiological mechanisms that underlie the vulnerability of the developing male. The central thesis of this work dictates that significant gender differences are seen between male and female social and emotional functions in the earliest stages of development, and that these result from not only differences in sex hormones and social experiences but also in rates of male and female brain maturation, specifically in the early developing right brain. I present interdisciplinary research which indicates that the stress-regulating circuits of the male brain mature more slowly than those of the female in the prenatal, perinatal, and postnatal critical periods, and that this differential structural maturation is reflected in normal gender differences in right-brain attachment functions. Due to this maturational delay, developing males also are more vulnerable over a longer period of time to stressors in the social environment (attachment trauma) and toxins in the physical environment (endocrine disruptors) that negatively impact right-brain development. In terms of differences in gender-related psychopathology, I describe the early developmental neuroendocrinological and neurobiological mechanisms that are involved in the increased vulnerability of males to autism, early onset schizophrenia, attention deficit hyperactivity disorder, and conduct disorders as well as the epigenetic mechanisms that can account for the recent widespread increase of these disorders in U.S. culture. 
I also offer a clinical formulation of early assessments of boys at risk, discuss the impact of early childcare on male psychopathogenesis, and end with a neurobiological model of optimal adult male socioemotional functions.", "title": "" }, { "docid": "01e4741bc502dfc3ec6baf227494dc5d", "text": "In this letter, we present a novel circularly polarized (CP) origami antenna. We fold paper in the form of an origami tetrahedron to serve as the substrate of the antenna. The antenna comprises two triangular monopole elements that are perpendicular to each other. Circular polarization characteristics are achieved by exciting both elements with equal magnitudes and with a phase difference of 90°. In this letter, we explain the origami folding steps in detail. We also verify the proposed concept of the CP origami antenna by performing simulations and measurements using a fabricated prototype. The antenna exhibits a 10-dB impedance bandwidth of 70.2% (2.4–5 GHz), and a 3-dB axial-ratio bandwidth of 8% (3.415–3.7 GHz). The measured left-hand circular polarization gain of the antenna is in the range of 5.2–5.7 dBi for the 3-dB axial-ratio bandwidth.", "title": "" }, { "docid": "2f02235636c5c0aecd8918cba512888d", "text": "To determine whether an AIDS prevention mass media campaign influenced risk perception, self-efficacy and other behavioural predictors. We used household survey data collected from 2,213 sexually experienced male and female Kenyans aged 15-39. Respondents were administered a questionnaire asking them about their exposure to branded and generic mass media messages concerning HIV/AIDS and condom use. They were asked questions concerning their personal risk perception, self-efficacy, condom effectiveness, condom availability, and their embarrassment in obtaining condoms. Logistic regression analysis was used to determine the impact of exposure to mass media messages on these predictors of behaviour change. Those exposed to branded advertising messages were significantly more likely to consider themselves at higher risk of acquiring HIV and to believe in the severity of AIDS. Exposure to branded messages was also associated with a higher level of personal self-efficacy, a greater belief in the efficacy of condoms, a lower level of perceived difficulty in obtaining condoms and reduced embarrassment in purchasing condoms. Moreover, there was a dose-response relationship: a higher intensity of exposure to advertising was associated with more positive outcomes. Exposure to generic advertising messages was less frequently associated with positive health beliefs and these relationships were also weaker. Branded mass media campaigns that promote condom use as an attractive lifestyle choice are likely to contribute to the development of perceptions that are conducive to the adoption of condom use.", "title": "" }, { "docid": "b4c395b97f0482f3c1224ed6c8623ac2", "text": "The Scientific Computation Language (SCL) was designed mainly for developing computational models in education and research. This paper presents the justification for such a language, its relevant features, and a case study of a computational model implemented with the SCL.\n Development of the SCL language is part of the OOPsim project, which has had partial NSF support (CPATH). 
One of the goals of this project is to develop tools and approaches for designing and implementing computational models, emphasizing multi-disciplinary teams in the development process.\n A computational model is a computer implementation of the solution to a (scientific) problem for which a mathematical representation has been formulated. Developing a computational model consists of applying Computer Science concepts, principles and methods.\n The language syntax is defined at a higher level of abstraction than C, and includes language statements for improving program readability, debugging, maintenance, and correctness. The language design was influenced by Ada, Pascal, Eiffel, Java, C, and C++.\n The keywords have been added to maintain full compatibility with C. The SCL language translator is an executable program that is implemented as a one-pass language processor that generates C source code. The generated code can be integrated conveniently with any C and/or C++ library, on Linux and Windows (and MacOS). The semantics of SCL is informally defined to be the same C semantics.", "title": "" }, { "docid": "469e5c159900b9d6662a9bfe9e01fde7", "text": "In the research of rule extraction from neural networks,fidelity describes how well the rules mimic the behavior of a neural network whileaccuracy describes how well the rules can be generalized. This paper identifies thefidelity-accuracy dilemma. It argues to distinguishrule extraction using neural networks andrule extraction for neural networks according to their different goals, where fidelity and accuracy should be excluded from the rule quality evaluation framework, respectively.", "title": "" }, { "docid": "3db5de38523a93ace428835ecc81f5bb", "text": "Ahstract- Advanced driver assistance systems allow for increasing user comfort and safety by sensing the environment and anticipating upcoming hazards. Often, this requires to accurately predict how situations will change. Recent approaches make simplifying assumptions on the predictive model of the Ego-Vehicle motion or assume prior knowledge, such as road topologies, to be available. However, in many urban areas this assumption is not satisfied. Furthermore, temporary changes (e.g. construction areas, vehicles parked on the street) are not considered by such models. Since many cars observe the environment with several different sensors, predictive models can benefit from them by considering environmental properties. In this work, we present an approach for an Ego-Vehicle path prediction from such sensor measurements of the static vehicle environment. Besides proposing a learned model for predicting the driver's multi-modal future path as a grid-based prediction, we derive an approach for extracting paths from it. In driver assistance systems both can be used to solve varying assistance tasks. The proposed approach is evaluated on real driving data and outperforms several baseline approaches.", "title": "" }, { "docid": "0bcb2fdf59b88fca5760bfe456d74116", "text": "A good distance metric is crucial for unsupervised learning from high-dimensional data. To learn a metric without any constraint or class label information, most unsupervised metric learning algorithms appeal to projecting observed data onto a low-dimensional manifold, where geometric relationships such as local or global pairwise distances are preserved. However, the projection may not necessarily improve the separability of the data, which is the desirable outcome of clustering. 
In this paper, we propose a novel unsupervised adaptive metric learning algorithm, called AML, which performs clustering and distance metric learning simultaneously. AML projects the data onto a low-dimensional manifold, where the separability of the data is maximized. We show that the joint clustering and distance metric learning can be formulated as a trace maximization problem, which can be solved via an iterative procedure in the EM framework. Experimental results on a collection of benchmark data sets demonstrated the effectiveness of the proposed algorithm.", "title": "" }, { "docid": "04c33e2e517b01fd2555586018087d67", "text": "The increased fuel economy and driveability of modern internal combustion engine vehicles (ICEVs) are the result of the application of advanced digital electronics to control the operation of the internal combustion engine (ICE). Microprocessors (and microcontrollers) play a key role in the engine control, by precisely controlling the amount of both air and fuel admitted into the cylinders. Air intake is controlled by utilizing a throttle valve equipped with a motor and gear mechanism as actuator, and a sensor enabling the measurement of the angular position of the blades. This paperwork presents a lab setup that allows students to control the throttle position using a microcontroller that runs a program developed by them. A commercial throttle body has been employed, whereas a power amplifier and a microcontroller board have been hand assembled to complete the experimental setup. This setup, while based in a high-tech, microprocessor-based solution for a real-world, engine operation optimization problem, has the potential to engage students around a hands-on multidisciplinary lab activity and ignite their interest in learning fundamental and advanced topics of microprocessors systems.", "title": "" }, { "docid": "afaa988666cc6b2790696bbb0d69ff73", "text": "Despite being one of the most popular tasks in lexical semantics, word similarity has often been limited to the English language. Other languages, even those that are widely spoken such as Spanish, do not have a reliable word similarity evaluation framework. We put forward robust methodologies for the extension of existing English datasets to other languages, both at monolingual and cross-lingual levels. We propose an automatic standardization for the construction of cross-lingual similarity datasets, and provide an evaluation, demonstrating its reliability and robustness. Based on our procedure and taking the RG-65 word similarity dataset as a reference, we release two high-quality Spanish and Farsi (Persian) monolingual datasets, and fifteen cross-lingual datasets for six languages: English, Spanish, French, German, Portuguese, and Farsi.", "title": "" }, { "docid": "555f06011d03cbe8dedb2fcd198540e9", "text": "We focus on the challenging task of real-time semantic segmentation in this paper. It finds many practical applications and yet is with fundamental difficulty of reducing a large portion of computation for pixel-wise label inference. We propose an image cascade network (ICNet) that incorporates multi-resolution branches under proper label guidance to address this challenge. We provide in-depth analysis of our framework and introduce the cascade feature fusion unit to quickly achieve highquality segmentation. 
Our system yields real-time inference on a single GPU card with decent quality results evaluated on challenging datasets like Cityscapes, CamVid and COCO-Stuff.", "title": "" }, { "docid": "00b73790bb0bb2b828e1d443d3e13cf4", "text": "Grippers and robotic hands are an important field in robotics. Recently, the combination of grasping devices and haptic feedback has been a promising avenue for many applications such as laparoscopic surgery and spatial telemanipulation. This paper presents the work behind a new self-adaptive, a.k.a. underactuated, gripper with a proprioceptive haptic feedback in which the apparent stiffness of the gripper as seen by its actuator is used to estimate contact location. This system combines many technologies and concepts in an integrated mechatronic tool. Among them, underactuated grasping, haptic feedback, compliant joints and a differential seesaw mechanism are used. Following a theoretical modeling of the gripper based on the virtual work principle, the authors present numerical data used to validate this model. Then, a presentation of the practical prototype is given, discussing the sensors, controllers, and mechanical architecture. Finally, the control law and the experimental validation of the haptic feedback are presented.", "title": "" }, { "docid": "0edc89fbf770bbab2fb4d882a589c161", "text": "A calculus is developed in this paper (Part I) and the sequel (Part II) for obtaining bounds on delay and buffering requirements in a communication network operating in a packet switched mode under a fixed routing strategy. The theory we develop is different from traditional approaches to analyzing delay because the model we use to describe the entry of data into the network is nonprobabilistic: We suppose that the data stream entered into the network by any given user satisfies “burstiness constraints.” A data stream is said to satisfy a burstiness constraint if the quantity of data from the stream contained in any interval of time is less than a value that depends on the length of the interval. Several network elements are defined that can be used as building blocks to model a wide variety of communication networks. Each type of network element is analyzed by assuming that the traffic entering it satisfies burstiness constraints. Under this assumption bounds are obtained on delay and buffering requirements for the network element, and burstiness constraints satisfied by the traffic that exits the element are derived. Index Terms -Queueing networks, burstiness, flow control, packet switching, high speed networks.", "title": "" } ]
scidocsrr
80e660b1f34ece6588bd8e90f3ee3849
A Regulation Scheme Based on the Ciphertext-Policy Hierarchical Attribute-Based Encryption in Bitcoin System
[ { "docid": "0a2ad953e83268b1dde1ba1598190414", "text": "This paper looks at the challenges and opportunities of implementing blockchain technology across banking, providing food for thought about the potentialities of this disruptive technology. The blockchain technology can optimize the global financial infrastructure, achieving sustainable development, using more efficient systems than at present. In fact, many banks are currently focusing on blockchain technology to promote economic growth and accelerate the development of green technologies. In order to understand the potential of blockchain technology to support the financial system, we studied the actual performance of the Bitcoin system, also highlighting its major limitations, such as the significant energy consumption due to the high computing power required, and the high cost of hardware. We estimated the electrical power and the hash rate of the Bitcoin network, over time, and, in order to evaluate the efficiency of the Bitcoin system in its actual operation, we defined three quantities: “economic efficiency”, “operational efficiency”, and “efficient service”. The obtained results show that by overcoming the disadvantages of the Bitcoin system, and therefore of blockchain technology, we could be able to handle financial processes in a more efficient way than under the current system.", "title": "" }, { "docid": "d8e7c9b871f542cd40835b131eedb60a", "text": "Attribute-based encryption (ABE) systems allow encrypting to uncertain receivers by means of an access policy specifying the attributes that the intended receivers should possess. ABE promises to deliver fine-grained access control of encrypted data. However, when data are encrypted using an ABE scheme, key management is difficult if there is a large number of users from various backgrounds. In this paper, we elaborate ABE and propose a new versatile cryptosystem referred to as ciphertext-policy hierarchical ABE (CPHABE). In a CP-HABE scheme, the attributes are organized in a matrix and the users having higher-level attributes can delegate their access rights to the users at a lower level. These features enable a CP-HABE system to host a large number of users from different organizations by delegating keys, e.g., enabling efficient data sharing among hierarchically organized large groups. We construct a CP-HABE scheme with short ciphertexts. The scheme is proven secure in the standard model under non-interactive assumptions.", "title": "" } ]
[ { "docid": "333b3349cdcb6ddf44c697e827bcfe62", "text": "Harmful cyanobacterial blooms, reflecting advanced eutrophication, are spreading globally and threaten the sustainability of freshwater ecosystems. Increasingly, non-nitrogen (N(2))-fixing cyanobacteria (e.g., Microcystis) dominate such blooms, indicating that both excessive nitrogen (N) and phosphorus (P) loads may be responsible for their proliferation. Traditionally, watershed nutrient management efforts to control these blooms have focused on reducing P inputs. However, N loading has increased dramatically in many watersheds, promoting blooms of non-N(2) fixers, and altering lake nutrient budgets and cycling characteristics. We examined this proliferating water quality problem in Lake Taihu, China's 3rd largest freshwater lake. This shallow, hyper-eutrophic lake has changed from bloom-free to bloom-plagued conditions over the past 3 decades. Toxic Microcystis spp. blooms threaten the use of the lake for drinking water, fisheries and recreational purposes. Nutrient addition bioassays indicated that the lake shifts from P limitation in winter-spring to N limitation in cyanobacteria-dominated summer and fall months. Combined N and P additions led to maximum stimulation of growth. Despite summer N limitation and P availability, non-N(2) fixing blooms prevailed. Nitrogen cycling studies, combined with N input estimates, indicate that Microcystis thrives on both newly supplied and previously-loaded N sources to maintain its dominance. Denitrification did not relieve the lake of excessive N inputs. Results point to the need to reduce both N and P inputs for long-term eutrophication and cyanobacterial bloom control in this hyper-eutrophic system.", "title": "" }, { "docid": "250d9c5222f8c968cd79bd2e3881c1e0", "text": "Computer networks have evolved due to new trends in the society's needs since their emergence as a way of providing remote access and sharing of computational resources. The architecture inflexibility of the computer networks presents a challenge for researchers, since their experiments can hardly be evaluated in real networks. Thus, in general, tests of new technologies are conducted on network simulators, which imply in a streamline of the reality. The paradigm of Software Defined Networks (SDN) and OpenFlow architecture, offer a way for the implementation of a programmable network architecture, able to be implemented gradually in production networks, which offers the possibility of separating the control mechanisms of the many traffic flows served, so that a scientific experiment can be performed in a real network (adapted for SDN) without interfering with its operation. This paper contextualizes the existing problems in current computer networks, and presents the SDN network as one of the main proposals for the viability of the Internet of the Future. In this context, it is discussed the OpenFlow architecture, which allows the creation of applications for Software Defined Networks. Finally it is presented the network simulator SDN, the Mininet, which implements the OpenFlow interface in a network simulation scenario containing a controller POX with two components, one OpenFlow switch and three nodes. The main objective was to evaluate the communication and bandwidth between nodes.", "title": "" }, { "docid": "51ac5dde554fd8363fcf95e6d3caf439", "text": "Swarm intelligence is a relatively novel field. 
It addresses the study of the collective behaviors of systems made by many components that coordinate using decentralized controls and self-organization. A large part of the research in swarm intelligence has focused on the reverse engineering and the adaptation of collective behaviors observed in natural systems with the aim of designing effective algorithms for distributed optimization. These algorithms, like their natural systems of inspiration, show the desirable properties of being adaptive, scalable, and robust. These are key properties in the context of network routing, and in particular of routing in wireless sensor networks. Therefore, in the last decade, a number of routing protocols for wireless sensor networks have been developed according to the principles of swarm intelligence, and, in particular, taking inspiration from the foraging behaviors of ant and bee colonies. In this paper, we provide an extensive survey of these protocols. We discuss the general principles of swarm intelligence and of its application to routing. We also introduce a novel taxonomy for routing protocols in wireless sensor networks and use it to classify the surveyed protocols. We conclude the paper with a critical analysis of the status of the field, pointing out a number of fundamental issues related to the (mis) use of scientific methodology and evaluation procedures, and we identify some future research directions. 2010 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "7b99f2b0c903797c5ed33496f69481fc", "text": "Dance imagery is a consciously created mental representation of an experience, either real or imaginary, that may affect the dancer and her or his movement. In this study, imagery research in dance was reviewed in order to: 1. describe the themes and ideas that the current literature has attempted to illuminate and 2. discover the extent to which this literature fits the Revised Applied Model of Deliberate Imagery Use. A systematic search was performed, and 43 articles from 24 journals were found to fit the inclusion criteria. The articles were reviewed, analyzed, and categorized. The findings from the articles were then reported using the Revised Applied Model as a framework. Detailed descriptions of Who, What, When and Where, Why, How, and Imagery Ability were provided, along with comparisons to the field of sports imagery. Limitations within the field, such as the use of non-dance-specific and study-specific measurements, make comparisons and clear conclusions difficult to formulate. Future research can address these problems through the creation of dance-specific measurements, higher participant rates, and consistent methodologies between studies.", "title": "" }, { "docid": "5fb732fd3210a5c9bba42426b1b4ce49", "text": "While there are optimal TSP solvers, as well as recent learning-based approaches, the generalization of the TSP to the Multiple Traveling Salesmen Problem is much less studied. Here, we design a neural network solution that treats the salesmen, cities and depot as three different sets of varying cardinalities. We apply a novel technique that combines elements from recent architectures that were developed for sets, as well as elements from graph networks. 
Coupled with new constraint enforcing output layers, a dedicated loss, and a search method, our solution is shown to outperform all the meta-heuristics of the leading solver in the field.", "title": "" }, { "docid": "53343bc045189bf7578619e7d60a36ba", "text": "Financial technology (FinTech) is the new business model and technology which aims to compete with traditional financial services and blockchain is one of most famous technology use of FinTech. Blockchain is a type of distributed, electronic database (ledger) which can hold any information (e.g. records, events, transactions) and can set rules on how this information is updated. The most well-known application of blockchain is bitcoin, which is a kind of cryptocurrencies. But it can also be used in many other financial and commercial applications. A prominent example is smart contracts, for instance as offered in Ethereum. A contract can execute a transfer when certain events happen, such as payment of a security deposit, while the correct execution is enforced by the consensus protocol. The purpose of this paper is to explore the research and application landscape of blockchain technology acceptance by following a more comprehensive approach to address blockchain technology adoption. This research is to propose a unified model integrating Innovation Diffusion Theory (IDT) model and Technology Acceptance Model (TAM) to investigate continuance intention to adopt blockchain technology.", "title": "" }, { "docid": "91d8e79b31a07aff4c1ee16570ae49ad", "text": "Endmember extraction is a process to identify the hidden pure source signals from the mixture. In the past decade, numerous algorithms have been proposed to perform this estimation. One commonly used assumption is the presence of pure pixels in the given image scene, which are detected to serve as endmembers. When such pixels are absent, the image is referred to as the highly mixed data, for which these algorithms at best can only return certain data points that are close to the real endmembers. To overcome this problem, we present a novel method without the pure-pixel assumption, referred to as the minimum volume constrained nonnegative matrix factorization (MVC-NMF), for unsupervised endmember extraction from highly mixed image data. Two important facts are exploited: First, the spectral data are nonnegative; second, the simplex volume determined by the endmembers is the minimum among all possible simplexes that circumscribe the data scatter space. The proposed method takes advantage of the fast convergence of NMF schemes, and at the same time eliminates the pure-pixel assumption. The experimental results based on a set of synthetic mixtures and a real image scene demonstrate that the proposed method outperforms several other advanced endmember detection approaches", "title": "" }, { "docid": "3bf37b20679ca6abd022571e3356e95d", "text": "OBJECTIVE\nOur goal is to create an ontology that will allow data integration and reasoning with subject data to classify subjects, and based on this classification, to infer new knowledge on Autism Spectrum Disorder (ASD) and related neurodevelopmental disorders (NDD). 
We take a first step toward this goal by extending an existing autism ontology to allow automatic inference of ASD phenotypes and Diagnostic & Statistical Manual of Mental Disorders (DSM) criteria based on subjects' Autism Diagnostic Interview-Revised (ADI-R) assessment data.\n\n\nMATERIALS AND METHODS\nKnowledge regarding diagnostic instruments, ASD phenotypes and risk factors was added to augment an existing autism ontology via Ontology Web Language class definitions and semantic web rules. We developed a custom Protégé plugin for enumerating combinatorial OWL axioms to support the many-to-many relations of ADI-R items to diagnostic categories in the DSM. We utilized a reasoner to infer whether 2642 subjects, whose data was obtained from the Simons Foundation Autism Research Initiative, meet DSM-IV-TR (DSM-IV) and DSM-5 diagnostic criteria based on their ADI-R data.\n\n\nRESULTS\nWe extended the ontology by adding 443 classes and 632 rules that represent phenotypes, along with their synonyms, environmental risk factors, and frequency of comorbidities. Applying the rules on the data set showed that the method produced accurate results: the true positive and true negative rates for inferring autistic disorder diagnosis according to DSM-IV criteria were 1 and 0.065, respectively; the true positive rate for inferring ASD based on DSM-5 criteria was 0.94.\n\n\nDISCUSSION\nThe ontology allows automatic inference of subjects' disease phenotypes and diagnosis with high accuracy.\n\n\nCONCLUSION\nThe ontology may benefit future studies by serving as a knowledge base for ASD. In addition, by adding knowledge of related NDDs, commonalities and differences in manifestations and risk factors could be automatically inferred, contributing to the understanding of ASD pathophysiology.", "title": "" }, { "docid": "e3c3f3fb3dd432017bf92e0fe5f7c341", "text": "This study aimed to evaluate the accuracy of intraoral scanners in full-arch scans. A representative model with 14 prepared abutments was digitized using an industrial scanner (reference scanner) as well as four intraoral scanners (iTero, CEREC AC Bluecam, Lava C.O.S., and Zfx IntraScan). Datasets obtained from different scans were loaded into 3D evaluation software, superimposed, and compared for accuracy. One-way analysis of variance (ANOVA) was implemented to compute differences within groups (precision) as well as comparisons with the reference scan (trueness). A level of statistical significance of p < 0.05 was set. Mean trueness values ranged from 38 to 332.9 μm. Data analysis yielded statistically significant differences between CEREC AC Bluecam and other scanners as well as between Zfx IntraScan and Lava C.O.S. Mean precision values ranged from 37.9 to 99.1 μm. Statistically significant differences were found between CEREC AC Bluecam and Lava C.O.S., CEREC AC Bluecam and iTero, Zfx Intra Scan and Lava C.O.S., and Zfx Intra Scan and iTero (p < 0.05). Except for one intraoral scanner system, all tested systems showed a comparable level of accuracy for full-arch scans of prepared teeth. Further studies are needed to validate the accuracy of these scanners under clinical conditions. Despite excellent accuracy in single-unit scans having been demonstrated, little is known about the accuracy of intraoral scanners in simultaneous scans of multiple abutments. 
Although most of the tested scanners showed comparable values, the results suggest that the inaccuracies of the obtained datasets may contribute to inaccuracies in the final restorations.", "title": "" }, { "docid": "58cfc1f2f7c56794cdf0d81133253c00", "text": "Machine reading comprehension with unanswerable questions aims to abstain from answering when no answer can be inferred. In addition to extract answers, previous works usually predict an additional “no-answer” probability to detect unanswerable cases. However, they fail to validate the answerability of the question by verifying the legitimacy of the predicted answer. To address this problem, we propose a novel read-then-verify system, which not only utilizes a neural reader to extract candidate answers and produce noanswer probabilities, but also leverages an answer verifier to decide whether the predicted answer is entailed by the input snippets. Moreover, we introduce two auxiliary losses to help the reader better handle answer extraction as well as noanswer detection, and investigate three different architectures for the answer verifier. Our experiments on the SQuAD 2.0 dataset show that our system obtains a score of 74.2 F1 on test set, achieving state-of-the-art results at the time of submission (Aug. 28th, 2018).", "title": "" }, { "docid": "91f3268092606d2bd1698096e32c824f", "text": "Classic pipeline models for task-oriented dialogue system require explicit modeling the dialogue states and hand-crafted action spaces to query a domain-specific knowledge base. Conversely, sequence-to-sequence models learn to map dialogue history to the response in current turn without explicit knowledge base querying. In this work, we propose a novel framework that leverages the advantages of classic pipeline and sequence-to-sequence models. Our framework models a dialogue state as a fixed-size distributed representation and use this representation to query a knowledge base via an attention mechanism. Experiment on Stanford Multi-turn Multi-domain Taskoriented Dialogue Dataset shows that our framework significantly outperforms other sequenceto-sequence based baseline models on both automatic and human evaluation. Title and Abstract in Chinese 面向任务型对话中基于对话状态表示的序列到序列学习 面向任务型对话中,传统流水线模型要求对对话状态进行显式建模。这需要人工定义对 领域相关的知识库进行检索的动作空间。相反地,序列到序列模型可以直接学习从对话 历史到当前轮回复的一个映射,但其没有显式地进行知识库的检索。在本文中,我们提 出了一个结合传统流水线与序列到序列二者优点的模型。我们的模型将对话历史建模为 一组固定大小的分布式表示。基于这组表示,我们利用注意力机制对知识库进行检索。 在斯坦福多轮多领域对话数据集上的实验证明,我们的模型在自动评价与人工评价上优 于其他基于序列到序列的模型。", "title": "" }, { "docid": "20b28dd4a0717add4e032976a7946109", "text": "In planning an s-curve speed profile for a computer numerical control (CNC) machine, centripetal acceleration and its derivative have to be considered. In a CNC machine, these quantities dictate how much voltage and current should be applied to servo motor windings. In this paper, the necessity of considering centripetal jerk in speed profile generation especially in the look-ahead mode is explained. It is demonstrated that the magnitude of centripetal jerk is proportional to the curvature derivative of the path known as \"sharpness\". It is also explained that a proper limited jerk motion is only possible when a G2-continuous machining path is planned. Then using a simplified mathematical representation of clothoids, a novel method for approximating a given path with a sequence of clothoid segments is proposed. Using this method, a semi-parallel G2-continuous path with adjustable deviation from the original shape for a sample machining contour is generated. 
Maximum permissible feed rate for the generated path is also calculated.", "title": "" }, { "docid": "24a1aae42134632d5091ab0b2b008c6b", "text": "Several visual feature extraction algorithms have recently appeared in the literature, with the goal of reducing the computational complexity of state-of-the-art solutions (e.g., SIFT and SURF). Therefore, it is necessary to evaluate the performance of these emerging visual descriptors in terms of processing time, repeatability and matching accuracy, and whether they can obtain competitive performance in applications such as image retrieval. This paper aims to provide an up-to-date detailed, clear, and complete evaluation of local feature detector and descriptors, focusing on the methods that were designed with complexity constraints, providing a much needed reference for researchers in this field. Our results demonstrate that recent feature extraction algorithms, e.g., BRISK and ORB, have competitive performance requiring much lower complexity and can be efficiently used in low-power devices.", "title": "" }, { "docid": "a48622ff46323acf1c40345d3e61b636", "text": "In this paper we present a novel dataset for a critical aspect of autonomous driving, the joint attention that must occur between drivers and of pedestrians, cyclists or other drivers. This dataset is produced with the intention of demonstrating the behavioral variability of traffic participants. We also show how visual complexity of the behaviors and scene understanding is affected by various factors such as different weather conditions, geographical locations, traffic and demographics of the people involved. The ground truth data conveys information regarding the location of participants (bounding boxes), the physical conditions (e.g. lighting and speed) and the behavior of the parties involved.", "title": "" }, { "docid": "a5f557ddac63cd24a11c1490e0b4f6d4", "text": "Continuous opinion dynamics optimizer (CODO) is an algorithm based on human collective opinion formation process for solving continuous optimization problems. In this paper, we have studied the impact of topology and introduction of leaders in the society on the optimization performance of CODO. We have introduced three new variants of CODO and studied the efficacy of algorithms on several benchmark functions. Experimentation demonstrates that scale free CODO performs significantly better than all algorithms. Also, the role played by individuals with different degrees during the optimization process is studied.", "title": "" }, { "docid": "544a5a95a169b9ac47960780ac09de80", "text": "Monte Carlo Tree Search methods have led to huge progress in Computer Go. Still, program performance is uneven most current Go programs are much stronger in some aspects of the game, such as local fighting and positional evaluation, than in others. Well known weaknesses of many programs include the handling of several simultaneous fights, including the “two safe groups” problem, and dealing with coexistence in seki. Starting with a review of MCTS techniques, several conjectures regarding the behavior of MCTS-based Go programs in specific types of Go situations are made. Then, an extensive empirical study of ten leading Go programs investigates their performance of two specifically designed test sets containing “two safe group” and seki situations. The results give a good indication of the state of the art in computer Go as of 2012/2013. 
They show that while a few of the very top programs can apparently solve most of these evaluation problems in their playouts already, these problems are difficult to solve by global search. ∗shihchie@ualberta.ca †mmueller@ualberta.ca", "title": "" }, { "docid": "329259263340b063bfad7bc34f5d376a", "text": "We analyze the problem of disparate impact in credit scoring and evaluate three approaches to identifying and correcting the problem, namely: 1) post-development univariate test with variable elimination, 2) postdevelopment multivariate test with variable elimination, 3) control variable approach with coefficient adjustment. The third approach is a new innovation developed by the authors. Results are illustrated with simulation data calibrated to actual distributions of typical variables used in score development. Results show that controlling disparate impact by eliminating variables may have unintended and undesirable consequences.", "title": "" }, { "docid": "193943c42bbc4e9cba28a483f175b66a", "text": "While a user's preference is directly reflected in the interactive choice process between her and the recommender, this wealth of information was not fully exploited for learning recommender models. In particular, existing collaborative filtering (CF) approaches take into account only the binary events of user actions but totally disregard the contexts in which users' decisions are made. In this paper, we propose Collaborative Competitive Filtering (CCF), a framework for learning user preferences by modeling the choice process in recommender systems. CCF employs a multiplicative latent factor model to characterize the dyadic utility function. But unlike CF, CCF models the user behavior of choices by encoding a local competition effect. In this way, CCF allows us to leverage dyadic data that was previously lumped together with missing data in existing CF models. We present two formulations and an efficient large scale optimization algorithm. Experiments on three real-world recommendation data sets demonstrate that CCF significantly outperforms standard CF approaches in both offline and online evaluations.", "title": "" }, { "docid": "2895d69b786be001e09e92ac8f0919a7", "text": "Wireless networking plays an extremely important role in civil and military applications. However, security of information transfer via wireless networks remains a challenging issue. It is critical to ensure that confidential data are accessible only to the intended users rather than intruders. Jamming and eavesdropping are two primary attacks at the physical layer of a wireless network. This article offers a tutorial on several prevalent methods to enhance security at the physical layer in wireless networks. We classify these methods based on their characteristic features into five categories, each of which is discussed in terms of two metrics. First, we compare their secret channel capacities, and then we show their computational complexities in exhaustive key search. Finally, we illustrate their security requirements via some examples with respect to these two metrics.", "title": "" }, { "docid": "c210e0a2ba0d8daf6935f4d825319886", "text": "Monte Carlo integration is a powerful technique for the evaluation of difficult integrals. Applications in rendering include distribution ray tracing, Monte Carlo path tracing, and form-factor computation for radiosity methods. 
In these cases variance can often be significantly reduced by drawing samples from several distributions, each designed to sample well some difficult aspect of the integrand. Normally this is done by explicitly partitioning the integration domain into regions that are sampled differently. We present a powerful alternative for constructing robust Monte Carlo estimators, by combining samples from several distributions in a way that is provably good. These estimators are unbiased, and can reduce variance significantly at little additional cost. We present experiments and measurements from several areas in rendering: calculation of glossy highlights from area light sources, the “final gather” pass of some radiosity algorithms, and direct solution of the rendering equation using bidirectional path tracing. CR Categories: I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism; I.3.3 [Computer Graphics]: Picture/Image Generation; G.1.9 [Numerical Analysis]: Integral Equations— Fredholm equations. Additional", "title": "" } ]
scidocsrr
b4e5cdf6e0a12cc02c858cdd6fd7eed1
Reverse Engineering of SPARQL Queries using Examples
[ { "docid": "8c4fb5e2579b9a0e4808cee43551c635", "text": "The Semantic Web is the initiative of the W3C to make information on the Web readable not only by humans but also by machines. RDF is the data model for Semantic Web data, and SPARQL is the standard query language for this data model. In the last ten years, we have witnessed a constant growth in the amount of RDF data available on the Web, which have motivated the theoretical study of some fundamental aspects of SPARQL and the development of efficient mechanisms for implementing this query language.\n Some of the distinctive features of RDF have made the study and implementation of SPARQL challenging. First, as opposed to usual database applications, the semantics of RDF is open world, making RDF databases inherently incomplete. Thus, one usually obtains partial answers when querying RDF with SPARQL, and the possibility of adding optional information if present is a crucial feature of SPARQL. Second, RDF databases have a graph structure and are interlinked, thus making graph navigational capabilities a necessary component of SPARQL. Last, but not least, SPARQL has to work at Web scale!\n RDF and SPARQL have attracted interest from the database community. However, we think that this community has much more to say about these technologies, and, in particular, about the fundamental database problems that need to be solved in order to provide solid foundations for the development of these technologies. In this paper, we survey some of the main results about the theory of RDF and SPARQL putting emphasis on some research opportunities for the database community.", "title": "" } ]
[ { "docid": "bbc2645372369d0ad68551b20e57e24b", "text": "The objective of this paper is to present an approach to electromagnetic field simulation based on the systematic use of the global (i.e. integral) quantities. In this approach, the equations of electromagnetism are obtained directly in a finite form starting from experimental laws without resorting to the differential formulation. This finite formulation is the natural extension of the network theory to electromagnetic field and it is suitable for computational electromagnetics.", "title": "" }, { "docid": "a7e5f9cf618d6452945cb6c4db628bbb", "text": "we present a motion capture device to measure in real-time table tennis strokes. A six degree-of-freedom sensing device, inserted into the racket handle, measures 3D acceleration and 3-axis angular velocity values at a high sampling rate. Data are wirelessly transmitted to a computer in real-time. This flexible system allows for recording and analyzing kinematics information on the motion of the racket, along with synchronized video and sound recordings. Recorded gesture data are analyzed using several algorithms we developed to segment and extract movement features, and to build a reference motion database.", "title": "" }, { "docid": "19b9445fb89be143d1c32691e5e3a64b", "text": "The typical approach for solving the problem of single-image super-resolution (SR) is to learn a nonlinear mapping between the low-resolution (LR) and high-resolution (HR) representations of images in a training set. Training-based approaches can be tuned to give high accuracy on a given class of images, but they call for retraining if the HR $\\rightarrow$ LR generative model deviates or if the test images belong to a different class, which limits their applicability. On the other hand, we propose a solution that does not require a training dataset. Our method relies on constructing a dynamic convolutional network (DCN) to learn the relation between the consecutive scales of Gaussian and Laplacian pyramids. The relation is in turn used to predict the detail at a finer scale, thus leading to SR. Comparisons with state-of-the-art techniques on standard datasets show that the proposed DCN approach results in about 0.8 and 0.3 dB gain in peak signal-to-noise ratio for $2\\times$ and $3\\times$ SR, respectively. The structural similarity index is on par with the competing techniques.", "title": "" }, { "docid": "041100d94d87a9a142bee5db98514d14", "text": "High stator core losses can pose a significant problem in interior permanent-magnet (IPM) machines operating over wide constant-power speed ranges. At lower speeds, the torque ripple can be undesirably large in some IPM machine designs, contributing to acoustic noise and vibration. While previous work has addressed these two problems independently, this paper shows that the conditions for reducing stator core losses during flux-weakening operation, dominated by harmonic eddy-current losses in the stator teeth, can conflict with the conditions for reducing the torque ripple of IPM machines. It is also shown that the resulting design tradeoffs depend on the details of the IPM machine topologies that are under consideration.
The appropriate IPM machine topologies that offer more favorable tradeoffs are identified to achieve the best possible compromise of high-speed stator core losses and torque ripple characteristics.", "title": "" }, { "docid": "17d06584c35a9879b0bd4b653ff64b40", "text": "We present a solution to the rolling shutter (RS) absolute camera pose problem with known vertical direction. Our new solver, R5Pup, is an extension of the general minimal solution R6P, which uses a double linearized RS camera model initialized by the standard perspective P3P. Here, thanks to using known vertical directions, we avoid double linearization and can get the camera absolute pose directly from the RS model without the initialization by a standard P3P. Moreover, we need only five 2D-to-3D matches while R6P needed six such matches. We demonstrate in simulated and real experiments that our new R5Pup is robust, fast and a very practical method for absolute camera pose computation for modern cameras on mobile devices. We compare our R5Pup to the state of the art RS and perspective methods and demonstrate that it outperforms them when vertical direction is known in the range of accuracy available on modern mobile devices. We also demonstrate that when using R5Pup solver in structure from motion (SfM) pipelines, it is better to transform already reconstructed scenes into the standard position, rather than using hard constraints on the verticality of up vectors.", "title": "" }, { "docid": "d388e381e918ba764b4c1805fa7551fc", "text": "In this paper a new protection scheme for DC traction supply system is introduced, which is known as “overload protection method”. In this scheme, with the knowledge of the number of traveling trains between two traction substations and the value of current which is drawn from each substation, the occurrence of short circuit fault is detected. The aforementioned data can be extracted and transmitted from railway traffic control system. Recently DDL (“Détection Défaut Lign in French”, which means “Line Fault detection”) protection method is used in supply line protection. In this paper, the electrical system of railway system is simulated using the data obtained from Tabriz (located in Iran) Urban Railway Organization. The performance of the conventional and proposed protection schemes is compared and the simulation results are presented and then the practical measures and requirements of both methods are investigated. According to the results obtained, both methods accomplish satisfactory protection performance; however the DDL protection scheme is severely sensitive to the change in components and supply system parameters and it is also hard to determine the setting range of protection parameters of this method. Therefore, it may lead to some undesired operations; while the proposed protection scheme is simpler and more reliable.", "title": "" }, { "docid": "c2a2c29b03ee90558325df7461124092", "text": "Effective thermal conductivity of mixtures of fluids and nanometer-size particles is measured by a steady-state parallel-plate method. The tested fluids contain two types of nanoparticles, Al2O3 and CuO, dispersed in water, vacuum pump fluid, engine oil, and ethylene glycol. Experimental results show that the thermal conductivities of nanoparticle–fluid mixtures are higher than those of the base fluids.
Using theoretical models of effective thermal conductivity of a mixture, we have demonstrated that the predicted thermal conductivities of nanoparticle–fluid mixtures are much lower than our measured data, indicating the deficiency in the existing models when used for nanoparticle–fluid mixtures. Possible mechanisms contributing to enhancement of the thermal conductivity of the mixtures are discussed. A more comprehensive theory is needed to fully explain the behavior of nanoparticle–fluid mixtures.", "title": "" }, { "docid": "81136c9fb730dbd45d2a266273d8d0fd", "text": "A stabilizing observer-based control algorithm for an in-wheel-motored vehicle is proposed, which generates direct yaw moment to compensate for the state deviations. The control scheme is based on a fuzzy rule-based body slip angle (beta) observer. In the design strategy of the fuzzy observer, the vehicle dynamics is represented by Takagi-Sugeno-like fuzzy models. Initially, local equivalent vehicle models are built using the linear approximations of vehicle dynamics for low and high lateral acceleration operating regimes, respectively. The optimal beta observer is then designed for each local model using Kalman filter theory. Finally, local observers are combined to form the overall control system by using fuzzy rules. These fuzzy rules represent the qualitative relationships among the variables associated with the nonlinear and uncertain nature of vehicle dynamics, such as tire force saturation and the influence of road adherence. An adaptation mechanism for the fuzzy membership functions has been incorporated to improve the accuracy and performance of the system. The effectiveness of this design approach has been demonstrated in simulations and in a real-time experimental setting.", "title": "" }, { "docid": "4f059822d0da0ada039b11c1d65c7c32", "text": "Lead time reduction is a key concern of many industrial buyers of capital facilities given current economic conditions. Supply chain initiatives in manufacturing settings have led owners to expect that dramatic reductions in lead time are possible in all phases of their business, including the delivery of capital materials. Further, narrowing product delivery windows and increasing pressure to be first-to-market create significant external pressure to reduce lead time. In this paper, a case study is presented in which an owner entered the construction supply chain to procure and position key long-lead materials. The materials were held at a position in the supply chain selected to allow some flexibility for continued customization, but dramatic reduction in the time-to-site. Simulation was used as a tool to consider time-to-site tradeoffs for multiple inventory locations so as to better match the needs of the construction effort.", "title": "" }, { "docid": "2088fcfb9651e2dfcbaa123b723ef8aa", "text": "Head pose estimation is not only a crucial preprocessing task in applications such as facial expression and face recognition, but also the core task for many others, e.g. gaze; driver focus of attention; head gesture recognitions. In real scenarios, the fine location and scale of a processed face patch should be consistently and automatically obtained. To this end, we propose a depth-based face spotting technique in which the face is cropped with respect to its depth data, and is modeled by its appearance features. By employing this technique, the localization rate was gained.
additionally, by building a head pose estimator on top of it, we achieved more accurate pose estimates and better generalization capability. To estimate the head pose, we exploit Support Vector (SV) regressors to map Histogram of oriented Gradient (HoG) features extracted from the spotted face patches in both depth and RGB images to the head rotation angles. The developed pose estimator compared favorably to state-of-the-art approaches on two challenging DRGB databases.", "title": "" }, { "docid": "a049d8375465cadb67a796c52bf42f79", "text": "We extend continuous assurance research by proposing a novel continuous assurance architecture grounded in information fusion research. Existing continuous assurance architectures focus primarily on methods of monitoring assurance clients’ systems to detect anomalous activities and have not addressed the question of how to process the detected anomalies. Consequently, actual implementations of these systems typically detect a large number of anomalies, with the resulting information overload leading to suboptimal decision making due to human information processing limitations. The proposed architecture addresses these issues by performing anomaly detection, aggregation and evaluation. Within the proposed architecture, artifacts developed in prior continuous assurance, ontology, and artificial intelligence research are used to perform the detection, aggregation and evaluation information fusion tasks. The architecture contributes to the academic continuous assurance literature and has implications for practitioners involved in the development of more robust and useful continuous assurance systems.", "title": "" }, { "docid": "da24fa3885407e0906880ea108f3e70c", "text": "Detection and recognition of text from natural images is very important for extracting information from images but is an extensively challenging task. This paper proposes an approach for detection of text area from natural scene images using Maximally Stable Extremal Regions (MSER) and recognizing the text using a self-trained Neural Network. Some preprocessing is applied to the image then MSER and canny edge is used to locate the smaller areas that may more likely contain text. The text is individually isolated as single characters by simple algorithms on the binary image and then passed through the recognition model specially designed for hazy and unaligned characters.", "title": "" }, { "docid": "c77fec3ea0167df15cfd4105a7101a1e", "text": "This paper is about extending the reach and endurance of outdoor localisation using stereo vision. At the heart of the localisation is the fundamental task of discovering feature correspondences between recorded and live images. One aspect of this problem involves deciding where to look for correspondences in an image and the second is deciding what to look for. This latter point, which is the main focus of our paper, requires understanding how and why the appearance of visual features can change over time. In particular, such knowledge allows us to better deal with abrupt and challenging changes in lighting. We show how by instantiating a parallel image processing stream which operates on illumination-invariant images, we can substantially improve the performance of an outdoor visual navigation system. 
We will demonstrate, explain and analyse the effect of the RGB to illumination-invariant transformation and suggest that for little cost it becomes a viable tool for those concerned with having robots operate for long periods outdoors.", "title": "" }, { "docid": "1c7457ef393a604447b0478451ef0c62", "text": "Melasma is an acquired increased pigmentation of the skin [1], a symmetric hypermelanosis, characterized by irregular light to gray brown macules. Melasma comes from the Greek word melas [= black color), formerly known as Chloasma, another Greek word meaning green color, even though the term was more often used for melasma cases during pregnancy. It is considered to be part of a large group of facial melanosis, such as Riehl’s melanosis, Lichen planuspigmentous, erythema dyschromicumperstans, erythrosis and poikiloderma of Civatte [2]. Hyperpigmented macules and patches are most commonly developed in the sun-exposed areas of the skin [3]. Melasma is considered to be a chronic acquired hypermelanosis of the skin [4], with poorly understood pathogenesis [5]. The increased pigmentation and the photo damaged features that characterize melasma include solar elastosis, even though the main pathogenesis still remains unknown [6].", "title": "" }, { "docid": "455b2a46ef0a6a032686eaaedf9cacf3", "text": "Recently, taxonomy has attracted much attention. Both automatic construction solutions and human-based computation approaches have been proposed. The automatic methods suffer from the problem of either low precision or low recall and human computation, on the other hand, is not suitable for large scale tasks. Motivated by the shortcomings of both approaches, we present a hybrid framework, which combines the power of machine-based approaches and human computation (the crowd) to construct a more complete and accurate taxonomy. Specifically, our framework consists of two steps: we first construct a complete but noisy taxonomy automatically, then crowd is introduced to adjust the entity positions in the constructed taxonomy. However, the adjustment is challenging as the budget (money) for asking the crowd is often limited. In our work, we formulate the problem of finding the optimal adjustment as an entity selection optimization (ESO) problem, which is proved to be NP-hard. We then propose an exact algorithm and a more efficient approximation algorithm with an approximation ratio of 1/2(1-1/e). We conduct extensive experiments on real datasets, the results show that our hybrid approach largely improves the recall of the taxonomy with little impairment for precision.", "title": "" }, { "docid": "08eac8e69ef59d9149f071472fb55670", "text": "This paper describes the issues and tradeoffs in the design and monolithic implementation of direct-conversion receivers and proposes circuit techniques that can alleviate the drawbacks of this architecture. Following a brief study of heterodyne and image-reject topologies, the direct-conversion architecture is introduced and effects such as dc offset, I=Q mismatch, even-order distortion, flicker noise, and oscillator leakage are analyzed. 
Related design techniques for amplification and mixing, quadrature phase calibration, and baseband processing are also described.", "title": "" }, { "docid": "96973058d3ca943f3621dfe843baf631", "text": "Many organizations are gradually catching up with the tide of adopting agile practices at workplace, but they seem to be struggling with how to choose the agile practices and mix them into their IT software project development and management. These organizations have already had their own development styles, many of which have adhered to the traditional plan-driven methods such as waterfall. The inherent corporate culture of resisting to change or hesitation to abandon what they have established for a whole new methodology hampers the process change. In this paper, we will review the current state of agile adoption in business organizations and propose a new approach to IT project development and management by blending Scrum, an agile method, into traditional plan-driven project development and management. The management activity involved in Scrum is discussed, the team and meeting composing of Scrum are investigated, the challenges and benefits of applying Scrum in traditional IT project development and management are analyzed, the blending structure is illustrated and discussed, and the iterative process with Scrum and planned process without Scrum are compared.", "title": "" }, { "docid": "a6fec60aeb6e5824ed07eaa3257969aa", "text": "What aspects of information assurance can be identified in Business-to-Consumer (B-toC) online transactions? The purpose of this research is to build a theoretical framework for studying information assurance based on a detailed analysis of academic literature for online exchanges in B-to-C electronic commerce. Further, a semantic network content analysis is conducted to analyze the representations of information assurance in B-to-C electronic commerce in the real online market place (transaction Web sites of selected Fortune 500 firms). The results show that the transaction websites focus on some perspectives and not on others. For example, we see an emphasis on the importance of technological and consumer behavioral elements of information assurance such as issues of online security and privacy. Further corporate practitioners place most emphasis on transaction-related information assurance issues. Interestingly, the product and institutional dimension of information assurance in online transaction websites are only", "title": "" }, { "docid": "a4059636cbdc058e3f3a7621155c68b7", "text": "A K-d tree represents a set of N points in K-dimensional space. Operations on a semidynamic tree may delete and undelete points, but may not insert new points. This paper shows that several operations that require O(log N) expected time in general K-d trees may be performed in constant expected time in semidynamic trees. These operations include deletion, undeletion, nearest neighbor searching, and fixed-radius near neighbor searching (the running times of the first two are proved, while the last two are supported by experiments and heuristic arguments).
Other new techniques can also be applied to general K-d trees: simple sampling reduces the time to build a tree from O(KN log N) to O(KN + N log N), and more advanced sampling builds a robust tree in the same time. The methods are straightforward to implement, and lead to a data structure that is significantly faster and less vulnerable to pathological inputs than ordinary K-d trees.", "title": "" } ]
scidocsrr
e857774073b48f639f1c3903f7f51615
Chinese Handwriting Imitation with Hierarchical Generative Adversarial Network
[ { "docid": "556c0c1662a64f484aff9d7556b2d0b5", "text": "In this paper, we investigate the Chinese calligraphy synthesis problem: synthesizing Chinese calligraphy images with specified style from standard font(eg. Hei font) images (Fig. 1(a)). Recent works mostly follow the stroke extraction and assemble pipeline which is complex in the process and limited by the effect of stroke extraction. In this work we treat the calligraphy synthesis problem as an image-to-image translation problem and propose a deep neural network based model which can generate calligraphy images from standard font images directly. Besides, we also construct a large scale benchmark that contains various styles for Chinese calligraphy synthesis. We evaluate our method as well as some baseline methods on the proposed dataset, and the experimental results demonstrate the effectiveness of our proposed model.", "title": "" } ]
[ { "docid": "3a81f0fc24dd90f6c35c47e60db3daa4", "text": "Advances in information and Web technologies have open numerous opportunities for online retailing. The pervasiveness of the Internet coupled with the keenness in competition among online retailers has led to virtual experiential marketing (VEM). This study examines the relationship of five VEM elements on customer browse and purchase intentions and loyalty, and the moderating effects of shopping orientation and Internet experience on these relationships. A survey was conducted of customers who frequently visited two online game stores to play two popular games in Taiwan. The results suggest that of the five VEM elements, three have positive effects on browse intention, and two on purchase intentions. Both browse and purchase intentions have positive effects on customer loyalty. Economic orientation was found to moderate that relationships between the VEM elements and browse and purchase intentions. However, convenience orientation moderated only the relationships between the VEM elements and browse intention.", "title": "" }, { "docid": "e25b65f5aa5a6322b4896136a17f427c", "text": "Applications with temperatures higher than the melting point of eutectic tin-lead solder (183°C) require high-melting-point solders. However, they are expensive and not widely available. With the adoption of lead-free legislation, first in Europe and then in many other countries, the electronics industry has transitioned from eutectic tin-lead to lead-free solders that have higher melting points. This higher melting point presents an opportunity for the manufacturers of high-temperature electronics to shift to mainstream lead-free solders. In this paper, ball grid arrays (BGAs), quad flat packages, and surface mount resistors assembled with SAC305 (96.5%Sn+3.0%Ag+0.5Cu) and Sn3.5Ag (96.5%Sn+3.5%Ag) solder pastes were subjected to thermal cycling from -40°C to 185°C. Commercially available electroless nickel immersion gold board finish was compared to custom Sn-based board finish designed for high temperatures. The data analysis showed that the type of solder paste and board finish used did not have an impact on the reliability of BGA solder joints. The failure analysis revealed the failure site to be on the package side of the solder joint. The evolution of intermetallic compounds after thermal cycling was analyzed.", "title": "" }, { "docid": "9b8ae286375fc40a027dba38f8fbdc9f", "text": "Video summarization is defined as creating a shorter video clip or a video poster which includes only the important scenes in the original video streams. In this paper, we propose two methods of generating a summary of arbitrary length for large sports video archives. One is to create a concise video clip by temporally compressing the amount of the video data. The other is to provide a video poster by spatially presenting the image keyframes which together represent the whole video content. Our methods deal with the metadata which has semantic descriptions of video content. Summaries are created according to the significance of each video segment which is normalized in order to handle large sports video archives. We experimentally verified the effectiveness of our methods by comparing the results with man-made video summaries", "title": "" }, { "docid": "817c64e272a744c00b46d2a98828dacb", "text": "Depression is highly prevalent in children and adolescents. 
Psychodynamic therapies are only insufficiently evaluated in this field although many children and adolescents suffering from depression are treated using this approach. Therefore, the aim of our study was to evaluate the efficacy of psychodynamic short-term psychotherapy (PSTP) for the treatment of depression in children and adolescents. In a waiting-list controlled study, 20 children and adolescents fulfilling diagnosis of major depression or dysthymia were included. The treatment group received 25 sessions of psychodynamic psychotherapy. Main outcome criterion was the Impairment-Score for Children and Adolescents (IS-CA) as well as the Psychic and Social-Communicative Findings Sheet for Children and Adolescents (PSCFS-CA) and the Child Behavior Checklist (CBCL), which were assessed at the beginning and the end of treatment. The statistical and clinical significance of changes in these measures were evaluated. There was a significant advantage of the treatment group compared to the waiting group for the IS-CA. The effect size of the IS-CA total score was 1,3. In contrast to the treatment group, where 20% of the children showed clinically significant and reliable improvement, no subject in the waiting-list control group met this criterion. Comparable results were found for the PSCFS-CA and for the internalising score assessed with the CBCL. The results show that psychodynamic short-term psychotherapy (PSTP) is an effective treatment for depressed children and adolescents. Still, some of the children surely require more intensive treatment.", "title": "" }, { "docid": "513239885e48a729e6f80a2df2e061c7", "text": "Schemes for FPE enable one to encrypt Social Security numbers (SSNs), credit card numbers (CCNs), and the like, doing so in such a way that the ciphertext has the same format as the plaintext. In the case of SSNs, for example, this means that the ciphertext, like the plaintext, consists of a nine decimal-digit string. Similarly, encryption of a 16-digit CCN results in a 16-digit ciphertext. FPE is rapidly emerging as a useful cryptographic tool, with applications including financial-information security, data sanitization, and transparently encrypting fields in a legacy database.", "title": "" }, { "docid": "9218a87b0fba92874e5f7917c925843a", "text": "For sophisticated reinforcement learning (RL) systems to interact usefully with real-world environments, we need to communicate complex goals to these systems. In this work, we explore goals defined in terms of (non-expert) human preferences between pairs of trajectory segments. We show that this approach can effectively solve complex RL tasks without access to the reward function, including Atari games and simulated robot locomotion, while providing feedback on less than 1% of our agent’s interactions with the environment. This reduces the cost of human oversight far enough that it can be practically applied to state-of-the-art RL systems. To demonstrate the flexibility of our approach, we show that we can successfully train complex novel behaviors with about an hour of human time. These behaviors and environments are considerably more complex than any which have been previously learned from human feedback.", "title": "" }, { "docid": "c3cb261d9dc6b92a6e69e4be7ec44978", "text": "An increasing number of studies in political communication focus on the “sentiment” or “tone” of news content, political speeches, or advertisements. 
This growing interest in measuring sentiment coincides with a dramatic increase in the volume of digitized information. Computer automation has a great deal of potential in this new media environment. The objective here is to outline and validate a new automated measurement instrument for sentiment analysis in political texts. Our instrument uses a dictionary-based approach consisting of a simple word count of the frequency of keywords in a text from a predefined dictionary. The design of the freely available Lexicoder Sentiment Dictionary (LSD) is discussed in detail here. The dictionary is tested against a body of human-coded news content, and the resulting codes are also compared to results from nine existing content-analytic dictionaries. Analyses suggest that the LSD produces results that are more systematically related to human coding than are results based on the other available dictionaries. The LSD is thus a useful starting point for a revived discussion about dictionary construction and validation in sentiment analysis for political communication.", "title": "" }, { "docid": "c6eb01a11e88dd686a47ca594b424350", "text": "Automatic fake news detection is an important, yet very challenging topic. Traditional methods using lexical features have only very limited success. This paper proposes a novel method to incorporate speaker profiles into an attention based LSTM model for fake news detection. Speaker profiles contribute to the model in two ways. One is to include them in the attention model. The other includes them as additional input data. By adding speaker profiles such as party affiliation, speaker title, location and credit history, our model outperforms the state-of-the-art method by 14.5% in accuracy using a benchmark fake news detection dataset. This proves that speaker profiles provide valuable information to validate the credibility of news articles.", "title": "" }, { "docid": "bea319596dd62f7b26b5ec22ff58aadb", "text": "We present a novel technique for texture mapping on arbitrary surfaces with minimal distortions, by preserving the local and global structure of the texture. The recent introduction of the fast marching method on triangulated surfaces [9], made it possible to compute geodesic distances in O(~n lg ~n) where ~n is the number of triangles that represent the surface. We use this method to design a surface flattening approach based on multi-dimensional scaling (MDS). MDS is a family of methods that map a set of points to a finite dimensional flat (Euclidean) domain, where the only given data is the corresponding distances between every pair of points. The MDS mapping yields minimal changes of the distances between the corresponding points. We then solve an ‘inverse’ problem and map a flat texture patch onto the curved surface while preserving the structure of the texture.", "title": "" }, { "docid": "a7ca3ffcae09ad267281eb494532dc54", "text": "A substrate integrated metamaterial-based leaky-wave antenna is proposed to improve its boresight radiation bandwidth. The proposed leaky-wave antenna based on a composite right/left-handed substrate integrated waveguide consists of two leaky-wave radiator elements which are with different unit cells. The dual-element antenna prototype features boresight gain of 12.0 dBi with variation of 1.0 dB over the frequency range of 8.775-9.15 GHz or 4.2%.
In addition, the antenna is able to offer a beam scanning from to with frequency from 8.25 GHz to 13.0 GHz.", "title": "" }, { "docid": "28600f0ee7ca1128874e830e01a028de", "text": "This paper presents and analyzes a three-tier architecture for collecting sensor data in sparse sensor networks. Our approach exploits the presence of mobile entities (called MULEs) present in the environment. When in close range, MULEs pick up data from the sensors, buffer it, and deliver it to wired access points. This can lead to substantial power savings at the sensors as they only have to transmit over a short-range. This paper focuses on a simple analytical model for understanding performance as system parameters are scaled. Our model assumes a two-dimensional random walk for mobility and incorporates key system variables such as number of MULEs, sensors and access points. The performance metrics observed are the data success rate (the fraction of generated data that reaches the access points), latency and the required buffer capacities on the sensors and the MULEs. The modeling and simulation results can be used for further analysis and provide certain guidelines for deployment of such systems. 2003 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "5940949b1fd6f6b8ab2c45dcb1ece016", "text": "Despite significant work on the problem of inferring a Twitter user’s gender from her online content, no systematic investigation has been made into leveraging the most obvious signal of a user’s gender: first name. In this paper, we perform a thorough investigation of the link between gender and first name in English tweets. Our work makes several important contributions. The first and most central contribution is two different strategies for incorporating the user’s self-reported name into a gender classifier. We find that this yields a 20% increase in accuracy over a standard baseline classifier. These classifiers are the most accurate gender inference methods for Twitter data developed to date. In order to evaluate our classifiers, we developed a novel way of obtaining gender-labels for Twitter users that does not require analysis of the user’s profile or textual content. This is our second contribution. Our approach eliminates the troubling issue of a label being somehow derived from the same text that a classifier will use to", "title": "" }, { "docid": "8f2b100dac154c54d928928296f830f6", "text": "The RPL routing protocol published in RFC 6550 was designed for efficient and reliable data collection in low-power and lossy networks. Specifically, it constructs a Destination Oriented Directed Acyclic Graph (DODAG) for data forwarding. However, due to the uneven deployment of sensor nodes in large areas, and the heterogeneous traffic patterns in the network, some sensor nodes may have much heavier workload in terms of packets forwarded than others. Such unbalanced workload distribution will result in these sensor nodes quickly exhausting their energy, and therefore shorten the overall network lifetime. In this paper, we propose a load balanced routing protocol based on the RPL protocol, named LB-RPL, to achieve balanced workload distribution in the network. Targeted at the low-power and lossy network environments, LB-RPL detects workload imbalance in a distributed and non-intrusive fashion. In addition, it optimizes the data forwarding path by jointly considering both workload distribution and link-layer communication qualities. 
We demonstrate the performance superiority of our LB-RPL protocol over original RPL through extensive simulations.", "title": "" }, { "docid": "dde7db9c7a8ce740d3d12088bd847021", "text": "This paper describes the Twitter lexical normalization system submitted by IHS R&D Belarus team for the ACL 2015 workshop on noisy user-generated text. The proposed system consists of two components: a CRFbased approach to identify possible normalization candidates, and a post-processing step in an attempt to normalize words that do not have normalization variants in the lexicon. Evaluation on the test data set showed that our unconstrained system achieved the Fmeasure of 0.8272 (rank 1 out of 5 submissions for the unconstrained mode, rank 2 out of all 11 submissions).", "title": "" }, { "docid": "e625c5dc123f0b1e7394c4bae47f7cd8", "text": "Interconnected embedded devices are increasingly used in various scenarios, including industrial control, building automation, or emergency communication. As these systems commonly process sensitive information or perform safety critical tasks, they become appealing targets for cyber attacks. A promising technique to remotely verify the safe and secure operation of networked embedded devices is remote attestation. However, existing attestation protocols only protect against software attacks or show very limited scalability. In this paper, we present the first scalable attestation protocol for interconnected embedded devices that is resilient to physical attacks. Based on the assumption that physical attacks require an adversary to capture and disable devices for some time, our protocol identifies devices with compromised hardware and software. Compared to existing solutions, our protocol reduces communication complexity and runtimes by orders of magnitude, precisely identifies compromised devices, supports highly dynamic and partitioned network topologies, and is robust against failures. We show the security of our protocol and evaluate it in static as well as dynamic network topologies. Our results demonstrate that our protocol is highly efficient in well-connected networks and robust to network disruptions.", "title": "" }, { "docid": "b63338d2b3d720471ee610cc92e6abf9", "text": "This article illustrates how creativity is constituted by forces beyond the innovating individual, drawing examples from the career of the eminent chemist Linus Pauling. From a systems perspective, a scientific theory or other product is creative only if the innovation gains the acceptance of a field of experts and so transforms the culture. In addition to this crucial selective function vis-à-vis the completed work, the social field can play a catalytic role, fostering productive interactions between person and domain throughout a career. Pauling's case yields examples of how variously the social field contributes to creativity, shaping the individual's standards of judgment and providing opportunities, incentives, and critical evaluation. A formidable set of strengths suited Pauling for his scientific achievements, but examination of his career qualifies the notion of a lone genius whose brilliance carries the day.", "title": "" }, { "docid": "eaf7b6b0cc18453538087cc90254dbd8", "text": "We present a real-time system that renders antialiased hard shadows using irregular z-buffers (IZBs). For subpixel accuracy, we use 32 samples per pixel at roughly twice the cost of a single sample. 
Our system remains interactive on a variety of game assets and CAD models while running at 1080p and 2160p and imposes no constraints on light, camera or geometry, allowing fully dynamic scenes without precomputation. Unlike shadow maps we introduce no spatial or temporal aliasing, smoothly animating even subpixel shadows from grass or wires.\n Prior irregular z-buffer work relies heavily on GPU compute. Instead we leverage the graphics pipeline, including hardware conservative raster and early-z culling. We observe a duality between irregular z-buffer performance and shadow map quality; this allows common shadow map algorithms to reduce our cost. Compared to state-of-the-art ray tracers, we spawn similar numbers of triangle intersections per pixel yet completely rebuild our data structure in under 2 ms per frame.", "title": "" }, { "docid": "8b08fbd7610e68e39026011fec7034ec", "text": "Smart grid initiatives will produce a grid that is increasingly dependent on its cyber infrastructure in order to support the numerous power applications necessary to provide improved grid monitoring and control capabilities. However, recent findings documented in government reports and other literature, indicate the growing threat of cyber-based attacks in numbers and sophistication targeting the nation's electric grid and other critical infrastructures. Specifically, this paper discusses cyber-physical security of Wide-Area Monitoring, Protection and Control (WAMPAC) from a coordinated cyber attack perspective and introduces a game-theoretic approach to address the issue. Finally, the paper briefly describes how cyber-physical testbeds can be used to evaluate the security research and perform realistic attack-defense studies for smart grid type environments.", "title": "" }, { "docid": "d35dc7e653dbe5dca7e1238ea8ced0a5", "text": "Temperature-aware computing is becoming more important in design of computer systems as power densities are increasing and the implications of high operating temperatures result in higher failure rates of components and increased demand for cooling capability. Computer architects and system software designers need to understand the thermal consequences of their proposals, and develop techniques to lower operating temperatures to reduce both transient and permanent component failures. Recognizing the need for thermal modeling tools to support those researches, there has been work on modeling temperatures of processors at the micro-architectural level which can be easily understood and employed by computer architects for processor designs. However, there is a dearth of such tools in the academic/research community for undertaking architectural/systems studies beyond a processor - a server box, rack or even a machine room. In this paper we presents a detailed 3-dimensional computational fluid dynamics based thermal modeling tool, called ThermoStat, for rack-mounted server systems. We conduct several experiments with this tool to show how different load conditions affect the thermal profile, and also illustrate how this tool can help design dynamic thermal management techniques. We propose reactive and proactive thermal management for rack mounted server and isothermal workload distribution for rack.", "title": "" } ]
scidocsrr
8fd1f2041f8dc2341cba3ee3d9551c37
The determinants of crowdfunding success: A semantic text analytics approach
[ { "docid": "7c98d4c1ab375526c426f8156650cb22", "text": "Online privacy remains an ongoing source of debate in society. Sensitive to this, many web platforms are offering users greater, more granular control over how and when their information is revealed. However, recent research suggests that information control mechanisms of this sort are not necessarily of economic benefit to the parties involved. We examine the use of these mechanisms and their economic consequences, leveraging data from one of the world's largest global crowdfunding platforms, where contributors can conceal their identity or contribution amounts from public display. We find that information hiding is more likely when contributors are under greater scrutiny or exhibiting “undesirable” behavior. We also identify an anchoring effect from prior contributions, which is eliminated when earlier contributors conceal their amounts. Subsequent analyses indicate that a nuanced approach to the design and provision of information control mechanisms, such as varying default settings based on contribution amounts, can help promote larger contributions.", "title": "" }, { "docid": "e267fe4d2d7aa74ded8988fcdbfb3474", "text": "Consumers have recently begun to play a new role in some markets: that of providing capital and investment support to the offering. This phenomenon, called crowdfunding, is a collective effort by people who network and pool their money together, usually via the Internet, in order to invest in and support efforts initiated by other people or organizations. Successful service businesses that organize crowdfunding and act as intermediaries are emerging, attesting to the viability of this means of attracting investment. Employing a “Grounded Theory” approach, this paper performs an in-depth qualitative analysis of three cases involving crowdfunding initiatives: SellaBand in the music business, Trampoline in financial services, and Kapipal in non-profit services. These cases were selected to represent a diverse set of crowdfunding operations that vary in terms of risk/return for the investorconsumer and the type of consumer involvement. The analysis offers important insights about investor behaviour in crowdfunding service models, the potential determinants of such behaviour, and variations in behaviour and determinants across different service models. The findings have implications for service managers interested in launching and/or managing crowdfunding initiatives, and for service theory in terms of extending the consumer’s role from co-production and co-creation to investment.", "title": "" } ]
[ { "docid": "81060b9d045e2935a77967d0318c4086", "text": "Differential evolution (DE) is arguably one of the most powerful stochastic real-parameter optimization algorithms in current use. DE operates through similar computational steps as employed by a standard evolutionary algorithm (EA). However, unlike traditional EAs, the DE-variants perturb the current-generation population members with the scaled differences of randomly selected and distinct population members. Therefore, no separate probability distribution has to be used for generating the offspring. Since its inception in 1995, DE has drawn the attention of many researchers all over the world resulting in a lot of variants of the basic algorithm with improved performance. This paper presents a detailed review of the basic concepts of DE and a survey of its major variants, its application to multiobjective, constrained, large scale, and uncertain optimization problems, and the theoretical studies conducted on DE so far. Also, it provides an overview of the significant engineering applications that have benefited from the powerful nature of DE.", "title": "" }, { "docid": "1ceab925041160f17163940360354c55", "text": "A complete reconstruction of D.H. Lehmer’s ENIAC set-up for computing the exponents of p modulo 2 is given. This program served as an early test program for the ENIAC (1946). The reconstruction illustrates the difficulties of early programmers to find a way between a man operated and a machine operated computation. These difficulties concern both the content level (the algorithm) and the formal level (the logic of sequencing operations).", "title": "" }, { "docid": "bcee978b0c7b8d533b05ce64daca92e3", "text": "Sentiment analysis of short texts is challenging because of the limited contextual information they usually contain. In recent years, deep learning models such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs) have been applied to text sentiment analysis with comparatively remarkable results. In this paper, we describe a jointed CNN and RNN architecture, taking advantage of the coarse-grained local features generated by CNN and long-distance dependencies learned via RNN for sentiment analysis of short texts. Experimental results show an obvious improvement upon the state-of-the-art on three benchmark corpora, MR, SST1 and SST2, with 82.28%, 51.50% and 89.95% accuracy, respectively. 1", "title": "" }, { "docid": "1b3afef7a857d436635a3de056559e1f", "text": "This paper presents Haggle, an architecture for mobile devices that enables seamless network connectivity and application functionality in dynamic mobile environments. Current applications must contain significant network binding and protocol logic, which makes them inflexible to the dynamic networking environments facing mobile devices. Haggle allows separating application logic from transport bindings so that applications can be communication agnostic. Internally, the Haggle framework provides a mechanism for late-binding interfaces, names, protocols, and resources for network communication. This separation allows applications to easily utilize multiple communication modes and methods across infrastructure and infrastructure-less environments. We provide a prototype implementation of the Haggle framework and evaluate it by demonstrating support for two existing legacy applications, email and web browsing. 
Haggle makes it possible for these applications to seamlessly utilize mobile networking opportunities both with and without infrastructure.", "title": "" }, { "docid": "8841397018c52a57ce3f1b025fa76a7a", "text": "The G-banding technique was performed on chromosomes from gill tissue of three cupped oyster species: Crassostrea gigas, Crassostrea angulata and Crassostrea virginica. Identification of the ten individual chromosome pairs was obtained. Comparative analysis of G-banded karyotypes of the three species showed that their banding patterns generally resembled each other, with chromosome pair 3 being similar in all three species. However, differences from one species to another were also observed. The G-banding pattern highlighted greater similarities between C. gigas and C. angulata than between these two species and C. virginica, thus providing an additional argument for genetic divergence between these two evolutionary lineages. C. gigas and C. angulata showed a different G-banding patterns on the two arms of chromosome pair 7, which agrees with their taxonomic separation. The application of this banding technique offers a new approach to specific problems in oyster taxonomy and genetics. &copy; Inra/Elsevier, Paris chromosome / G-banding / Crassostrea gigas / Crassostrea angulata / Crassostrea", "title": "" }, { "docid": "4097fe8240f8399de8c0f7f6bdcbc72f", "text": "Feature extraction of EEG signals is core issues on EEG based brain mapping analysis. The classification of EEG signals has been performed using features extracted from EEG signals. Many features have proved to be unique enough to use in all brain related medical application. EEG signals can be classified using a set of features like Autoregression, Energy Spectrum Density, Energy Entropy, and Linear Complexity. However, different features show different discriminative power for different subjects or different trials. In this research, two-features are used to improve the performance of EEG signals. Neural Network based techniques are applied to feature extraction of EEG signal. This paper discuss on extracting features based on Average method and Max & Min method of the data set. The Extracted Features are classified using Neural Network Temporal Pattern Recognition Technique. The two methods are compared and performance is analyzed based on the results obtained from the Neural Network classifier.", "title": "" }, { "docid": "4421a42fc5589a9b91215b68e1575a3f", "text": "We present a method for extracting depth information from a rectified image pair. Our approach focuses on the first stage of many stereo algorithms: the matching cost computation. We approach the problem by learning a similarity measure on small image patches using a convolutional neural network. Training is carried out in a supervised manner by constructing a binary classification data set with examples of similar and dissimilar pairs of patches. We examine two network architectures for this task: one tuned for speed, the other for accuracy. The output of the convolutional neural network is used to initialize the stereo matching cost. A series of post-processing steps follow: cross-based cost aggregation, semiglobal matching, a left-right consistency check, subpixel enhancement, a median filter, and a bilateral filter. 
We evaluate our method on the KITTI 2012, KITTI 2015, and Middlebury stereo data sets and show that it outperforms other approaches on all three data sets.", "title": "" }, { "docid": "a40c24c01f13952516a613724dac98b7", "text": "In this work, we address the task of dense stereo matching with Convolutional Neural Networks (CNNs). Particularly, we focus on improving matching cost computation by better aggregating contextual information. Towards this goal, we advocate to use atrous convolution, a powerful tool for dense prediction task that allows us to control the resolution at which feature responses are computed within CNNs and to enlarge the receptive field of the network without losing image resolution and requiring learning extra parameters. Aiming to improve the performance of atrous convolution, we propose different frameworks for further boosting performance. We evaluate our models on KITTI 2015 benchmark, the result shows that we achieve on-par performance with fewer post-processing methods applied.", "title": "" }, { "docid": "fe11678f122efc57603321b61c1f52eb", "text": "Recognition of grocery products in store shelves poses peculiar challenges. Firstly, the task mandates the recognition of an extremely high number of different items, in the order of several thousands for medium-small shops, with many of them featuring small inter and intra class variability. Then, available product databases usually include just one or a few studio-quality images per product (referred to herein as reference images), whilst at test time recognition is performed on pictures displaying a portion of a shelf containing several products and taken in the store by cheap cameras (referred to as query images). Moreover, as the items on sale in a store as well as their appearance change frequently over time, a practical recognition system should handle seamlessly new products/packages. Inspired by recent advances in object detection and image retrieval, we propose to leverage on state of the art object detectors based on deep learning to obtain an initial productagnostic item detection. Then, we pursue product recognition through a similarity search between global descriptors computed on reference and cropped query images. To maximize performance, we learn an ad-hoc global descriptor by a CNN trained on reference images based on an image embedding loss. Our system is computationally expensive at training time but can perform recognition rapidly and accurately at test time.", "title": "" }, { "docid": "2c68945d68f8ccf90648bec7fd5b0547", "text": "The number of seniors and other people needing daily assistance continues to increase, but the current human resources available to achieve this in the coming years will certainly be insufficient. To remedy this situation, smart habitats have emerged as an innovative avenue for supporting needs of daily assistance. Smart homes aim to provide cognitive assistance in decision making by giving hints, suggestions, and reminders, with different kinds of effectors, to residents. To implement such technology, the first challenge to overcome is the recognition of ongoing activity. Some researchers have proposed solutions based on binary sensors or cameras, but these types of approaches infringed on residents' privacy. A new affordable activity-recognition system based on passive RFID technology can detect errors related to cognitive impairment. 
The entire system relies on an innovative model of elliptical trilateration with several filters, as well as on an ingenious representation of activities with spatial zones. The authors have deployed the system in a real smart-home prototype; this article renders the results of a complete set of experiments conducted on this new activity-recognition system with real scenarios.", "title": "" }, { "docid": "17055a66f80354bf5a614a510a4ef689", "text": "People can recognize scenes across many different modalities beyond natural images. In this paper, we investigate how to learn cross-modal scene representations that transfer across modalities. To study this problem, we introduce a new cross-modal scene dataset. While convolutional neural networks can categorize cross-modal scenes well, they also learn an intermediate representation not aligned across modalities, which is undesirable for crossmodal transfer applications. We present methods to regularize cross-modal convolutional neural networks so that they have a shared representation that is agnostic of the modality. Our experiments suggest that our scene representation can help transfer representations across modalities for retrieval. Moreover, our visualizations suggest that units emerge in the shared representation that tend to activate on consistent concepts independently of the modality.", "title": "" }, { "docid": "89f157fd5c42ba827b7d613f80770992", "text": "Generating emotional language is a key step towards building empathetic natural language processing agents. However, a major challenge for this line of research is the lack of large-scale labeled training data, and previous studies are limited to only small sets of human annotated sentiment labels. Additionally, explicitly controlling the emotion and sentiment of generated text is also difficult. In this paper, we take a more radical approach: we exploit the idea of leveraging Twitter data that are naturally labeled with emojis. We collect a large corpus of Twitter conversations that include emojis in the response and assume the emojis convey the underlying emotions of the sentence. We investigate several conditional variational autoencoders training on these conversations, which allow us to use emojis to control the emotion of the generated text. Experimentally, we show in our quantitative and qualitative analyses that the proposed models can successfully generate highquality abstractive conversation responses in accordance with designated emotions.", "title": "" }, { "docid": "0f0799a04328852b8cfa742cbc2396c9", "text": "Bitcoin does not scale, because its synchronization mechanism, the blockchain, limits the maximum rate of transactions the network can process. However, using off-blockchain transactions it is possible to create long-lived channels over which an arbitrary number of transfers can be processed locally between two users, without any burden to the Bitcoin network. These channels may form a network of payment service providers (PSPs). Payments can be routed between any two users in real time, without any confirmation delay. In this work we present a protocol for duplex micropayment channels, which guarantees end-to-end security and allow instant transfers, laying the foundation of the PSP network.", "title": "" }, { "docid": "68f0bdda44beba9203a785b8be1035bb", "text": "Nasal mucociliary clearance is one of the most important factors affecting nasal delivery of drugs and vaccines. This is also the most important physiological defense mechanism inside the nasal cavity. 
It removes inhaled (and delivered) particles, microbes and substances trapped in the mucus. Almost all inhaled particles are trapped in the mucus carpet and transported with a rate of 8-10 mm/h toward the pharynx. This transport is conducted by the ciliated cells, which contain about 100-250 motile cellular appendages called cilia, 0.3 µm wide and 5 µm in length that beat about 1000 times every minute or 12-15 Hz. For efficient mucociliary clearance, the interaction between the cilia and the nasal mucus needs to be well structured, where the mucus layer is a tri-layer: an upper gel layer that floats on the lower, more aqueous solution, called the periciliary liquid layer and a third layer of surfactants between these two main layers. Pharmacokinetic calculations of the mucociliary clearance show that this mechanism may account for a substantial difference in bioavailability following nasal delivery. If the formulation irritates the nasal mucosa, this mechanism will cause the irritant to be rapidly diluted, followed by increased clearance, and swallowed. The result is a much shorter duration inside the nasal cavity and therefore less nasal bioavailability.", "title": "" }, { "docid": "351daae8d137eaff56caf4640c83cbfc", "text": "There are numerous applications in which we would like to assess what opinions are being expressed in text documents. For example, Martha Stewart’s company may have wished to assess the degree of harshness of news articles about her in the recent past. Likewise, a World Bank official may wish to assess the degree of criticism of a proposed dam in Bangladesh. The ability to gauge opinion on a given topic is therefore of critical interest. In this paper, we develop a suite of algorithms which take as input, a set D of documents as well as a topic t, and gauge the degree of opinion expressed about topic t in the set D of documents. Our algorithms can return both a number (larger the number, more positive the opinion) as well as a qualitative opinion (e.g. harsh, complimentary). We assess the accuracy of these algorithms via human experiments and show that the best of these algorithms can accurately reflect human opinions. We have also conducted performance experiments showing that our algorithms are computationally fast.", "title": "" }, { "docid": "d5debb44bb6cf518bbc3d8d5f88201e7", "text": "In multi-label learning, each training example is associated with multiple class labels and the task is to learn a mapping from the feature space to the power set of label space. It is generally demanding and time-consuming to obtain labels for training examples, especially for multi-label learning task where a number of class labels need to be annotated for the instance. To circumvent this difficulty, semi-supervised multi-label learning aims to exploit the readily-available unlabeled data to help build multi-label predictive model. Nonetheless, most semi-supervised solutions to multi-label learning work under transductive setting, which only focus on making predictions on existing unlabeled data and cannot generalize to unseen instances. In this paper, a novel approach named COINS is proposed to learning from labeled and unlabeled data by adapting the well-known co-training strategy which naturally works under inductive setting. In each co-training round, a dichotomy over the feature space is learned by maximizing the diversity between the two classifiers induced on either dichotomized feature subset. 
After that, pairwise ranking predictions on unlabeled data are communicated between either classifier for model refinement. Extensive experiments on a number of benchmark data sets show that COINS performs favorably against state-of-the-art multi-label learning approaches.", "title": "" }, { "docid": "b01436481aa77ebe7538e760132c5f3c", "text": "We propose two algorithms based on Bregman iteration and operator splitting technique for nonlocal TV regularization problems. The convergence of the algorithms is analyzed and applications to deconvolution and sparse reconstruction are presented.", "title": "" }, { "docid": "13a4d7ce920b6b215a76d34708303e14", "text": "ion is also critical to the success of planning and scheduling activities. In our scenarios, the crew will often have to deal with planning and scheduling at a very high level (e.g., what crops do I need to plant now so they can be harvested in six months) and planning and scheduling at a detailed level (e.g., what is my next task). The autonomous system must be able to move between various time scales and levels of abstraction, presenting the correct level of information to the user at the correct time. Model-based diagnosis and recovery When something goes wrong, a robust autonomous should figure out what went wrong and recover as best as it can. A model-based diagnosis and recovery system, such as Livingstone [Williams and Nayak, 96], does this. It is analogous to the autonomic and immune systems of a living creature. If the autonomous system has a model of the system it controls, it can use this to figure out what is the most likely cause that explains the observed symptoms as well as how can the system recover given this diagnosis so its mission can continue. For example, if the pressure of a tank is low, it could be because the tank has a leak, the pump blew a fuse, a valve is not open to fill the tank or not closed to keep the tank from draining. However, it could be that the tank pressure is not low and the pressure sensor is defective. By analyzing the system from other sensors, it may say the pressure is normal or suggest closing a valve, resetting the pump circuit breaker, or requesting a crewmember to check the tank for a leak.", "title": "" }, { "docid": "e9ba4e76a3232e25233a4f5fe206e8ba", "text": "Systems code is often written in low-level languages like C/C++, which offer many benefits but also delegate memory management to programmers. This invites memory safety bugs that attackers can exploit to divert control flow and compromise the system. Deployed defense mechanisms (e.g., ASLR, DEP) are incomplete, and stronger defense mechanisms (e.g., CFI) often have high overhead and limited guarantees [19, 15, 9]. We introduce code-pointer integrity (CPI), a new design point that guarantees the integrity of all code pointers in a program (e.g., function pointers, saved return addresses) and thereby prevents all control-flow hijack attacks, including return-oriented programming. We also introduce code-pointer separation (CPS), a relaxation of CPI with better performance properties. CPI and CPS offer substantially better security-to-overhead ratios than the state of the art, they are practical (we protect a complete FreeBSD system and over 100 packages like apache and postgresql), effective (prevent all attacks in the RIPE benchmark), and efficient: on SPEC CPU2006, CPS averages 1.2% overhead for C and 1.9% for C/C++, while CPI’s overhead is 2.9% for C and 8.4% for C/C++. 
A prototype implementation of CPI and CPS can be obtained from http://levee.epfl.ch.", "title": "" }, { "docid": "118526b566b800d9dea30d2e4c904feb", "text": "With the problem of increased web resources and the huge amount of information available, the necessity of having automatic summarization systems appeared. Since summarization is needed the most in the process of searching for information on the web, where the user aims at a certain domain of interest according to his query, in this case domain-based summaries would serve the best. Despite the existence of plenty of research work in the domain-based summarization in English, there is lack of them in Arabic due to the shortage of existing knowledge bases. In this paper we introduce a query based, Arabic text, single document summarization using an existing Arabic language thesaurus and an extracted knowledge base. We use an Arabic corpus to extract domain knowledge represented by topic related concepts/ keywords and the lexical relations among them. The user’s query is expanded once by using the Arabic WordNet thesaurus and then by adding the domain specific knowledge base to the expansion. For the summarization dataset, Essex Arabic Summaries Corpus was used. It has many topic based articles with multiple human summaries. The performance appeared to be enhanced when using our extracted knowledge base than to just use the WordNet.", "title": "" } ]
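The Arabic query-based summarization abstract above hinges on expanding the user's query with thesaurus synonyms before sentence matching. Below is a minimal sketch of that expansion step, using NLTK's English WordNet purely as a stand-in for the Arabic WordNet and extracted domain knowledge base described there; the function name and the per-term synonym cap are illustrative assumptions, not details from the cited work.

```python
# Sketch of thesaurus-based query expansion; requires nltk.download('wordnet') once.
from nltk.corpus import wordnet as wn

def expand_query(query_terms, max_synonyms_per_term=3):
    """Return the query terms plus a few WordNet synonyms for each term."""
    expanded = set(query_terms)
    for term in query_terms:
        synonyms = set()
        for synset in wn.synsets(term):
            for lemma in synset.lemma_names():
                synonyms.add(lemma.replace("_", " ").lower())
        # Cap synonyms per term to limit topic drift in the expanded query.
        expanded.update(sorted(synonyms - {term.lower()})[:max_synonyms_per_term])
    return expanded

print(expand_query(["summary", "domain"]))
```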
scidocsrr
62033235c6aa05b1442b204e73fd0aa3
Static analysis for probabilistic programs: inferring whole program properties from finitely many paths
[ { "docid": "e49aa0d0f060247348f8b3ea0a28d3c6", "text": "Over the past five years a new approach to privacy-preserving data analysis has born fruit [13, 18, 7, 19, 5, 37, 35, 8, 32]. This approach differs from much (but not all!) of the related literature in the statistics, databases, theory, and cryptography communities, in that a formal and ad omnia privacy guarantee is defined, and the data analysis techniques presented are rigorously proved to satisfy the guarantee. The key privacy guarantee that has emerged is differential privacy. Roughly speaking, this ensures that (almost, and quantifiably) no risk is incurred by joining a statistical database. In this survey, we recall the definition of differential privacy and two basic techniques for achieving it. We then show some interesting applications of these techniques, presenting algorithms for three specific tasks and three general results on differentially private learning.", "title": "" } ]
[ { "docid": "36afb791436e95cec6167499bf4b0214", "text": "Leveraging historical data from the movie industry, this study built a predictive model for movie success, deviating from past studies by predicting profit (as opposed to revenue) at early stages of production (as opposed to just prior to release) to increase investor certainty. Our work derived several groups of novel features for each movie, based on the cast and collaboration network (who’), content (‘what’), and time of release (‘when’).", "title": "" }, { "docid": "b017fd773265c73c7dccad86797c17b8", "text": "Active learning, which has a strong impact on processing data prior to the classification phase, is an active research area within the machine learning community, and is now being extended for remote sensing applications. To be effective, classification must rely on the most informative pixels, while the training set should be as compact as possible. Active learning heuristics provide capability to select unlabeled data that are the “most informative” and to obtain the respective labels, contributing to both goals. Characteristics of remotely sensed image data provide both challenges and opportunities to exploit the potential advantages of active learning. We present an overview of active learning methods, then review the latest techniques proposed to cope with the problem of interactive sampling of training pixels for classification of remotely sensed data with support vector machines (SVMs). We discuss remote sensing specific approaches dealing with multisource and spatially and time-varying data, and provide examples for high-dimensional hyperspectral imagery.", "title": "" }, { "docid": "0d2ddb448c01172e53f19d9d5ac39f21", "text": "Malicious Android applications are currently the biggest threat in the scope of mobile security. To cope with their exponential growth and with their deceptive and hideous behaviors, static analysis signature based approaches are not enough to timely detect and tackle brand new threats such as polymorphic and composition malware. This work presents BRIDEMAID, a novel framework for analysis of Android apps' behavior, which exploits both a static and dynamic approach to detect malicious apps directly on mobile devices. The static analysis is based on n-grams matching to statically recognize malicious app execution patterns. The dynamic analysis is instead based on multi-level monitoring of device, app and user behavior to detect and prevent at runtime malicious behaviors. The framework has been tested against 2794 malicious apps reporting a detection accuracy of 99,7% and a negligible false positive rate, tested on a set of 10k genuine apps.", "title": "" }, { "docid": "23eb737d3930862326f81bac73c5e7f5", "text": "O discussion communities have become a widely used medium for interaction, enabling conversations across a broad range of topics and contexts. Their success, however, depends on participants’ willingness to invest their time and attention in the absence of formal role and control structures. Why, then, would individuals choose to return repeatedly to a particular community and engage in the various behaviors that are necessary to keep conversation within the community going? Some studies of online communities argue that individuals are driven by self-interest, while others emphasize more altruistic motivations. 
To get beyond these inconsistent explanations, we offer a model that brings dissimilar rationales into a single conceptual framework and shows the validity of each rationale in explaining different online behaviors. Drawing on typologies of organizational commitment, we argue that members may have psychological bonds to a particular online community based on (a) need, (b) affect, and/or (c) obligation. We develop hypotheses that explain how each form of commitment to a community affects the likelihood that a member will engage in particular behaviors (reading threads, posting replies, moderating the discussion). Our results indicate that each form of community commitment has a unique impact on each behavior, with need-based commitment predicting thread reading, affect-based commitment predicting reply posting and moderating behaviors, and obligation-based commitment predicting only moderating behavior. Researchers seeking to understand how discussion-based communities function will benefit from this more precise theorizing of how each form of member commitment relates to different kinds of online behaviors. Community managers who seek to encourage particular behaviors may use our results to target the underlying form of commitment most likely to encourage the activities they wish to promote.", "title": "" }, { "docid": "f2f5495973c560f15c307680bd5d3843", "text": "The Bayesian analysis of neural networks is difficult because a simple prior over weights implies a complex prior distribution over functions . In this paper we investigate the use of Gaussian process priors over functions, which permit the predictive Bayesian analysis for fixed values of hyperparameters to be carried out exactly using matrix operations. Two methods, using optimization and averaging (via Hybrid Monte Carlo) over hyperparameters have been tested on a number of challenging problems and have produced excellent results.", "title": "" }, { "docid": "91504378f63ba0c0d662180981f30f03", "text": "Closely matching natural teeth with an artificial restoration can be one of the most challenging procedures in restorative dentistry. Natural teeth vary greatly in color and shape. They reveal ample information about patients' background and personality. Dentistry provides the opportunity to restore unique patient characteristics or replace them with alternatives. Whether one tooth or many are restored, the ability to assess and properly communicate information to the laboratory can be greatly improved by learning the language of color and light characteristics. It is only possible to duplicate in ceramic what has been distinguished, understood, and communicated in the shade-matching process of the natural dentition. This article will give the reader a better understanding of what happens when incident light hits the surface of a tooth and give strategies for best assessing and communicating this to the dental laboratory.", "title": "" }, { "docid": "3f4d83525145a963c87167e3e02136a6", "text": "Using the GTZAN Genre Collection [1], we start with a set of 1000 30 second song excerpts subdivided into 10 pre-classified genres: Blues, Classical, Country, Disco, Hip-Hop, Jazz, Metal, Pop, Reggae, and Rock. We downsampled to 4000 Hz, and further split each excerpt into 5-second clips For each clip, we compute a spectrogram using Fast Fourier Transforms, giving us 22 timestep vectors of dimensionality 513 for each clip. 
Spectrograms separate out component audio signals at different frequencies from a raw audio signal, and provide us with a tractable, loosely structured feature set for any given audio clip that is well-suited for deep learning techniques. (See, for example, the spectrogram produced by a jazz excerpt below) Models", "title": "" }, { "docid": "a56650db0651fc0e76f9c0f383aec0e9", "text": "Solid evidence of virtual reality's benefits has graduated from impressive visual demonstrations to producing results in practical applications. Further, a realistic experience is no longer immersion's sole asset. Empirical studies show that various components of immersion provide other benefits - full immersion is not always necessary. The goal of immersive virtual environments (VEs) was to let the user experience a computer-generated world as if it were real - producing a sense of presence, or \"being there,\" in the user's mind.", "title": "" }, { "docid": "499fe7f6bf5c7d8fcfe690e7390a5d36", "text": "Compressional or traumatic asphyxia is a well recognized entity to most forensic pathologists. The vast majority of reported cases have been accidental. The case reported here describes the apparent inflicted compressional asphyxia of a small child. A review of mechanisms and related controversy regarding proposed mechanisms is discussed.", "title": "" }, { "docid": "2cc1373758f509c39275562f69b602c1", "text": "This paper presents our solution for enabling a quadrotor helicopter to autonomously navigate unstructured and unknown indoor environments. We compare two sensor suites, specifically a laser rangefinder and a stereo camera. Laser and camera sensors are both well-suited for recovering the helicopter’s relative motion and velocity. Because they use different cues from the environment, each sensor has its own set of advantages and limitations that are complimentary to the other sensor. Our eventual goal is to integrate both sensors on-board a single helicopter platform, leading to the development of an autonomous helicopter system that is robust to generic indoor environmental conditions. In this paper, we present results in this direction, describing the key components for autonomous navigation using either of the two sensors separately.", "title": "" }, { "docid": "fa2e8f411d74030bbec7937114f88f35", "text": "We present a method for synthesizing a frontal, neutralexpression image of a person’s face given an input face photograph. This is achieved by learning to generate facial landmarks and textures from features extracted from a facial-recognition network. Unlike previous generative approaches, our encoding feature vector is largely invariant to lighting, pose, and facial expression. Exploiting this invariance, we train our decoder network using only frontal, neutral-expression photographs. Since these photographs are well aligned, we can decompose them into a sparse set of landmark points and aligned texture maps. The decoder then predicts landmarks and textures independently and combines them using a differentiable image warping operation. The resulting images can be used for a number of applications, such as analyzing facial attributes, exposure and white balance adjustment, or creating a 3-D avatar.", "title": "" }, { "docid": "246cddf2c76383e82dab8f498b6974bb", "text": "With the growing use of the Social Web, an increasing number of applications for exchanging opinions with other people are becoming available online. 
These applications are widely adopted with the consequence that the number of opinions about the debated issues increases. In order to cut in on a debate, the participants need first to evaluate the opinions in favour or against the debated issue. Argumentation theory proposes algorithms and semantics to evaluate the set of accepted arguments, given the conflicts among them. The main problem is how to automatically generate the arguments from the natural language formulation of the opinions used in these applications. Our paper addresses this problem by proposing and evaluating the use of natural language techniques to generate the arguments. In particular, we adopt the textual entailment approach, a generic framework for applied semantics, where linguistic objects are mapped by means of semantic inferences at a textual level. We couple textual entailment together with a Dung-like argumentation system which allows us to identify the arguments that are accepted in the considered online debate. The originality of the proposed framework lies in the following point: natural language debates are analyzed and the arguments are automatically extracted.", "title": "" }, { "docid": "7dc7eaef334fc7678821fa66424421f1", "text": "The present research complements extant variable-centered research that focused on the dimensions of autonomous and controlled motivation through adoption of a person-centered approach for identifying motivational profiles. Both in high school students (Study 1) and college students (Study 2), a cluster analysis revealed 4 motivational profiles: a good quality motivation group (i.e., high autonomous, low controlled); a poor quality motivation group (i.e., low autonomous, high controlled); a low quantity motivation group (i.e., low autonomous, low controlled); and a high quantity motivation group (i.e., high autonomous, high controlled). To compare the 4 groups, the authors derived predictions from qualitative and quantitative perspectives on motivation. Findings generally favored the qualitative perspective; compared with the other groups, the good quality motivation group displayed the most optimal learning pattern and scored highest on perceived need-supportive teaching. Theoretical and practical implications of the findings are discussed.", "title": "" }, { "docid": "7f5af3806f0baa040a26f258944ad3f9", "text": "Linear Discriminant Analysis (LDA) is a widely-used supervised dimensionality reduction method in computer vision and pattern recognition. In null space based LDA (NLDA), a well-known LDA extension, between-class distance is maximized in the null space of the within-class scatter matrix. However, there are some limitations in NLDA. Firstly, for many data sets, null space of within-class scatter matrix does not exist, thus NLDA is not applicable to those datasets. Secondly, NLDA uses arithmetic mean of between-class distances and gives equal consideration to all between-class distances, which makes larger between-class distances can dominate the result and thus limits the performance of NLDA. In this paper, we propose a harmonic mean based Linear Discriminant Analysis, Multi-Class Discriminant Analysis (MCDA), for image classification, which minimizes the reciprocal of weighted harmonic mean of pairwise between-class distance. More importantly, MCDA gives higher priority to maximize small between-class distances. MCDA can be extended to multi-label dimension reduction. 
Results on 7 single-label data sets and 4 multi-label data sets show that MCDA has consistently better performance than 10 other single-label approaches and 4 other multi-label approaches in terms of classification accuracy, macro and micro average F1 score.", "title": "" }, { "docid": "8c47d9a93e3b9d9f31b77b724bf45578", "text": "A high-sensitivity fully passive 868-MHz wake-up radio (WUR) front-end for wireless sensor network nodes is presented. The front-end does not have an external power source and extracts the entire energy from the radio-frequency (RF) signal received at the antenna. A high-efficiency differential RF-to-DC converter rectifies the incident RF signal and drives the circuit blocks including a low-power comparator and reference generators; and at the same time detects the envelope of the on-off keying (OOK) wake-up signal. The front-end is designed and simulated 0.13μm CMOS and achieves a sensitivity of -33 dBm for a 100 kbps wake-up signal.", "title": "" }, { "docid": "5ae22c0209333125c61f66aafeeda139", "text": "The author reports the development of a multi-finger robot hand with the mechatronics approach. The proposed robot hand has 4 fingers with 14 under-actuated joints driven by 10 linear actuators with linkages. Each of the 10 nodes in the distributed control system uses position and current feedback to monitor the contact stiffness and control the grasping force according to the motor current change rate. The combined force and position control loop enable the robot hand to grasp an object with the unknown shape. Pre-defined tasks, such as grasping and pinching are stored as scripts in the hand controller to provide a high-level programming interface for the upstream robot controller. The mechanical design, controller design and co-simulation are performed in an integrated model-based software environment, and also for the real time code generation and for mechanical parts manufacturing with a 3D printer. Based on the same model for design, a virtual robot hand interface is developed to provide off-line simulation tool and user interface to the robot hand to reduce the programming effort in fingers' motion planning. In the development of the robot hand, the mechatronics approach has been proven to be an indispensable tool for such a complex system.", "title": "" }, { "docid": "3a948bb405b89376807a60a2a70ce7f7", "text": "The objective of this research is to develop feature extraction and classification techniques for the task of acoustic event recognition (AER) in unstructured environments, which are those where adverse effects such as noise, distortion and multiple sources are likely to occur. The goal is to design a system that can achieve human-like sound recognition performance on a variety of hearing tasks in different environments. The research is important, as the field is commonly overshadowed by the more popular area of automatic speech recognition (ASR), and typical AER systems are often based on techniques taken directly from this. However, direct application presents difficulties, as the characteristics of acoustic events are less well defined than those of speech, and there is no sub-word dictionary available like the phonemes in speech. In addition, the performance of ASR systems typically degrades dramatically in such adverse, unstructured environments. Therefore, it is important to develop a system that can perform well for this challenging task. 
In this work, two novel feature extraction methods are proposed for recognition of environmental sounds in severe noisy conditions, based on the visual signature of the sounds. The first method is called the Spectrogram Image Feature (SIF), and is based on the timefrequency spectrogram of the sound. This is captured through an image-processing inspired quantisation and mapping of the dynamic range prior to feature extraction. Experimental results show that the feature based on the raw-power spectrogram has a good performance, and is particularly suited to severe mismatched conditions. The second proposed method is the Spectral Power Distribution Image Feature (SPD-IF), which uses the same image feature approach, but is based on an SPD image derived from the stochastic distribution of power over the sound clip. This is combined with a missing feature classification system, which marginalises the image regions containing only noise, and experiments show the method achieves the high accuracy of the baseline methods in clean conditions combined with robust results in mismatched noise.", "title": "" }, { "docid": "eadc50aebc6b9c2fbd16f9ddb3094c00", "text": "Instance segmentation is the problem of detecting and delineating each distinct object of interest appearing in an image. Current instance segmentation approaches consist of ensembles of modules that are trained independently of each other, thus missing opportunities for joint learning. Here we propose a new instance segmentation paradigm consisting in an end-to-end method that learns how to segment instances sequentially. The model is based on a recurrent neural network that sequentially finds objects and their segmentations one at a time. This net is provided with a spatial memory that keeps track of what pixels have been explained and allows occlusion handling. In order to train the model we designed a principled loss function that accurately represents the properties of the instance segmentation problem. In the experiments carried out, we found that our method outperforms recent approaches on multiple person segmentation, and all state of the art approaches on the Plant Phenotyping dataset for leaf counting.", "title": "" }, { "docid": "378f0e528dddcb0319d0015ebc5f8ccb", "text": "Specific and non specific cholinesterase activities were demonstrated in the ABRM of Mytilus edulis L. and Mytilus galloprovincialis L. by means of different techniques. The results were found identical for both species: neuromuscular junctions “en grappe”-type scarcely distributed within the ABRM, contain AChE. According to the histochemical inhibition tests, (a) the eserine inhibits AChE activity of the ABRM with a level of 5·10−5 M or higher, (b) the ChE non specific activities are inhibited by iso-OMPA level between 5·10−5 to 10−4 M. The histo- and cytochemical observations were completed by showing the existence of neuromuscular junctions containing small clear vesicles: they probably are the morphological support for ACh presence. Moreover, specific and non specific ChE activities were localized in the glio-interstitial cells. 
AChE precipitates were developped along the ABRM sarcolemma, some muscle mitochondria and in the intercellular spaces remain enigmatic.", "title": "" }, { "docid": "301373338fe35426f5186f400f63dbd3", "text": "OBJECTIVE\nThis paper describes state of the art, scientific publications and ongoing research related to the methods of analysis of respiratory sounds.\n\n\nMETHODS AND MATERIAL\nReview of the current medical and technological literature using Pubmed and personal experience.\n\n\nRESULTS\nThe study includes a description of the various techniques that are being used to collect auscultation sounds, a physical description of known pathologic sounds for which automatic detection tools were developed. Modern tools are based on artificial intelligence and on technics such as artificial neural networks, fuzzy systems, and genetic algorithms…\n\n\nCONCLUSION\nThe next step will consist in finding new markers so as to increase the efficiency of decision aid algorithms and tools.", "title": "" } ]
scidocsrr
652c6793f6933a4e9cd82e5b167afc1c
A Gait Recognition Method for Human Following in Service Robots
[ { "docid": "6c4d6eff1fb7ef03efc3197726545ed8", "text": "Gait enjoys advantages over other biometrics in that it can be perceived from a distance and is di/cult to disguise. Current approaches are mostly statistical and concentrate on walking only. By analysing leg motion we show how we can recognise people not only by the walking gait, but also by the running gait. This is achieved by either of two new modelling approaches which employ coupled oscillators and the biomechanics of human locomotion as the underlying concepts. These models give a plausible method for data reduction by providing estimates of the inclination of the thigh and of the leg, from the image data. Both approaches derive a phase-weighted Fourier description gait signature by automated non-invasive means. One approach is completely automated whereas the other requires speci5cation of a single parameter to distinguish between walking and running. Results show that both gaits are potential biometrics, with running being more potent. By its basis in evidence gathering, this new technique can tolerate noise and low resolution. ? 2003 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved.", "title": "" } ]
[ { "docid": "d65a047b3f381ca5039d75fd6330b514", "text": "This paper presents an enhanced algorithm for matching laser scan maps using histogram correlations. The histogram representation effectively summarizes a map's salient features such that pairs of maps can be matched efficiently without any prior guess as to their alignment. The histogram matching algorithm has been enhanced in order to work well in outdoor unstructured environments by using entropy metrics, weighted histograms and proper thresholding of quality metrics. Thus our large-scale scan-matching SLAM implementation has a vastly improved ability to close large loops in real-time even when odometry is not available. Our experimental results have demonstrated a successful mapping of the largest area ever mapped to date using only a single laser scanner. We also demonstrate our ability to solve the lost robot problem by localizing a robot to a previously built map without any prior initialization.", "title": "" }, { "docid": "26295dded01b06c8b11349723fea81dd", "text": "The increasing popularity of parametric design tools goes hand in hand with the use of building performance simulation (BPS) tools from the early design phase. However, current methods require a significant computational time and a high number of parameters as input, as they are based on traditional BPS tools conceived for detailed building design phase. Their application to the urban scale is hence difficult. As an alternative to the existing approaches, we developed an interface to CitySim, a validated building simulation tool adapted to urban scale assessments, bundled as a plug-in for Grasshopper, a popular parametric design platform. On the one hand, CitySim allows faster simulations and requires fewer parameters than traditional BPS tools, as it is based on algorithms providing a good trade-off between the simulations requirements and their accuracy at the urban scale; on the other hand, Grasshopper allows the easy manipulation of building masses and energy simulation parameters through semi-automated parametric", "title": "" }, { "docid": "791f440add573b1c35daca1d6eb7bcf4", "text": "PURPOSE\nNivolumab, a programmed death-1 (PD-1) immune checkpoint inhibitor antibody, has demonstrated improved survival over docetaxel in previously treated advanced non-small-cell lung cancer (NSCLC). First-line monotherapy with nivolumab for advanced NSCLC was evaluated in the phase I, multicohort, Checkmate 012 trial.\n\n\nMETHODS\nFifty-two patients received nivolumab 3 mg/kg intravenously every 2 weeks until progression or unacceptable toxicity; postprogression treatment was permitted per protocol. The primary objective was to assess safety; secondary objectives included objective response rate (ORR) and 24-week progression-free survival (PFS) rate; overall survival (OS) was an exploratory end point.\n\n\nRESULTS\nAny-grade treatment-related adverse events (AEs) occurred in 71% of patients, most commonly: fatigue (29%), rash (19%), nausea (14%), diarrhea (12%), pruritus (12%), and arthralgia (10%). Ten patients (19%) reported grade 3 to 4 treatment-related AEs; grade 3 rash was the only grade 3 to 4 event occurring in more than one patient (n = 2; 4%). Six patients (12%) discontinued because of a treatment-related AE. The confirmed ORR was 23% (12 of 52), including four ongoing complete responses. Nine of 12 responses (75%) occurred by first tumor assessment (week 11); eight (67%) were ongoing (range, 5.3+ to 25.8+ months) at the time of data lock. 
ORR was 28% (nine of 32) in patients with any degree of tumor PD-ligand 1 expression and 14% (two of 14) in patients with no PD-ligand 1 expression. Median PFS was 3.6 months, and the 24-week PFS rate was 41% (95% CI, 27 to 54). Median OS was 19.4 months, and the 1-year and 18-month OS rates were 73% (95% CI, 59 to 83) and 57% (95% CI, 42 to 70), respectively.\n\n\nCONCLUSION\nFirst-line nivolumab monotherapy demonstrated a tolerable safety profile and durable responses in first-line advanced NSCLC.", "title": "" }, { "docid": "b7c094fbecd52432781a8db8cc2342fd", "text": "The Human Visual System (HVS) exhibits multi-resolution characteristics, where the fovea is at the highest resolution while the resolution tapers off towards the periphery. Given enough activity at the periphery, the HVS is then capable to foveate to the next region of interest (ROI), to attend to it at full resolution. Saliency models in the past have focused on identifying features that can be used in a bottom-up manner to generate conspicuity maps, which are then combined together to provide regions of fixated interest. However, these models neglect to take into consideration the foveal relation of an object of interest. The model proposed in this work aims to compute saliency as a function of distance from a given fixation point, using a multi-resolution framework. Apart from computational benefits, significant motivation can be found from this work in areas such as visual search, robotics, communications etc.", "title": "" }, { "docid": "c4535d9b0de17e67f8933ed54cf6d09d", "text": "In the past decade, online music streaming services (MSS), e.g., Pandora and Spotify, revolutionized the way people access, consume and share music. MSS serve users with a huge digital music library, various kinds of music discovery channels, and a number of tools for music sharing and management (e.g. bookmark, playlist, comment, etc.). As a result, metadata and user-generated data hosted on MSS demonstrate great heterogeneity, which provides important potential to enhance music recommendation performance. In this study, we propose a novel music recommendation approach by leveraging heterogeneous graph schema mining and ranking feature selection. Unlike existing heterogeneous graph-based recommendation techniques, the new method can automatically generate and select the optimized meta-path-based features for the learning to rank model. To make feature selection more efficient, we propose the Dynamic Feature Generation Tree algorithm (DFGT), which can activate and eliminate the short sub-meta-paths for feature evolution at a low cost. Experiments show that the proposed algorithm can efficiently generate optimized ranking feature set for meta-path-based music recommendation, which significantly enhances the state-of-the-art collaborative filtering algorithms.", "title": "" }, { "docid": "93f1ee5523f738ab861bcce86d4fc906", "text": "Semantic role labeling (SRL) is one of the basic natural language processing (NLP) problems. To this date, most of the successful SRL systems were built on top of some form of parsing results (Koomen et al., 2005; Palmer et al., 2010; Pradhan et al., 2013), where pre-defined feature templates over the syntactic structure are used. The attempts of building an end-to-end SRL learning system without using parsing were less successful (Collobert et al., 2011). In this work, we propose to use deep bi-directional recurrent network as an end-to-end system for SRL. 
We take only original text information as input feature, without using any syntactic knowledge. The proposed algorithm for semantic role labeling was mainly evaluated on CoNLL-2005 shared task and achieved F1 score of 81.07. This result outperforms the previous state-of-the-art system from the combination of different parsing trees or models. We also obtained the same conclusion with F1 = 81.27 on CoNLL2012 shared task. As a result of simplicity, our model is also computationally efficient that the parsing speed is 6.7k tokens per second. Our analysis shows that our model is better at handling longer sentences than traditional models. And the latent variables of our model implicitly capture the syntactic structure of a sentence.", "title": "" }, { "docid": "4902f8f8c03e5c0ed0d60d8be7c7060b", "text": "Traffic sign classification is an important function for driver assistance systems. In this paper, we propose a hierarchical method for traffic sign classification. There are two hierarchies in the method: the first one classifies traffic signs into several super classes, while the second one further classifies the signs within their super classes and provides the final results. Two perspective adjustment methods are proposed and performed before the second hierarchy, which significantly improves the classification accuracy. Experimental results show that the proposed method gets an accuracy of 99.52% on the German Traffic Sign Recognition Benchmark (GTSRB), which outperforms the state-of-the-art method. In addition, it takes about 40 ms to process one image, making it suitable for realtime applications.", "title": "" }, { "docid": "f45b7caf3c599a6de835330c39599570", "text": "Describes an automated method to locate and outline blood vessels in images of the ocular fundus. Such a tool should prove useful to eye care specialists for purposes of patient screening, treatment evaluation, and clinical study. The authors' method differs from previously known methods in that it uses local and global vessel features cooperatively to segment the vessel network. The authors evaluate their method using hand-labeled ground truth segmentations of 20 images. A plot of the operating characteristic shows that the authors' method reduces false positives by as much as 15 times over basic thresholding of a matched filter response (MFR), at up to a 75% true positive rate. For a baseline, they also compared the ground truth against a second hand-labeling, yielding a 90% true positive and a 4% false positive detection rate, on average. These numbers suggest there is still room for a 15% true positive rate improvement, with the same false positive rate, over the authors' method. They are making all their images and hand labelings publicly available for interested researchers to use in evaluating related methods.", "title": "" }, { "docid": "5bee78694f3428d3882e27000921f501", "text": "We introduce a new approach to perform background subtraction in moving camera scenarios. Unlike previous treatments of the problem, we do not restrict the camera motion or the scene geometry. The proposed approach relies on Bayesian selection of the transformation that best describes the geometric relation between consecutive frames. Based on the selected transformation, we propagate a set of learned background and foreground appearance models using a single or a series of homography transforms. 
The propagated models are subjected to MAP-MRF optimization framework that combines motion, appearance, spatial, and temporal cues; the optimization process provides the final background/foreground labels. Extensive experimental evaluation with challenging videos shows that the proposed method outperforms the baseline and state-of-the-art methods in most cases.", "title": "" }, { "docid": "081dbece10d1363eca0ac01ce0260315", "text": "With the surge of mobile internet traffic, Cloud RAN (C-RAN) becomes an innovative architecture to help mobile operators maintain profitability and financial growth as well as to provide better services to the customers. It consists of Base Band Units (BBU) of several base stations, which are co-located in a secured place called Central Office and connected to Radio Remote Heads (RRH) via high bandwidth, low latency links. With BBU centralization in C-RAN, handover, the most important feature for mobile communications, could achieve simplified procedure or improved performance. In this paper, we analyze the handover performance of C-RAN over a baseline decentralized RAN (D-RAN) for GSM, UMTS and LTE systems. The results indicate that, lower total average handover interrupt time could be achieved in GSM thanks to the synchronous nature of handovers in C-RAN. For UMTS, inter-NodeB soft handover in D-RAN would become intra-pool softer handover in C-RAN. This brings some gains in terms of reduced signalling, less Iub transport bearer setup and reduced transport bandwidth requirement. For LTE X2-based inter-eNB handover, C-RAN could reduce the handover delay and to a large extent eliminate the risk of UE losing its connection with the serving cell while still waiting for the handover command, which in turn decrease the handover failure rate.", "title": "" }, { "docid": "83eb03b97eb945965cf47422c4d0bbbc", "text": "Trio is a new database system that manages not only data, but also theaccuracyandlineageof the data. Inexact (uncertain, probabilistic, fuzzy, approximate, incomplete, and imprecise!) databases have been proposed in the past, and the lineage problem also has been studied. The goals of the Trio project are to combine and distill previous work into a simple and usable model, design a query language as an understandable extension to SQL, and most importantly build a working system—a system that augments conventional data management with both accuracy and lineage as an integral part of the data. This paper provides numerous motivating applications for Trio and lays out preliminary plans for the data model, query language, and prototype system.", "title": "" }, { "docid": "4f52077553ebd94ed6ce9ff2120dfe9d", "text": "A new type of deep neural networks (DNNs) is presented in this paper. Traditional DNNs use the multinomial logistic regression (softmax activation) at the top layer for classification. The new DNN instead uses a support vector machine (SVM) at the top layer. Two training algorithms are proposed at the frame and sequence-level to learn parameters of SVM and DNN in the maximum-margin criteria. In the frame-level training, the new model is shown to be related to the multiclass SVM with DNN features; In the sequence-level training, it is related to the structured SVM with DNN features and HMM state transition features. Its decoding process is similar to the DNN-HMM hybrid system but with frame-level posterior probabilities replaced by scores from the SVM. We term the new model deep neural support vector machine (DNSVM). 
We have verified its effectiveness on the TIMIT task for continuous speech recognition.", "title": "" }, { "docid": "5619e2a46cefc5c23163d9fb487635b3", "text": "Support vector machines (SVMs) are rarely benchmarked against other classi1cation or regression methods. We compare a popular SVM implementation (libsvm) to 16 classi1cation methods and 9 regression methods—all accessible through the software R—by the means of standard performance measures (classi1cation error and mean squared error) which are also analyzed by the means of bias-variance decompositions. SVMs showed mostly good performances both on classi1cation and regression tasks, but other methods proved to be very competitive. c © 2003 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "5549ffbce989740179d934ebd64ed679", "text": "Recurrent neural network language models (RNNLMs) are becoming increasingly popular for speech recognition. Previously, we have shown that RNNLMs with a full (non-classed) output layer (F-RNNLMs) can be trained efficiently using a GPU giving a large reduction in training time over conventional class-based models (C-RNNLMs) on a standard CPU. However, since test-time RNNLM evaluation is often performed entirely on a CPU, standard F-RNNLMs are inefficient since the entire output layer needs to be calculated for normalisation. In this paper, it is demonstrated that C-RNNLMs can be efficiently trained on a GPU, using our spliced sentence bunch technique which allows good CPU test-time performance (42× speedup over F-RNNLM). Furthermore, the performance of different classing approaches is investigated. We also examine the use of variance regularisation of the softmax denominator for F-RNNLMs and show that it allows F-RNNLMs to be efficiently used in test (56× speedup on a CPU). Finally the use of two GPUs for F-RNNLM training using pipelining is described and shown to give a reduction in training time over a single GPU by a factor of 1.6×.", "title": "" }, { "docid": "051fc43d9e32d8b9d8096838b53c47cb", "text": "Median filtering is a cornerstone of modern image processing and is used extensively in smoothing and de-noising applications. The fastest commercial implementations (e.g. in Adobe® Photoshop® CS2) exhibit O(r) runtime in the radius of the filter, which limits their usefulness in realtime or resolution-independent contexts. We introduce a CPU-based, vectorizable O(log r) algorithm for median filtering, to our knowledge the most efficient yet developed. Our algorithm extends to images of any bit-depth, and can also be adapted to perform bilateral filtering. On 8-bit data our median filter outperforms Photoshop's implementation by up to a factor of fifty.", "title": "" }, { "docid": "dd4cfd8973d837b3182deeeb5801d2c0", "text": "We examine methods for clustering in high dimensions. In the first part of the paper, we perform an experimental comparison between three batch clustering algorithms: the Expectation–Maximization (EM) algorithm, a “winner take all” version of the EM algorithm reminiscent of the K-means algorithm, and model-based hierarchical agglomerative clustering. We learn naive-Bayes models with a hidden root node, using high-dimensional discrete-variable data sets (both real and synthetic). We find that the EM algorithm significantly outperforms the other methods, and proceed to investigate the effect of various initialization schemes on the final solution produced by the EM algorithm. 
The initializations that we consider are (1) parameters sampled from an uninformative prior, (2) random perturbations of the marginal distribution of the data, and (3) the output of hierarchical agglomerative clustering. Although the methods are substantially different, they lead to learned models that are strikingly similar in quality.", "title": "" }, { "docid": "78bf0b1d4065fd0e1740589c4e060c70", "text": "This paper presents an efficient metric for quantifying the visual fidelity of natural images based on near-threshold and suprathreshold properties of human vision. The proposed metric, the visual signal-to-noise ratio (VSNR), operates via a two-stage approach. In the first stage, contrast thresholds for detection of distortions in the presence of natural images are computed via wavelet-based models of visual masking and visual summation in order to determine whether the distortions in the distorted image are visible. If the distortions are below the threshold of detection, the distorted image is deemed to be of perfect visual fidelity (VSNR = infin)and no further analysis is required. If the distortions are suprathreshold, a second stage is applied which operates based on the low-level visual property of perceived contrast, and the mid-level visual property of global precedence. These two properties are modeled as Euclidean distances in distortion-contrast space of a multiscale wavelet decomposition, and VSNR is computed based on a simple linear sum of these distances. The proposed VSNR metric is generally competitive with current metrics of visual fidelity; it is efficient both in terms of its low computational complexity and in terms of its low memory requirements; and it operates based on physical luminances and visual angle (rather than on digital pixel values and pixel-based dimensions) to accommodate different viewing conditions.", "title": "" }, { "docid": "2e6b034cbb73d91b70e3574a06140621", "text": "ETHNOPHARMACOLOGICAL RELEVANCE\nBitter melon (Momordica charantia L.) has been widely used as an traditional medicine treatment for diabetic patients in Asia. In vitro and animal studies suggested its hypoglycemic activity, but limited human studies are available to support its use.\n\n\nAIM OF STUDY\nThis study was conducted to assess the efficacy and safety of three doses of bitter melon compared with metformin.\n\n\nMATERIALS AND METHODS\nThis is a 4-week, multicenter, randomized, double-blind, active-control trial. Patients were randomized into 4 groups to receive bitter melon 500 mg/day, 1,000 mg/day, and 2,000 mg/day or metformin 1,000 mg/day. All patients were followed for 4 weeks.\n\n\nRESULTS\nThere was a significant decline in fructosamine at week 4 of the metformin group (-16.8; 95% CI, -31.2, -2.4 μmol/L) and the bitter melon 2,000 mg/day group (-10.2; 95% CI, -19.1, -1.3 μmol/L). Bitter melon 500 and 1,000 mg/day did not significantly decrease fructosamine levels (-3.5; 95% CI -11.7, 4.6 and -10.3; 95% CI -22.7, 2.2 μmol/L, respectively).\n\n\nCONCLUSIONS\nBitter melon had a modest hypoglycemic effect and significantly reduced fructosamine levels from baseline among patients with type 2 diabetes who received 2,000 mg/day. However, the hypoglycemic effect of bitter melon was less than metformin 1,000 mg/day.", "title": "" }, { "docid": "420fa81c2dbe77622108c978d5c6c019", "text": "Reasoning about a scene's thermal signature, in addition to its visual appearance and spatial configuration, would facilitate significant advances in perceptual systems. 
Applications involving the segmentation and tracking of persons, vehicles, and other heat-emitting objects, for example, could benefit tremendously from even coarsely accurate relative temperatures. With the increasing affordability of commercially available thermal cameras, as well as the imminent introduction of new, mobile form factors, such data will be readily and widely accessible. However, in order for thermal processing to complement existing methods in RGBD, there must be an effective procedure for calibrating RGBD and thermal cameras to create RGBDT (red, green, blue, depth, and thermal) data. In this paper, we present an automatic method for the synchronization and calibration of RGBD and thermal cameras in arbitrary environments. While traditional calibration methods fail in our multimodal setting, we leverage invariant features visible by both camera types. We first synchronize the streams with a simple optimization procedure that aligns their motion statistic time series. We then find the relative poses of the cameras by minimizing an objective that measures the alignment between edge maps from the two streams. In contrast to existing methods that use special calibration targets with key points visible to both cameras, our method requires nothing more than some edges visible to both cameras, such as those arising from humans. We evaluate our method and demonstrate that it consistently converges to the correct transform and that it results in high-quality RGBDT data.", "title": "" }, { "docid": "d3095d26a0fa1ea75b6496d59cbb6b8e", "text": "This paper describes the application of artificial intelligence (AI) to the creation of digital art. AI is a computational paradigm that codifies intelligence into machines. There are generally three types of AI and these are machine learning, evolutionary programming and soft computing. Machine learning is the statistical approach to building intelligent systems. Evolutionary programming is the use of natural evolutionary systems to design intelligent machines. Some of the evolutionary programming systems include genetic algorithm which is inspired by the principles of evolution and swarm optimization which is inspired by the swarming of birds, fish, ants etc. Soft computing includes techniques such as agent based modelling and fuzzy logic. Opportunities on the applications of these to digital art are explored.", "title": "" } ]
scidocsrr
be3bde921a65f73375afbcdd6a19940a
Intergroup emotions: explaining offensive action tendencies in an intergroup context.
[ { "docid": "59af1eb49108e672a35f7c242c5b4683", "text": "“The value concept, more than any other, should occupy a central position . . . able to unify the apparently diverse interests of all the sciences concerned with human behavior.” These words, proclaiming the centrality of the value concept, were written by a psychologist (Rokeach, 1973, p. 3), but similar stands have been taken by sociologists (e.g., Williams, 1968) and anthropologists (e.g., Kluckhohn, 1951). These theorists view values as the criteria people use to select and justify actions and to evaluate people (including the self) and events. We, too, adopt this view of values as criteria rather than as qualities inherent in objects. This article discusses work that is part of a larger project intended to explore the importance of values in a wide variety of contexts. The project addresses three broad questions about values. First, how are the value priorities of individuals affected by their social experiences? That is, how do the common experiences people have, because of their shared locations in the social structure (their education, age, gender, occupation, etc.), influence their value priorities? And, how do individuals’ unique experiences (trauma, relations with parents, immigration, etc.) affect their value priorities? Second, how do the value priorities held by individuals affect their behavioral orientations and choices? That is, how do value priorities influence ideologies, attitudes, and actions in the political, religious, environmental, and other domains?", "title": "" } ]
[ { "docid": "bc57dfee1a00d7cfb025a1a5840623f8", "text": "Production and consumption relationship shows that marketing plays an important role in enterprises. In the competitive market, it is very important to be able to sell rather than produce. Nowadays, marketing is customeroriented and aims to meet the needs and expectations of customers to increase their satisfaction. While creating a marketing strategy, an enterprise must consider many factors. Which is why, the process can and should be considered as a multi-criteria decision making (MCDM) case. In this study, marketing strategies and marketing decisions in the new-product-development process has been analyzed in a macro level. To deal quantitatively with imprecision or uncertainty, fuzzy sets theory has been used throughout the analysis.", "title": "" }, { "docid": "f267f44fe9463ac0114335959f9739fa", "text": "HTTP Adaptive Streaming (HAS) is today the number one video technology for over-the-top video distribution. In HAS, video content is temporally divided into multiple segments and encoded at different quality levels. A client selects and retrieves per segment the most suited quality version to create a seamless playout. Despite the ability of HAS to deal with changing network conditions, HAS-based live streaming often suffers from freezes in the playout due to buffer under-run, low average quality, large camera-to-display delay, and large initial/channel-change delay. Recently, IETF has standardized HTTP/2, a new version of the HTTP protocol that provides new features for reducing the page load time in Web browsing. In this paper, we present ten novel HTTP/2-based methods to improve the quality of experience of HAS. Our main contribution is the design and evaluation of a push-based approach for live streaming in which super-short segments are pushed from server to client as soon as they become available. We show that with an RTT of 300 ms, this approach can reduce the average server-to-display delay by 90.1% and the average start-up delay by 40.1%.", "title": "" }, { "docid": "59c83aa2f97662c168316f1a4525fd4d", "text": "Light field microscopy is a new technique for high-speed volumetric imaging of weakly scattering or fluorescent specimens. It employs an array of microlenses to trade off spatial resolution against angular resolution, thereby allowing a 4-D light field to be captured using a single photographic exposure without the need for scanning. The recorded light field can then be used to computationally reconstruct a full volume. In this paper, we present an optical model for light field microscopy based on wave optics, instead of previously reported ray optics models. We also present a 3-D deconvolution method for light field microscopy that is able to reconstruct volumes at higher spatial resolution, and with better optical sectioning, than previously reported. To accomplish this, we take advantage of the dense spatio-angular sampling provided by a microlens array at axial positions away from the native object plane. This dense sampling permits us to decode aliasing present in the light field to reconstruct high-frequency information. We formulate our method as an inverse problem for reconstructing the 3-D volume, which we solve using a GPU-accelerated iterative algorithm. Theoretical limits on the depth-dependent lateral resolution of the reconstructed volumes are derived. We show that these limits are in good agreement with experimental results on a standard USAF 1951 resolution target. 
Finally, we present 3-D reconstructions of pollen grains that demonstrate the improvements in fidelity made possible by our method.", "title": "" }, { "docid": "765e766515c9c241ffd2d84572fd887f", "text": "The cost of reconciling consistency and state management with high availability is highly magnified by the unprecedented scale and robustness requirements of today’s Internet applications. We propose two strategies for improving overall availability using simple mechanisms that scale over large applications whose output behavior tolerates graceful degradation. We characterize this degradation in terms of harvest and yield, and map it directly onto engineering mechanisms that enhance availability by improving fault isolation, and in some cases also simplify programming. By collecting examples of related techniques in the literature and illustrating the surprising range of applications that can benefit from these approaches, we hope to motivate a broader research program in this area. 1. Motivation, Hypothesis, Relevance Increasingly, infrastructure services comprise not only routing, but also application-level resources such as search engines [15], adaptation proxies [8], and Web caches [20]. These applications must confront the same operational expectations and exponentially-growing user loads as the routing infrastructure, and consequently are absorbing comparable amounts of hardware and software. The current trend of harnessing commodity-PC clusters for scalability and availability [9] is reflected in the largest web server installations. These sites use tens to hundreds of PC’s to deliver 100M or more read-mostly page views per day, primarily using simple replication or relatively small data sets to increase throughput. The scale of these applications is bringing the wellknown tradeoff between consistency and availability [4] into very sharp relief. In this paper we propose two general directions for future work in building large-scale robust systems. Our approaches tolerate partial failures by emphasizing simple composition mechanisms that promote fault containment, and by translating possible partial failure modes into engineering mechanisms that provide smoothlydegrading functionality rather than lack of availability of the service as a whole. The approaches were developed in the context of cluster computing, where it is well accepted [22] that one of the major challenges is the nontrivial software engineering required to automate partial-failure handling in order to keep system management tractable. 2. Related Work and the CAP Principle In this discussion, strong consistency means singlecopy ACID [13] consistency; by assumption a stronglyconsistent system provides the ability to perform updates, otherwise discussing consistency is irrelevant. High availability is assumed to be provided through redundancy, e.g. data replication; data is considered highly available if a given consumer of the data can always reach some replica. Partition-resilience means that the system as whole can survive a partition between data replicas. Strong CAP Principle. Strong Consistency, High Availability, Partition-resilience: Pick at most 2. The CAP formulation makes explicit the trade-offs in designing distributed infrastructure applications. 
It is easy to identify examples of each pairing of CAP, outlining the proof by exhaustive example of the Strong CAP Principle: CA without P: Databases that provide distributed transactional semantics can only do so in the absence of a network partition separating server peers. CP without A: In the event of a partition, further transactions to an ACID database may be blocked until the partition heals, to avoid the risk of introducing merge conflicts (and thus inconsistency). AP without C: HTTP Web caching provides clientserver partition resilience by replicating documents, but a client-server partition prevents verification of the freshness of an expired replica. In general, any distributed database problem can be solved with either expiration-based caching to get AP, or replicas and majority voting to get PC (the minority is unavailable). In practice, many applications are best described in terms of reduced consistency or availability. For example, weakly-consistent distributed databases such as Bayou [5] provide specific models with well-defined consistency/availability tradeoffs; disconnected filesystems such as Coda [16] explicitly argued for availability over strong consistency; and expiration-based consistency mechanisms such as leases [12] provide fault-tolerant consistency management. These examples suggest that there is a Weak CAP Principle which we have yet to characterize precisely: The stronger the guarantees made about any two of strong consistency, high availability, or resilience to partitions, the weaker the guarantees that can be made about the third. 3. Harvest, Yield, and the CAP Principle Both strategies we propose for improving availability with simple mechanisms rely on the ability to broaden our notion of “correct behavior” for the target application, and then exploit the tradeoffs in the CAP principle to improve availability at large scale. We assume that clients make queries to servers, in which case there are at least two metrics for correct behavior: yield, which is the probability of completing a request, and harvest, which measures the fraction of the data reflected in the response, i.e. the completeness of the answer to the query. Yield is the common metric and is typically measured in “nines”: “four-nines availability” means a completion probability of . In practice, good HA systems aim for four or five nines. In the presence of faults there is typically a tradeoff between providing no answer (reducing yield) and providing an imperfect answer (maintaining yield, but reducing harvest). Some applications do not tolerate harvest degradation because any deviation from the single well-defined correct behavior renders the result useless. For example, a sensor application that must provide a binary sensor reading (presence/absence) does not tolerate degradation of the output.1 On the other hand, some applications tolerate graceful degradation of harvest: online aggregation [14] allows a user to explicitly trade running time for precision and confidence in performing arithmetic aggregation queries over a large dataset, thereby smoothly trading harvest for response time, which is particularly useful for approximate answers and for avoiding work that looks unlikely to be worthwhile based on preliminary results. At first glance, it would appear that this kind of degradation applies only to queries and not to updates. 
However, the model can be applied in the case of “single-location” updates: those changes that are localized to a single node (or technically a single partition). In this case, updates that 1This is consistent with the use of the term yield in semiconductor manufacturing: typically, each die on a wafer is intolerant to harvest degradation, and yield is defined as the fraction of working dice on a wafer. affect reachable nodes occur correctly but have limited visibility (a form of reduced harvest), while those that require unreachable nodes fail (reducing yield). These localized changes are consistent exactly because the new values are not available everywhere. This model of updates fails for global changes, but it is still quite useful for many practical applications, including personalization databases and collaborative filtering. 4. Strategy 1: Trading Harvest for Yield— Probabilistic Availability Nearly all systems are probabilistic whether they realize it or not. In particular, any system that is 100% available under single faults is probabilistically available overall (since there is a non-zero probability of multiple failures), and Internet-based servers are dependent on the best-effort Internet for true availability. Therefore availability maps naturally to probabilistic approaches, and it is worth addressing probabilistic systems directly, so that we can understand and limit the impact of faults. This requires some basic decisions about what needs to be available and the expected nature of faults. For example, node faults in the Inktomi search engine remove a proportional fraction of the search database. Thus in a 100-node cluster a single-node fault reduces the harvest by 1% during the duration of the fault (the overall harvest is usually measured over a longer interval). Implicit in this approach is graceful degradation under multiple node faults, specifically, linear degradation in harvest. By randomly placing data on nodes, we can ensure that the 1% lost is a random 1%, which makes the average-case and worstcase fault behavior the same. In addition, by replicating a high-priority subset of data, we reduce the probability of losing that data. This gives us more precise control of harvest, both increasing it and reducing the practical impact of missing data. Of course, it is possible to replicate all data, but doing so may have relatively little impact on harvest and yield despite significant cost, and in any case can never ensure 100% harvest or yield because of the best-effort Internet protocols the service relies on. As a similar example, transformation proxies for thin clients [8] also trade harvest for yield, by degrading results on demand to match the capabilities of clients that might otherwise be unable to get results at all. Even when the 100%-harvest answer is useful to the client, it may still be preferable to trade response time for harvest when clientto-server bandwidth is limited, for example, by intelligent degradation to low-bandwidth formats [7]. 5. Strategy 2: Application Decomposition and Orthogonal Mechanisms Some large applications can be decomposed into subsystems that are independently intolerant to harvest degradation (i.e. they fail by reducing yield), but whose independent failure allows the overall application to continue functioning with reduced utility. The application as a whole is then tolerant of harvest degradation. A good decomposition has at least one actual benefit and one potential benefit. 
The actual benefi", "title": "" }, { "docid": "227f23f0357e0cad280eb8e6dec4526b", "text": "This paper presents an iterative and analytical approach to optimal synthesis of a multiplexer with a star-junction. Two types of commonly used lumped-element junction models, namely, nonresonant node (NRN) type and resonant type, are considered and treated in a uniform way. A new circuit equivalence called phased-inverter to frequency-invariant reactance inverter transformation is introduced. It allows direct adoption of the optimal synthesis theory of a bandpass filter for synthesizing channel filters connected to a star-junction by converting the synthesized phase shift to the susceptance compensation at the junction. Since each channel filter is dealt with individually and alternately, when synthesizing a multiplexer with a high number of channels, good accuracy can still be maintained. Therefore, the approach can be used to synthesize a wide range of multiplexers. Illustrative examples of synthesizing a diplexer with a common resonant type of junction and a triplexer with an NRN type of junction are given to demonstrate the effectiveness of the proposed approach. A prototype of a coaxial resonator diplexer according to the synthesized circuit model is fabricated to validate the synthesized result. Excellent agreement is obtained.", "title": "" }, { "docid": "a8d6fe9d4670d1ccc4569aa322f665ee", "text": "Abstract Improved feedback on electricity consumption may provide a tool for customers to better control their consumption and ultimately save energy. This paper asks which kind of feedback is most successful. For this purpose, a psychological model is presented that illustrates how and why feedback works. Relevant features of feedback are identified that may determine its effectiveness: frequency, duration, content, breakdown, medium and way of presentation, comparisons, and combination with other instruments. The paper continues with an analysis of international experience in order to find empirical evidence for which kinds of feedback work best. In spite of considerable data restraints and research gaps, there is some indication that the most successful feedback combines the following features: it is given frequently and over a long time, provides an appliance-specific breakdown, is presented in a clear and appealing way, and uses computerized and interactive tools.", "title": "" }, { "docid": "6aa9eaad1024bf49e24eabc70d5d153d", "text": "High-quality documentary photo series have a special place in rhinoplasty. The exact photographic reproduction of the nasal contours is an essential part of surgical planning, documentation and follow-up of one’s own work. Good photographs can only be achieved using suitable technology and with a good knowledge of photography. Standard operating procedures are also necessary. The photographic equipment should consist of a digital single-lens reflex camera, studio flash equipment and a suitable room for photography with a suitable backdrop. The high standards required cannot be achieved with simple photographic equipment. The most important part of the equipment is the optics. Fixed focal length lenses with a focal length of about 105 mm are especially suited to this type of work. Nowadays, even a surgeon without any photographic training is in a position to produce a complete series of clinical images. With digital technology, any of us can take good photographs. 
The correct exposure, the right depth of focus for the key areas of the nose and the right camera angle are the decisive factors in a good image series. Up to six standard images are recommended in the literature for the proper documentation of nasal surgery. The most important are frontal, three quarters and profile views. In special cases, close-up images may also be necessary. Preparing a professional image series is labour-intensive and very expensive. Large hospitals no longer employ professional photographers. Despite this, we must strive to maintain a high standard of photodocumenation for publications and to ensure that cases can be compared at congresses.", "title": "" }, { "docid": "d0a6ca9838f8844077fdac61d1d75af1", "text": "Depth-first search, as developed by Tarjan and coauthors, is a fundamental technique of efficient algorithm design for graphs [23]. This note presents depth-first search algorithms for two basic problems, strong and biconnected components. Previous algorithms either compute auxiliary quantities based on the depth-first search tree (e.g., LOWPOINT values) or require two passes. We present one-pass algorithms that only maintain a representation of the depth-first search path. This gives a simplified view of depth-first search without sacrificing efficiency. In greater detail, most depth-first search algorithms (e.g., [23,10,11]) compute so-called LOWPOINT values that are defined in terms of the depth-first search tree. Because of the success of this method LOWPOINT values have become almost synonymous with depth-first search. LOWPOINT values are regarded as crucial in the strong and biconnected component algorithms, e.g., [14, pp. 94, 514]. Tarjan’s LOWPOINT method for strong components is presented in texts [1, 7,14,16,17,21]. The strong component algorithm of Kosaraju and Sharir [22] is often viewed as conceptu-", "title": "" }, { "docid": "82835828a7f8c073d3520cdb4b6c47be", "text": "Simultaneous Localization and Mapping (SLAM) for mobile robots is a computationally expensive task. A robot capable of SLAM needs a powerful onboard computer, but this can limit the robot's mobility because of weight and power demands. We consider moving this task to a remote compute cloud, by proposing a general cloud-based architecture for real-time robotics computation, and then implementing a Rao-Blackwellized Particle Filtering-based SLAM algorithm in a multi-node cluster in the cloud. In our implementation, expensive computations are executed in parallel, yielding significant improvements in computation time. This allows the algorithm to increase the complexity and frequency of calculations, enhancing the accuracy of the resulting map while freeing the robot's onboard computer for other tasks. Our method for implementing particle filtering in the cloud is not specific to SLAM and can be applied to other computationally-intensive tasks.", "title": "" }, { "docid": "48e917ffb0e5636f5ca17b3242c07706", "text": "Two studies examined the influence of approach and avoidance social goals on memory for and evaluation of ambiguous social information. Study 1 found that individual differences in avoidance social goals were associated with greater memory of negative information, negatively biased interpretation of ambiguous social cues, and a more pessimistic evaluation of social actors. 
Study 2 experimentally manipulated social goals and found that individuals high in avoidance social motivation remembered more negative information and expressed more dislike for a stranger in the avoidance condition than in the approach condition. Results suggest that avoidance social goals are associated with emphasizing potential threats when making sense of the social environment.", "title": "" }, { "docid": "9666ac68ee1aeb8ce18ccd2615cdabb2", "text": "As the bring your own device (BYOD) to work trend grows, so do the network security risks. This fast-growing trend has huge benefits for both employees and employers. With malware, spyware and other malicious downloads, tricking their way onto personal devices, organizations need to consider their information security policies. Malicious programs can download onto a personal device without a user even knowing. This can have disastrous results for both an organization and the personal device. When this happens, it risks BYODs making unauthorized changes to policies and leaking sensitive information into the public domain. A privacy breach can cause a domino effect with huge financial and legal implications, and loss of productivity for organizations. This is a difficult challenge. Organizations need to consider user privacy and rights together with protecting networks from attacks. This paper evaluates a new architectural framework to control the risks that challenge organizations and the use of BYODs. After analysis of large volumes of research, the previous studies addressed single issues. We integrated parts of these single solutions into a new framework to develop a complete solution for access control. With too many organizations failing to implement and enforce adequate security policies, the process needs to be simpler. This framework reduces system restrictions while enforcing access control policies for BYOD and cloud environments using an independent platform. Primary results of the study are positive with the framework reducing access control issues. Keywords—Bring your own device; access control; policy; security", "title": "" }, { "docid": "ec237c01100bf6afa26f3b01a62577f3", "text": "Polyphenols are secondary metabolites of plants and are generally involved in defense against ultraviolet radiation or aggression by pathogens. In the last decade, there has been much interest in the potential health benefits of dietary plant polyphenols as antioxidant. Epidemiological studies and associated meta-analyses strongly suggest that long term consumption of diets rich in plant polyphenols offer protection against development of cancers, cardiovascular diseases, diabetes, osteoporosis and neurodegenerative diseases. Here we present knowledge about the biological effects of plant polyphenols in the context of relevance to human health.", "title": "" }, { "docid": "61d8761f3c6a8974d0384faf9a084b53", "text": "With many thyroid nodules being incidentally detected, it is important to identify as many malignant nodules as possible while excluding those that are highly likely to be benign from fine needle aspiration (FNA) biopsies or surgeries. This paper presents a computer-aided diagnosis (CAD) system for classifying thyroid nodules in ultrasound images. We use deep learning approach to extract features from thyroid ultrasound images. Ultrasound images are pre-processed to calibrate their scale and remove the artifacts. 
A pre-trained GoogLeNet model is then fine-tuned using the pre-processed image samples which leads to superior feature extraction. The extracted features of the thyroid ultrasound images are sent to a Cost-sensitive Random Forest classifier to classify the images into “malignant” and “benign” cases. The experimental results show the proposed fine-tuned GoogLeNet model achieves excellent classification performance, attaining 98.29% classification accuracy, 99.10% sensitivity and 93.90% specificity for the images in an open access database (Pedraza et al. 16), while 96.34% classification accuracy, 86% sensitivity and 99% specificity for the images in our local health region database.", "title": "" }, { "docid": "9d0ea524b8f591d9ea337a8c789e51c1", "text": "Abstract—The recent development of social media poses new challenges to the research community in analyzing online interactions between people. Social networking sites offer great opportunities for connecting with others, but also increase the vulnerability of young people to undesirable phenomena, such as cybervictimization. Recent research reports that on average, 20% to 40% of all teenagers have been victimized online. In this paper, we focus on cyberbullying as a particular form of cybervictimization. Successful prevention depends on the adequate detection of potentially harmful messages. However, given the massive information overload on the Web, there is a need for intelligent systems to identify potential risks automatically. We present the construction and annotation of a corpus of Dutch social media posts annotated with fine-grained cyberbullying-related text categories, such as insults and threats. Also, the specific participants (harasser, victim or bystander) in a cyberbullying conversation are identified to enhance the analysis of human interactions involving cyberbullying. Apart from describing our dataset construction and annotation, we present proof-of-concept experiments on the automatic identification of cyberbullying events and fine-grained cyberbullying categories.", "title": "" }, { "docid": "458470e18ce2ab134841f76440cfdc2b", "text": "Dependency trees help relation extraction models capture long-range relations between words. However, existing dependency-based models either neglect crucial information (e.g., negation) by pruning the dependency trees too aggressively, or are computationally inefficient because it is difficult to parallelize over different tree structures. We propose an extension of graph convolutional networks that is tailored for relation extraction, which pools information over arbitrary dependency structures efficiently in parallel. To incorporate relevant information while maximally removing irrelevant content, we further apply a novel pruning strategy to the input trees by keeping words immediately around the shortest path between the two entities among which a relation might hold. The resulting model achieves state-of-the-art performance on the large-scale TACRED dataset, outperforming existing sequence and dependency-based neural models. We also show through detailed analysis that this model has complementary strengths to sequence models, and combining them further improves the state of the art.", "title": "" }, { "docid": "f407ea856f2d00dca1868373e1bd9e2f", "text": "Software industry is heading towards centralized computing. Due to this trend data and programs are being taken away from traditional desktop PCs and placed in compute clouds instead. 
Compute clouds are enormous server farms packed with computing power and storage space accessible through the Internet. Instead of having to manage one’s own infrastructure to run applications, server time and storage space can be bought from an external service provider. From the customers’ point of view the benefit behind this idea is to be able to dynamically adjust computing power up or down to meet the demand for that power at a particular moment. This kind of flexibility not only ensures that no costs are incurred by excess processing capacity, but also enables hardware infrastructure to scale up with business growth. Because of growing interest in taking advantage of cloud computing a number of service providers are working on providing cloud services. As stated in [7], Amazon, Salesforce.com and Google are examples of firms that already have working solutions on the market. Recently also Microsoft released a preview version of its cloud platform called the Azure. Early adopters can test the platform and development tools free of charge.[2, 3, 4] The main purpose of this paper is to shed light on the internals of Microsoft’s Azure platform. In addition to examining how Azure platform works, the benefits of Azure platform are explored. The most important benefit in Microsoft’s solution is that it resembles existing Windows environment a lot. Developers can use the same application programming interfaces (APIs) and development tools they are already used to. The second benefit is that migrating applications to cloud is easy. This partially stems from the fact that Azure’s services can be exploited by an application whether it is run locally or in the cloud.", "title": "" }, { "docid": "eec33c75a0ec9b055a857054d05bcf54", "text": "We introduce a logical process of three distinct phases to begin the evaluation of a new 3D dosimetry array. The array under investigation is a hollow cylinder phantom with diode detectors fixed in a helical shell forming an \"O\" axial detector cross section (ArcCHECK), with comparisons drawn to a previously studied 3D array with diodes fixed in two crossing planes forming an \"X\" axial cross section (Delta⁴). Phase I testing of the ArcCHECK establishes: robust relative calibration (response equalization) of the individual detectors, minor field size dependency of response not present in a 2D predecessor, and uncorrected angular response dependence in the axial plane. Phase II testing reveals vast differences between the two devices when studying fixed-width full circle arcs. These differences are primarily due to arc discretization by the TPS that produces low passing rates for the peripheral detectors of the ArcCHECK, but high passing rates for the Delta⁴. Similar, although less pronounced, effects are seen for the test VMAT plans modeled after the AAPM TG119 report. The very different 3D detector locations of the two devices, along with the knock-on effect of different percent normalization strategies, prove that the analysis results from the devices are distinct and noninterchangeable; they are truly measuring different things. The value of what each device measures, namely their correlation with--or ability to predict--clinically relevant errors in calculation and/or delivery of dose is the subject of future Phase III work.", "title": "" }, { "docid": "e985d20f75d29c24fda39135e0e54636", "text": "Software testing is a highly complex and time consuming activity. It is even difficult to say when testing is complete. 
The effective combination of black box (external) and white box (internal) testing is known as Gray-box testing. Gray box testing is a powerful idea: if one knows something about how the product works on the inside, one can test it better, even from the outside. Gray box testing is not black box testing, because the tester does know some of the internal workings of the software under test. It is not to be confused with white box testing, a testing approach that attempts to cover the internals of the product in detail. Gray box testing is a test strategy based partly on internals. This paper will present all the three methodologies Black-box, White-box, Gray-box and how this method has been applied to validate critical software systems. Keywords: Black-box, White-box, Gray-box or Grey-box. Introduction: In most software projects, testing is not given the necessary attention. Statistics reveal that nearly 30-40% of the effort goes into testing irrespective of the type of project; hardly any time is allocated for testing. The computer industry is changing at a very rapid pace. In order to keep pace with a rapidly changing computer industry, software test must develop methods to verify and validate software for all aspects of the product lifecycle. Test case design techniques can be broadly split into two main categories: Black box & White box. Black box + White box = Gray Box. Spelling: Note that Gray is also spelt as Grey. Hence Gray Box Testing and Grey Box Testing mean the same. Gray Box testing is a technique to test the application with limited knowledge of the internal workings of an application. In software testing, the term the more you know the better carries a lot of weight when testing an application. Mastering the domain of a system always gives the tester an edge over someone with limited domain knowledge. Unlike black box testing, where the tester only tests the application's user interface, in Gray box testing, the tester has access to design documents and the database. Having this knowledge, the tester is able to better prepare test data and test scenarios when making the test plan. The gray-box testing goes mainly with the testing of web applications because it considers high-level development, operating environment, and compatibility conditions. During black-box or white-box analysis it is harder to identify problems related to end-to-end data flow. Context-specific problems associated with web site testing are usually found during gray-box verifying. Bridge between Black Box and White Box. Testing Methods (Fig 1: Classification). 1. Black Box Testing: Black box testing is a software testing technique in which the tester tests the application without looking at the internal code structure, implementation details or knowledge of internal paths of the software; testing is based entirely on the software requirements and specifications. Black box testing is best suited for rapid test scenario testing and quick Web Services testing; it provides quick feedback on the functional readiness of operations, and is better suited for operations that have enumerated inputs. It is used for finding the following errors: 1. Incorrect or missing functions 2. Interface errors 3. Errors in data structures or External database access 4. Performance errors 5. 
Initialization and termination errors. Example: A tester, without knowledge of the internal structures of a website, tests the web pages by using a browser; providing inputs (clicks, keystrokes) and verifying the outputs against the expected outcome. Levels Applicable To: the Black Box testing method is applicable to all levels of the software testing process: Unit Testing, Integration Testing, System Testing, and Acceptance Testing. The higher the level, and hence the bigger and more complex the box, the more the Black Box testing method comes into use. Black Box Testing Techniques: Following are some techniques that can be used for designing black box tests. Equivalence Partitioning: Equivalence Partitioning is a software test design technique that involves selecting representative values from each partition as test data. Boundary Value Analysis: Boundary Value Analysis is a software test design technique that involves determination of boundaries for selecting values that are at the boundaries and just inside/outside of the boundaries as test data. Cause Effect Graphing: Cause Effect Graphing is a software test design technique that involves identifying the causes (input conditions) and effects (output conditions), producing a Cause-Effect Graph, and generating test cases accordingly. Gray Box Testing Technique", "title": "" }, { "docid": "7ad4f52279e85f8e20239e1ea6c85bbb", "text": "One of the most exciting but challenging endeavors in music research is to develop a computational model that comprehends the affective content of music signals and organizes a music collection according to emotion. In this paper, we propose a novel acoustic emotion Gaussians (AEG) model that defines a proper generative process of emotion perception in music. As a generative model, AEG permits easy and straightforward interpretations of the model learning processes. To bridge the acoustic feature space and music emotion space, a set of latent feature classes, which are learned from data, is introduced to perform the end-to-end semantic mappings between the two spaces. Based on the space of latent feature classes, the AEG model is applicable to both automatic music emotion annotation and emotion-based music retrieval. To gain insights into the AEG model, we also provide illustrations of the model learning process. A comprehensive performance study is conducted to demonstrate the superior accuracy of AEG over its predecessors, using two emotion annotated music corpora MER60 and MTurk. Our results show that the AEG model outperforms the state-of-the-art methods in automatic music emotion annotation. Moreover, for the first time a quantitative evaluation of emotion-based music retrieval is reported.", "title": "" }, { "docid": "4825e492dc1b7b645a5b92dde0c766cd", "text": "This article shows how language processing is intimately tuned to input frequency. Examples are given of frequency effects in the processing of phonology, phonotactics, reading, spelling, lexis, morphosyntax, formulaic language, language comprehension, grammaticality, sentence production, and syntax. The implications of these effects for the representations and developmental sequence of SLA are discussed. Usage-based theories hold that the acquisition of language is exemplar based. It is the piecemeal learning of many thousands of constructions and the frequency-biased abstraction of regularities within them. Determinants of pattern productivity include the power law of practice, cue competition and constraint satisfaction, connectionist learning, and effects of type and token frequency. 
The regularities of language emerge from experience as categories and prototypical patterns. The typical route of emergence of constructions is from formula, through low-scope pattern, to construction. Frequency plays a large part in explaining sociolinguistic variation and language change. Learners’ sensitivity to frequency in all these domains has implications for theories of implicit and explicit learning and their interactions. The review concludes by considering the history of frequency as an explanatory concept in theoretical and applied linguistics, its 40 years of exile, and its necessary reinstatement as a bridging variable that binds the different schools of language acquisition research.", "title": "" } ]
scidocsrr
8af8a908e1ca64bc8e48f18cf27399d8
More Accurate Entity Ranking Using Knowledge Graph and Web Corpus
[ { "docid": "1149ffb77bc5d32b07a5bad4e0fb0409", "text": "Real-world factoid or list questions often have a simple structure, yet are hard to match to facts in a given knowledge base due to high representational and linguistic variability. For example, to answer \"who is the ceo of apple\" on Freebase requires a match to an abstract \"leadership\" entity with three relations \"role\", \"organization\" and \"person\", and two other entities \"apple inc\" and \"managing director\". Recent years have seen a surge of research activity on learning-based solutions for this method. We further advance the state of the art by adopting learning-to-rank methodology and by fully addressing the inherent entity recognition problem, which was neglected in recent works.\n We evaluate our system, called Aqqu, on two standard benchmarks, Free917 and WebQuestions, improving the previous best result for each benchmark considerably. These two benchmarks exhibit quite different challenges, and many of the existing approaches were evaluated (and work well) only for one of them. We also consider efficiency aspects and take care that all questions can be answered interactively (that is, within a second). Materials for full reproducibility are available on our website: http://ad.informatik.uni-freiburg.de/publications.", "title": "" } ]
[ { "docid": "7c488db08cea4c44c15479cd57549328", "text": "A biometric is the automatic identification of an individual that is based on physiological or behavioral characteristics. Due to its security-related applications and the current world political climate, biometric is currently the subject of intense research by both private and academic institutions. Fingerprint is emerging as the most common and trusted biometric for personal identification. The main objective of this paper is to review the extensive researches that have been done on fingerprint classification over the last four decades. In particular, it discusses the fingerprint features that are useful for distinguishing fingerprint classes and reviews the methods of classification that have been applied to the problem.", "title": "" }, { "docid": "bf7eb592ad9ad5e51e61749174b60d04", "text": "Solving inverse problems continues to be a challenge in a wide array of applications ranging from deblurring, image inpainting, source separation etc. Most existing techniques solve such inverse problems by either explicitly or implicitly finding the inverse of the model. The former class of techniques require explicit knowledge of the measurement process which can be unrealistic, and rely on strong analytical regularizers to constrain the solution space, which often do not generalize well. The latter approaches have had remarkable success in part due to deep learning, but require a large collection of source-observation pairs, which can be prohibitively expensive. In this paper, we propose an unsupervised technique to solve inverse problems with generative adversarial networks (GANs). Using a pre-trained GAN in the space of source signals, we show that one can reliably recover solutions to under determined problems in a ‘blind’ fashion, i.e., without knowledge of the measurement process. We solve this by making successive estimates on the model and the solution in an iterative fashion. We show promising results in three challenging applications – blind source separation, image deblurring, and recovering an image from its edge map, and perform better than several baselines.", "title": "" }, { "docid": "5762adf6fc9a0bf6da037cdb10191400", "text": "Graphics Processing Unit (GPU) virtualization is an enabling technology in emerging virtualization scenarios. Unfortunately, existing GPU virtualization approaches are still suboptimal in performance and full feature support. This paper introduces gVirt, a product level GPU virtualization implementation with: 1) full GPU virtualization running native graphics driver in guest, and 2) mediated pass-through that achieves both good performance and scalability, and also secure isolation among guests. gVirt presents a virtual full-fledged GPU to each VM. VMs can directly access performance-critical resources, without intervention from the hypervisor in most cases, while privileged operations from guest are trap-and-emulated at minimal cost. Experiments demonstrate that gVirt can achieve up to 95% native performance for GPU intensive workloads, and scale well up to 7 VMs.", "title": "" }, { "docid": "17d484e84b2d30d0108537112e6dc31d", "text": "Surface speckle pattern intensity distribution resulting from laser light scattering from a rough surface contains various information about the surface geometrical and physical properties. A surface roughness measurement technique based on the texture analysis of surface speckle pattern texture images is put forward. 
In the surface roughness measurement technique, the speckle pattern texture images are taken by a simple setup configuration consisting of a laser and a CCD camera. Our experimental results show that the surface roughness contained in the surface speckle pattern texture images has a good monotonic relationship with their energy feature of the gray-level co-occurrence matrices. After the measurement system is calibrated by a standard surface roughness specimen, the surface roughness of the object surface composed of the same material and machined by the same method as the standard specimen surface can be evaluated from a single speckle pattern texture image. The robustness of the characterization of speckle pattern texture for surface roughness is also discussed. Thus the surface roughness measurement technique can be used for an in-process surface measurement.", "title": "" }, { "docid": "f76717050a5d891f63e475ba3e3ff955", "text": "Computational Advertising is the currently emerging multidimensional statistical modeling sub-discipline in digital advertising industry. Web pages visited per user every day is considerably increasing, resulting in an enormous access to display advertisements (ads). The rate at which the ad is clicked by users is termed as the Click Through Rate (CTR) of an advertisement. This metric facilitates the measurement of the effectiveness of an advertisement. The placement of ads in appropriate location leads to the rise in the CTR value that influences the growth of customer access to advertisement resulting in increased profit rate for the ad exchange, publishers and advertisers. Thus it is imperative to predict the CTR metric in order to formulate an efficient ad placement strategy. This paper proposes a predictive model that generates the click through rate based on different dimensions of ad placement for display advertisements using statistical machine learning regression techniques such as multivariate linear regression (LR), poisson regression (PR) and support vector regression(SVR). The experiment result reports that SVR based click model outperforms in predicting CTR through hyperparameter optimization.", "title": "" }, { "docid": "48143f70eb66d54da2e11a7ba2f29ac8", "text": "The authors present a cyber-physical systems related study on the estimation and prediction of driver states in autonomous vehicles. The first part of this study extends on a previously developed general architecture for estimation and prediction of hybrid-state systems. The extended system utilizes the hybrid characteristics of decision-behavior coupling of many systems such as the driver and the vehicle; uses Kalman Filter estimates of observable parameters to track the instantaneous discrete state, and predicts the most likely outcome. Prediction of the likely driver state outcome depends on the higher level discrete model and the observed behavior of the continuous subsystem. Two approaches to estimate the discrete driver state from filtered continuous observations are presented: rule based estimation, and Hidden Markov Model (HMM) based estimation. Extensions to a prediction application is described through the use of Hierarchical Hidden Markov Models (HHMMs). The proposed method is suitable for scenarios that involve unknown decisions of other individuals, such as lane changes or intersection precedence/access. 
An HMM implementation for multiple tasks of a single vehicle at an intersection is presented along with preliminary results.", "title": "" }, { "docid": "69ae64969a3bfe518cd003d97e0ee009", "text": "In this research we set out to discover why and how people seek anonymity in their online interactions. Our goal is to inform policy and the design of future Internet architecture and applications. We interviewed 44 people from America, Asia, Europe, and Africa who had sought anonymity and asked them about their experiences. A key finding of our research is the very large variation in interviewees' past experiences and life situations leading them to seek anonymity, and how they tried to achieve it. Our results suggest implications for the design of online communities, challenges for policy, and ways to improve anonymity tools and educate users about the different routes and threats to anonymity on the Internet.", "title": "" }, { "docid": "462e3be75902bf8a39104c75ec2bea53", "text": "A new model for associative memory, based on a correlation matrix, is suggested. In this model information is accumulated on memory elements as products of component data. Denoting a key vector by q(p), and the data associated with it by another vector x(p), the pairs (q(p), x(p)) are memorized in the form of a matrix Mxq = c ∑p x(p)q(p)T, where c is a constant. A randomly selected subset of the elements of Mxq can also be used for memorizing. The recalling of a particular datum x(r) is made by a transformation x(r)=Mxqq(r). This model is failure tolerant and facilitates associative search of information; these are properties that are usually assigned to holographic memories. Two classes of memories are discussed: a complete correlation matrix memory (CCMM), and randomly organized incomplete correlation matrix memories (ICMM). The data recalled from the latter are stochastic variables but the fidelity of recall is shown to have a deterministic limit if the number of memory elements grows without limits. A special case of correlation matrix memories is the auto-associative memory in which any part of the memorized information can be used as a key. The memories are selective with respect to accumulated data. The ICMM exhibits adaptive improvement under certain circumstances. It is also suggested that correlation matrix memories could be applied for the classification of data.", "title": "" }, { "docid": "578b2b86a50f1b2e43f9efe0233b492a", "text": "Perceived racism contributes to persistent health stress leading to health disparities. African American/Black persons (BPs) believe subtle, rather than overt, interpersonal racism is increasing. Sue and colleagues describe interpersonal racism as racial microaggressions: \"routine\" marginalizing indignities by White persons (WPs) toward BPs that contribute to health stress. In this narrative, exploratory study, Black adults (n = 10) were asked about specific racial microaggressions; they all experienced multiple types. Categorical and narrative analysis captured interpretations, strategies, and health stress attributions. Six iconic narratives contextualized health stress responses. Diverse mental and physical symptoms were attributed to racial microaggressions. Few strategies in response had positive outcomes. 
Future research includes development of coping strategies for BPs in these interactions, exploration of WPs awareness of their behaviors, and preventing racial microaggressions in health encounters that exacerbate health disparities.", "title": "" }, { "docid": "09d22e636e4651db27d6687d65a8de54", "text": "There is currently no standard or widely accepted subset of features to effectively classify different emotions based on electroencephalogram (EEG) signals. While combining all possible EEG features may improve the classification performance, it can lead to high dimensionality and worse performance due to redundancy and inefficiency. To solve the high-dimensionality problem, this paper proposes a new framework to automatically search for the optimal subset of EEG features using evolutionary computation (EC) algorithms. The proposed framework has been extensively evaluated using two public datasets (MAHNOB, DEAP) and a new dataset acquired with a mobile EEG sensor. The results confirm that EC algorithms can effectively support feature selection to identify the best EEG features and the best channels to maximize performance over a four-quadrant emotion classification problem. These findings are significant for informing future development of EEG-based emotion classification because low-cost mobile EEG sensors with fewer electrodes are becoming popular for many new applications.", "title": "" }, { "docid": "e2c2cdb5245b73b7511c434c4901fff8", "text": "Adversarial machine learning in the context of image processing and related applications has received a large amount of attention. However, adversarial machine learning, especially adversarial deep learning, in the context of malware detection has received much less attention despite its apparent importance. In this paper, we present a framework for enhancing the robustness of Deep Neural Networks (DNNs) against adversarial malware samples, dubbed Hashing Transformation Deep Neural Networks (HashTran-DNN). The core idea is to use hash functions with a certain locality-preserving property to transform samples to enhance the robustness of DNNs in malware classification. The framework further uses a Denoising Auto-Encoder (DAE) regularizer to reconstruct the hash representations of samples, making the resulting DNN classifiers capable of attaining the locality information in the latent space. We experiment with two concrete instantiations of the HashTranDNN framework to classify Android malware. Experimental results show that four known attacks can render standard DNNs useless in classifying Android malware, that known defenses can at most defend three of the four attacks, and that HashTran-DNN can effectively defend against all of the four attacks.", "title": "" }, { "docid": "d2c8a3fd1049713d478fe27bd8f8598b", "text": "In this paper, higher-order correlation clustering (HOCC) is used for text line detection in natural images. We treat text line detection as a graph partitioning problem, where each vertex is represented by a Maximally Stable Extremal Region (MSER). First, weak hypothesises are proposed by coarsely grouping MSERs based on their spatial alignment and appearance consistency. Then, higher-order correlation clustering (HOCC) is used to partition the MSERs into text line candidates, using the hypotheses as soft constraints to enforce long range interactions. We further propose a regularization method to solve the Semidefinite Programming problem in the inference. 
Finally we use a simple texton-based texture classifier to filter out the non-text areas. This framework allows us to naturally handle multiple orientations, languages and fonts. Experiments show that our approach achieves competitive performance compared to the state of the art.", "title": "" }, { "docid": "4552e4542db450e98f4aee2e5a019f0f", "text": "Time-series data is increasingly collected in many domains. One example is the smart electricity infrastructure, which generates huge volumes of such data from sources such as smart electricity meters. Although today these data are used for visualization and billing in mostly 15-min resolution, its original temporal resolution frequently is more fine-grained, e.g., seconds. This is useful for various analytical applications such as short-term forecasting, disaggregation and visualization. However, transmitting and storing huge amounts of such fine-grained data are prohibitively expensive in terms of storage space in many cases. In this article, we present a compression technique based on piecewise regression and two methods which describe the performance of the compression. Although our technique is a general approach for time-series compression, smart grids serve as our running example and as our evaluation scenario. Depending on the data and the use-case scenario, the technique compresses data by ratios of up to factor 5,000 while maintaining its usefulness for analytics. The proposed technique has outperformed related work and has been applied to three real-world energy datasets in different scenarios. Finally, we show that the proposed compression technique can be implemented in a state-of-the-art database management system.", "title": "" }, { "docid": "f6575043fa4ce5ae3a237bf958a57d9a", "text": "In this article, we study automated agents that are designed to encourage humans to take some actions over others by strategically disclosing key pieces of information. To this end, we utilize the framework of persuasion games—a branch of game theory that deals with asymmetric interactions where one player (Sender) possesses more information about the world, but it is only the other player (Receiver) who can take an action. In particular, we use an extended persuasion model, where the Sender’s information is imperfect and the Receiver has more than two alternative actions available. We design a computational algorithm that, from the Sender’s standpoint, calculates the optimal information disclosure rule. The algorithm is parameterized by the Receiver’s decision model (i.e., what choice he will make based on the information disclosed by the Sender) and can be retuned accordingly.\n We then provide an extensive experimental study of the algorithm’s performance in interactions with human Receivers. First, we consider a fully rational (in the Bayesian sense) Receiver decision model and experimentally show the efficacy of the resulting Sender’s solution in a routing domain. Despite the discrepancy in the Sender’s and the Receiver’s utilities from each of the Receiver’s choices, our Sender agent successfully persuaded human Receivers to select an option more beneficial for the agent. Dropping the Receiver’s rationality assumption, we introduce a machine learning procedure that generates a more realistic human Receiver model. We then show its significant benefit to the Sender solution by repeating our routing experiment. 
To complete our study, we introduce a second (supply--demand) experimental domain and, by contrasting it with the routing domain, obtain general guidelines for a Sender on how to construct a Receiver model.", "title": "" }, { "docid": "0db1e1304ec2b5d40790677c9ce07394", "text": "Neural sequence-to-sequence model has achieved great success in abstractive summarization task. However, due to the limit of input length, most of previous works can only utilize lead sentences as the input to generate the abstractive summarization, which ignores crucial information of the document. To alleviate this problem, we propose a novel approach to improve neural sentence summarization by using extractive summarization, which aims at taking full advantage of the document information as much as possible. Furthermore, we present both of streamline strategy and system combination strategy to achieve the fusion of the contents in different views, which can be easily adapted to other domains. Experimental results on CNN/Daily Mail dataset demonstrate both our proposed strategies can significantly improve the performance of neural sentence summarization.", "title": "" }, { "docid": "af836023436eaa65ef55f9928312e73f", "text": "We present a probabilistic approach to learning a Gaussian Process classifier in the presence of unlabeled data. Our approach involves a “null category noise model” (NCNM) inspired by ordered categorical noise models. The noise model reflects an assumption that the data density is lower between the class-conditional densities. We illustrate our approach on a toy problem and present comparative results for the semi-supervised classification of handwritten digits.", "title": "" }, { "docid": "0433b6406358479e45dfece9ca6633b7", "text": "The gamification is growing in e-business and the banks are looking for new ways to get more customers on their websites. Therefore, it is important to study what are the most appreciated features of the website that could influence the behaviour of the customer to use an electronic banking system with game features. The gamified e-banking suggests that rich elements/features associated with the games could influence other variables and therefore increasing the client loyalty, to spend more time and increasing the transactions on the website. The aim of this study is to look into the influence of gamification in the e-banking system. Based on the research of 180 publications and 210 variables that could influence the intention to use a certain technology this study develops a theoretical model representing the gamification influence on ease of use, information, web pages characteristics, web design and on the intention to use an e-banking with game features. The results from an online survey of 219 e-banking customers show that the gamification had a positive impact on all variables; special has a medium positive influence in web design and information and a large positive influence on customer intentions to use. Further analysis shows that the website ease of use plays has also a medium positive influence on the intention to use an e-banking gamified. Our findings also show that the clients give more importance to an attractive graphical and architecture website design, and less to web pages with so much information or having pleasure in using an e-banking system.", "title": "" }, { "docid": "91617f4ed1fbd5d37368caa326a91154", "text": "Different evaluation measures assess different character istics of machine learning algorithms. 
The empirical evaluation of algorithms and classifiers is a matter of on-going debate among researchers. Most measures in use today focus on a classifier’s ability to identify classes correctly. We note other useful properties, such as failure avoidance or class discrimination, and we suggest measures to evaluate such properties. These measures – Youden’s index, likelihood, Discriminant power – are used in medical diagnosis. We show that they are interrelated, and we apply them to a case study from the field of electronic negotiations. We also list other learning problems which may benefit from the application of these measures.", "title": "" } ]
scidocsrr
aff2671ccb9b62683c875b9e135e3d39
If you are not paying for it, you are the product: how much do advertisers pay to reach you?
[ { "docid": "7247eb6b90d23e2421c0d2500359d247", "text": "The large-scale collection and exploitation of personal information to drive targeted online advertisements has raised privacy concerns. As a step towards understanding these concerns, we study the relationship between how much information is collected and how valuable it is for advertising. We use HTTP traces consisting of millions of users to aid our study and also present the first comparative study between aggregators. We develop a simple model that captures the various parameters of today's advertising revenues, whose values are estimated via the traces. Our results show that per aggregator revenue is skewed (5% accounting for 90% of revenues), while the contribution of users to advertising revenue is much less skewed (20% accounting for 80% of revenue). Google is dominant in terms of revenue and reach (presence on 80% of publishers). We also show that if all 5% of the top users in terms of revenue were to install privacy protection, with no corresponding reaction from the publishers, then the revenue can drop by 30%.", "title": "" } ]
[ { "docid": "0d774f86bb45f2e3e04814dd84cb4490", "text": "Crop yield estimation is an important task in apple orchard management. The current manual sampling-based yield estimation is time-consuming, labor-intensive and inaccurate. To deal with this challenge, we develop and deploy a computer vision system for automated, rapid and accurate yield estimation. The system uses a two-camera stereo rig for image acquisition. It works at nighttime with controlled artificial lighting to reduce the variance of natural illumination. An autonomous orchard vehicle is used as the support platform for automated data collection. The system scans the both sides of each tree row in orchards. A computer vision algorithm is developed to detect and register apples from acquired sequential images, and then generate apple counts as crop yield estimation. We deployed the yield estimation system in Washington state in September, 2011. The results show that the developed system works well with both red and green apples in the tall-spindle planting system. The errors of crop yield estimation are -3.2% for a red apple block with about 480 trees, and 1.2% for a green apple block with about 670 trees.", "title": "" }, { "docid": "55fd332aa38c3240813e5947c65c867d", "text": "Skin detection is an important process in many of computer vision algorithms. It usually is a process that starts at a pixel-level, and that involves a pre-process of colorspace transformation followed by a classification process. A colorspace transformation is assumed to increase separability between skin and non-skin classes, to increase similarity among different skin tones, and to bring a robust performance under varying illumination conditions, without any sound reasonings. In this work, we examine if the colorspace transformation does bring those benefits by measuring four separability measurements on a large dataset of 805 images with different skin tones and illumination. Surprising results indicate that most of the colorspace transformations do not bring the benefits which have been assumed.", "title": "" }, { "docid": "c388c22f5d97fc172187ba1fd352cef0", "text": "Analysis of a driver's head behavior is an integral part of a driver monitoring system. In particular, the head pose and dynamics are strong indicators of a driver's focus of attention. Many existing state-of-the-art head dynamic analyzers are, however, limited to single-camera perspectives, which are susceptible to occlusion of facial features from spatially large head movements away from the frontal pose. Nonfrontal glances away from the road ahead, however, are of special interest since interesting events, which are critical to driver safety, occur during those times. In this paper, we present a distributed camera framework for head movement analysis, with emphasis on the ability to robustly and continuously operate even during large head movements. The proposed system tracks facial features and analyzes their geometric configuration to estimate the head pose using a 3-D model. We present two such solutions that additionally exploit the constraints that are present in a driving context and video data to improve tracking accuracy and computation time. Furthermore, we conduct a thorough comparative study with different camera configurations. For experimental evaluations, we collected a novel head pose data set from naturalistic on-road driving in urban streets and freeways, with particular emphasis on events inducing spatially large head movements (e.g., merge and lane change). 
Our analyses show promising results.", "title": "" }, { "docid": "556a7bd39da4d352642ea3c556a3cebf", "text": "Merger and Acquisition (M&A) has been a critical practice about corporate restructuring. Previous studies are mostly devoted to evaluating the suitability of M&A between a pair of investor and target company, or a target company for its propensity of being acquired. This paper focuses on the dual problem of predicting an investor’s prospective M&A based on its activities and firmographics. We propose to use a mutually-exciting point process with a regression prior to quantify the investor’s M&A behavior. Our model is motivated by the so-called contagious ‘wave-like’ M&A phenomenon, which has been well-recognized by the economics and management communities. A tailored model learning algorithm is devised that incorporates both static profile covariates and past M&A activities. Results on CrunchBase suggest the superiority of our model. The collected dataset and code will be released together with the paper.", "title": "" }, { "docid": "a208464e315fd86b626bafa14a27b7f6", "text": "Adaptive autonomy enables agents operating in an environment to change, or adapt, their autonomy levels by relying on tasks executed by others. Moreover, tasks could be delegated between agents, and as a result decision-making concerning them could also be delegated. In this work, adaptive autonomy is modeled through the willingness of agents to cooperate in order to complete abstract tasks, the latter with varying levels of dependencies between them. Furthermore, it is sustained that adaptive autonomy should be considered at an agent’s architectural level. Thus the aim of this paper is two-fold. Firstly, the initial concept of an agent architecture is proposed and discussed from an agent interaction perspective. Secondly, the relations between static values of willingness to help, dependencies between tasks and overall usefulness of the agents’ population are analysed. The results show that a unselfish population will complete more tasks than a selfish one for low dependency degrees. However, as the latter increases more tasks are dropped, and consequently the utility of the population degrades. Utility is measured by the number of tasks that the population completes during run-time. Finally, it is shown that agents are able to finish more tasks by dynamically changing their willingness to cooperate.", "title": "" }, { "docid": "54b88e4c9e0bc31667e720f5f04c7f83", "text": "In clean ocean water, the performance of a underwater optical communication system is limited mainly by oceanic turbulence, which is defined as the fluctuations in the index of refraction resulting from temperature and salinity fluctuations. In this paper, using the refractive index spectrum of oceanic turbulence under weak turbulence conditions, we carry out, for a horizontally propagating plane wave and spherical wave, analysis of the aperture-averaged scintillation index, the associated probability of fade, mean signal-to-noise ratio, and mean bit error rate. Our theoretical results show that for various values of the rate of dissipation of mean squared temperature and the temperature-salinity balance parameter, the large-aperture receiver leads to a remarkable decrease of scintillation and consequently a significant improvement on the system performance. 
Such an effect is more noticeable in the plane wave case than in the spherical wave case.", "title": "" }, { "docid": "e743bfe8c4f19f1f9a233106919c99a7", "text": "We propose a general framework called Network Dissection for quantifying the interpretability of latent representations of CNNs by evaluating the alignment between individual hidden units and a set of semantic concepts. Given any CNN model, the proposed method draws on a data set of concepts to score the semantics of hidden units at each intermediate convolutional layer. The units with semantics are labeled across a broad range of visual concepts including objects, parts, scenes, textures, materials, and colors. We use the proposed method to test the hypothesis that interpretability is an axis-independent property of the representation space, then we apply the method to compare the latent representations of various networks when trained to solve different classification problems. We further analyze the effect of training iterations, compare networks trained with different initializations, and measure the effect of dropout and batch normalization on the interpretability of deep visual representations. We demonstrate that the proposed method can shed light on characteristics of CNN models and training methods that go beyond measurements of their discriminative power.", "title": "" }, { "docid": "de03cbaa0b7fd8474f1729fe57ecc8a0", "text": "Cloud computing is an emerging paradigm that allows users to conveniently access computing resources as pay-per-use services. Whereas cloud offerings such as Amazon’s Elastic Compute Cloud and Google Apps are rapidly gaining a large user base, enterprise software’s migration towards the cloud is still in its infancy. For software vendors the move towardscloud solutions implies profound changes in their value-creation logic. Not only are they forced to deliver fully web-enabled solutions and to replace their license model with service fees, they also need to build the competencies to host and manage business-critical applications for their customers. This motivates our research, which investigates cloud computing’s implications for enterprise software vendors’ business models. From multiple case studies covering traditional and pure cloud providers, we find that moving from on-premise software to cloud services affects all business model components, that is, the customer value proposition, resource base, value configuration, and financial flows. It thus underpins cloud computing’s disruptive nature in the enterprise software domain. By deriving two alternative business model configurations, SaaS and SaaS+PaaS, our research synthesizes the strategic choices for enterprise software vendors and provides guidelines for designing viable business models.", "title": "" }, { "docid": "f55cd152f6c9e32ed33e4cca1a91cf2e", "text": "This study investigated whether being charged with a child pornography offense is a valid diagnostic indicator of pedophilia, as represented by an index of phallometrically assessed sexual arousal to children. The sample of 685 male patients was referred between 1995 and 2004 for a sexological assessment of their sexual interests and behavior. As a group, child pornography offenders showed greater sexual arousal to children than to adults and differed from groups of sex offenders against children, sex offenders against adults, and general sexology patients. 
The results suggest child pornography offending is a stronger diagnostic indicator of pedophilia than is sexually offending against child victims. Theoretical and clinical implications are discussed.", "title": "" }, { "docid": "1af028a0cf88d0ac5c52e84019554d51", "text": "Robots exhibit life-like behavior by performing intelligent actions. To enhance human-robot interaction it is necessary to investigate and understand how end-users perceive such animate behavior. In this paper, we report an experiment to investigate how people perceived different robot embodiments in terms of animacy and intelligence. iCat and Robovie II were used as the two embodiments in this experiment. We conducted a between-subject experiment where robot type was the independent variable, and perceived animacy and intelligence of the robot were the dependent variables. Our findings suggest that a robots perceived intelligence is significantly correlated with animacy. The correlation between the intelligence and the animacy of a robot was observed to be stronger in the case of the iCat embodiment. Our results also indicate that the more animated the face of the robot, the more likely it is to attract the attention of a user. We also discuss the possible and probable explanations of the results obtained.", "title": "" }, { "docid": "9cb85fdefedf43fb4b3a57472c1f3a87", "text": "An all-electrical, low-cost, wideband chip-to-chip link on a multi-mode dielectric waveguide is proposed. The signal is coupled from the silicon chip to the fundamental and polarization-orthogonal degenerate Ex11 and Ey11 waveguide modes using planar electric and slot dipole antennas, respectively. This approach doubles the capacity of a single line without sacrificing robustness or adding implementation cost and complexity. Two independent ultra-wideband 30GHz channels, each from 90 GHz to 120 GHz, are demonstrated. The large available bandwidth will be channelized in frequency for optimal overall efficiency with a CMOS transceiver. Various design aspects of the structure are examined and discussed. The proposed waveguide offers a solution for Terabit-per-second (Tbps) electrical wireline links.", "title": "" }, { "docid": "be749af7661631abc9dfd2ae57f05e46", "text": "Traditional Medicines derived from medicinal plants are used by about 60% of the world's population. This review focuses on Indian Herbal drugs and plants used in the treatment of diabetes, especially in India. Diabetes is an important human ailment afflicting many from various walks of life in different countries. In India it is proving to be a major health problem, especially in the urban areas. Though there are various approaches to reduce the ill effects of diabetes and its secondary complications, herbal formulations are preferred due to lesser side effects and low cost. A list of medicinal plants with proven antidiabetic and related beneficial effects and of herbal drugs used in treatment of diabetes is compiled. These include, Allium sativum, Eugenia jambolana, Momordica charantia Ocimum sanctum, Phyllanthus amarus, Pterocarpus marsupium, Tinospora cordifolia, Trigonella foenum graecum and Withania somnifera. One of the etiologic factors implicated in the development of diabetes and its complications is the damage induced by free radicals and hence an antidiabetic compound with antioxidant properties would be more beneficial. 
Therefore information on antioxidant effects of these medicinal plants is also included.", "title": "" }, { "docid": "84ae85ee51ce3dd26e077dcd183e0b60", "text": "Deep Learning (DL) methods show very good performance when trained on large, balanced data sets. However, many practical problems involve imbalanced data sets, or/and classes with a small number of training samples. The performance of DL methods as well as more traditional classifiers drops significantly in such settings. Most of the existing solutions for imbalanced problems focus on customizing the data for training. A more principled solution is to use mixed Hinge-Minimax risk [19] specifically designed to solve binary problems with imbalanced training sets. Here we propose a Latent Hinge Minimax (LHM) risk and a training algorithm that generalizes this paradigm to an ensemble of hyperplanes that can form arbitrary complex, piecewise linear boundaries. To extract good features, we combine LHM model with CNN via transfer learning. To solve multi-class problem we map pre-trained categoryspecific LHM classifiers to a multi-class neural network and adjust the weights with very fast tuning. LHM classifier enables the use of unlabeled data in its training and the mapping allows for multi-class inference, resulting in a classifier that performs better than alternatives when trained on a small number of training samples.", "title": "" }, { "docid": "af3b81357bcb908c290e78412940e2ea", "text": "Ambient occlusion and directional (spherical harmonic) occlusion have become a staple of production rendering because they capture many visually important qualities of global illumination while being reusable across multiple artistic lighting iterations. However, ray-traced solutions for hemispherical occlusion require many rays per shading point (typically 256-1024) due to the full hemispherical angular domain. Moreover, each ray can be expensive in scenes with moderate to high geometric complexity. However, many nearby rays sample similar areas, and the final occlusion result is often low frequency. We give a frequency analysis of shadow light fields using distant illumination with a general BRDF and normal mapping, allowing us to share ray information even among complex receivers. We also present a new rotationally-invariant filter that easily handles samples spread over a large angular domain. Our method can deliver 4x speed up for scenes that are computationally bound by ray tracing costs.", "title": "" }, { "docid": "96d219755d8b0065526fb1fd0932fc04", "text": "Air transportation has an important place among transportation systems and it is indispensable for the flights to perform their voyages in scheduled time in order to ensure the comfort of passengers and controllability of operational costs. There are several reasons for flight delays like weather conditions, excessive intensity in air traffic, accidents or closed airfields, conditions that will lead to an increase in distances between planes and operational delays in ground services. In this study, using the data collected from the sensors located in the airport and the information about the flight, the goal is develop a machine learning model to estimate departure delays of flights using artificial neural networks.", "title": "" }, { "docid": "2fb6392a161cf64b1fe009dd8db99857", "text": "Humans have an incredible capacity to learn properties of objects by pure tactile exploration with their two hands. 
With robots moving into human-centred environment, tactile exploration becomes more and more important as vision may be occluded easily by obstacles or fail because of different illumination conditions. In this paper, we present our first results on bimanual compliant tactile exploration, with the goal to identify objects and grasp them. An exploration strategy is proposed to guide the motion of the two arms and fingers along the object. From this tactile exploration, a point cloud is obtained for each object. As the point cloud is intrinsically noisy and un-uniformly distributed, a filter based on Gaussian Processes is proposed to smooth the data. This data is used at runtime for object identification. Experiments on an iCub humanoid robot have been conducted to validate our approach.", "title": "" }, { "docid": "0ef58b9966c7d3b4e905e8306aad3359", "text": "Agriculture is the back bone of India. To make the sustainable agriculture, this system is proposed. In this system ARM 9 processor is used to control and monitor the irrigation system. Different kinds of sensors are used. This paper presents a fully automated drip irrigation system which is controlled and monitored by using ARM9 processor. PH content and the nitrogen content of the soil are frequently monitored. For the purpose of monitoring and controlling, GSM module is implemented. The system informs user about any abnormal conditions like less moisture content and temperature rise, even concentration of CO2 via SMS through the GSM module.", "title": "" }, { "docid": "e380fee1d044c15a5e5ba12436b8f511", "text": "Modern resolver-to-digital converters (RDCs) are typically implemented using DSP techniques to reduce hardware footprint and enhance system accuracy. However, in such implementations, both resolver sensor and ADC channel unbalances introduce significant errors, particularly in the speed output of the tracking loop. The frequency spectrum of the output error is variable depending on the resolver mechanical velocity. This paper presents the design of an autotuning output filter based on the interpolation of precomputed filters for a DSP-based RDC with a type-II tracking loop. A fourth-order peak and a second-order high-pass filter are designed and tested for an experimental RDC. The experimental results demonstrate significant reduction of the peak-to-peak error in the estimated speed.", "title": "" }, { "docid": "9dbea5d01d446bd829085e445f11c5a7", "text": "We present the results of a large-scale, end-to-end human evaluation of various sentiment summarization models. The evaluation shows that users have a strong preference for summarizers that model sentiment over non-sentiment baselines, but have no broad overall preference between any of the sentiment-based models. However, an analysis of the human judgments suggests that there are identifiable situations where one summarizer is generally preferred over the others. We exploit this fact to build a new summarizer by training a ranking SVM model over the set of human preference judgments that were collected during the evaluation, which results in a 30% relative reduction in error over the previous best summarizer.", "title": "" }, { "docid": "96ea7f2a0fd0a630df87d22d846d1575", "text": "BACKGROUND\nRecent years have seen an explosion in the availability of data in the chemistry domain. With this information explosion, however, retrieving relevant results from the available information, and organising those results, become even harder problems. 
Computational processing is essential to filter and organise the available resources so as to better facilitate the work of scientists. Ontologies encode expert domain knowledge in a hierarchically organised machine-processable format. One such ontology for the chemical domain is ChEBI. ChEBI provides a classification of chemicals based on their structural features and a role or activity-based classification. An example of a structure-based class is 'pentacyclic compound' (compounds containing five-ring structures), while an example of a role-based class is 'analgesic', since many different chemicals can act as analgesics without sharing structural features. Structure-based classification in chemistry exploits elegant regularities and symmetries in the underlying chemical domain. As yet, there has been neither a systematic analysis of the types of structural classification in use in chemistry nor a comparison to the capabilities of available technologies.\n\n\nRESULTS\nWe analyze the different categories of structural classes in chemistry, presenting a list of patterns for features found in class definitions. We compare these patterns of class definition to tools which allow for automation of hierarchy construction within cheminformatics and within logic-based ontology technology, going into detail in the latter case with respect to the expressive capabilities of the Web Ontology Language and recent extensions for modelling structured objects. Finally we discuss the relationships and interactions between cheminformatics approaches and logic-based approaches.\n\n\nCONCLUSION\nSystems that perform intelligent reasoning tasks on chemistry data require a diverse set of underlying computational utilities including algorithmic, statistical and logic-based tools. For the task of automatic structure-based classification of chemical entities, essential to managing the vast swathes of chemical data being brought online, systems which are capable of hybrid reasoning combining several different approaches are crucial. We provide a thorough review of the available tools and methodologies, and identify areas of open research.", "title": "" } ]
scidocsrr
1fb2020d50c3431d79a881ab8be753f5
EEG-based estimation of mental fatigue by using KPCA-HMM and complexity parameters
[ { "docid": "17c12cc27cd66d0289fe3baa9ab4124d", "text": "In this paper we review classification algorithms used to design brain-computer interface (BCI) systems based on electroencephalography (EEG). We briefly present the commonly employed algorithms and describe their critical properties. Based on the literature, we compare them in terms of performance and provide guidelines to choose the suitable classification algorithm(s) for a specific BCI.", "title": "" } ]
[ { "docid": "976f16e21505277525fa697876b8fe96", "text": "A general technique for obtaining intermediate-band crystal filters from prototype low-pass (LP) networks which are neither symmetric nor antimetric is presented. This immediately enables us to now realize the class of low-transient responses. The bandpass (BP) filter appears as a cascade of symmetric lattice sections, obtained by partitioning the LP prototype filter, inserting constant reactances where necessary, and then applying the LP to BP frequency transformation. Manuscript received January 7, 1974; revised October 9, 1974. The author is with the Systems Development Division, Westinghouse Electric Corporation, Baltimore, Md. The cascade is composed of only two fundamental sections. Finally, the method introduced is illustrated with an example.", "title": "" }, { "docid": "d5c0950e12e76c5c63b92ef7cd002782", "text": "In recent years, machine learning approaches have been successfully applied for analysis of neuroimaging data, to help in the context of disease diagnosis. We provide, in this paper, an overview of recent support vector machine-based methods developed and applied in psychiatric neuroimaging for the investigation of schizophrenia. In particular, we focus on the algorithms implemented by our group, which have been applied to classify subjects affected by schizophrenia and healthy controls, comparing them in terms of accuracy results with other recently published studies. First we give a description of the basic terminology used in pattern recognition and machine learning. Then we separately summarize and explain each study, highlighting the main features that characterize each method. Finally, as an outcome of the comparison of the results obtained applying the described different techniques, conclusions are drawn in order to understand how much automatic classification approaches can be considered a useful tool in understanding the biological underpinnings of schizophrenia. We then conclude by discussing the main implications achievable by the application of these methods into clinical practice.", "title": "" }, { "docid": "0868f1ccd67db523026f1650b03311ba", "text": "Conflict with humans over livestock and crops seriously undermines the conservation prospects of India's large and potentially dangerous mammals such as the tiger (Panthera tigris) and elephant (Elephas maximus). This study, carried out in Bhadra Tiger Reserve in south India, estimates the extent of material and monetary loss incurred by resident villagers between 1996 and 1999 in conflicts with large felines and elephants, describes the spatiotemporal patterns of animal damage, and evaluates the success of compensation schemes that have formed the mainstay of loss-alleviation measures. Annually each household lost an estimated 12% (0.9 head) of their total holding to large felines, and approximately 11% of their annual grain production (0.82 tonnes per family) to elephants. Compensations awarded offset only 5% of the livestock loss and 14% of crop losses and were accompanied by protracted delays in the processing of claims. Although the compensation scheme has largely failed to achieve its objective of alleviating loss, its implementation requires urgent improvement if reprisal against large wild mammals is to be minimized. 
Furthermore, innovative schemes of livestock and crop insurance need to be tested as alternatives to compensations.", "title": "" }, { "docid": "b988525d515588da8becc18c2aa21e82", "text": "Numerical optimization has been used as an extension of vehicle dynamics simulation in order to reproduce trajectories and driving techniques used by expert race drivers and investigate the effects of several vehicle parameters in the stability limit operation of the vehicle. In this work we investigate how different race-driving techniques may be reproduced by considering different optimization cost functions. We introduce a bicycle model with suspension dynamics and study the role of the longitudinal load transfer in limit vehicle operation, i.e., when the tires operate at the adhesion limit. Finally we demonstrate that for certain vehicle configurations the optimal trajectory may include large slip angles (drifting), which matches the techniques used by rally-race drivers.", "title": "" }, { "docid": "06f4ec7c6425164ee7fc38a8b26b8437", "text": "In this paper we present a decomposition strategy for solving large scheduling problems using mathematical programming methods. Instead of formulating one huge and unsolvable MILP problem, we propose a decomposition scheme that generates smaller programs that can often be solved to global optimality. The original problem is split into subproblems in a natural way using the special features of steel making and avoiding the need for expressing the highly complex rules as explicit constraints. We present a small illustrative example problem, and several real-world problems to demonstrate the capabilities of the proposed strategy, and the fact that the solutions typically lie within 1-3% of the global optimum.", "title": "" }, { "docid": "cb6223183d3602d2e67aafc0b835a405", "text": "Electrocardiogram is widely used to diagnose the congestive heart failure (CHF). It is the primary noninvasive diagnostic tool that can guide in the management and follow-up of patients with CHF. Heart rate variability (HRV) signals which are nonlinear in nature possess the hidden signatures of various cardiac diseases. Therefore, this paper proposes a nonlinear methodology, empirical mode decomposition (EMD), for an automated identification and classification of normal and CHF using HRV signals. In this work, HRV signals are subjected to EMD to obtain intrinsic mode functions (IMFs). From these IMFs, thirteen nonlinear features such as approximate entropy $(E_{\text{ap}}^{x})$, sample entropy $(E_{\text{s}}^{x})$, Tsallis entropy $(E_{\text{ts}}^{x})$, fuzzy entropy $(E_{\text{f}}^{x})$, Kolmogorov Sinai entropy $(E_{\text{ks}}^{x})$, modified multiscale entropy $(E_{\text{mms}_{y}}^{x})$, permutation entropy $(E_{\text{p}}^{x})$, Renyi entropy $(E_{\text{r}}^{x})$, Shannon entropy $(E_{\text{sh}}^{x})$, wavelet entropy $(E_{\text{w}}^{x})$, signal activity $(S_{\text{a}}^{x})$, Hjorth mobility $(H_{\text{m}}^{x})$, and Hjorth complexity $(H_{\text{c}}^{x})$ are extracted. Then, different ranking methods are used to rank these extracted features, and later, probabilistic neural network and support vector machine are used for differentiating the highly ranked nonlinear features into normal and CHF classes. 
We have obtained an accuracy, sensitivity, and specificity of 97.64, 97.01, and 98.24 %, respectively, in identifying the CHF. The proposed automated technique is able to identify the person having CHF alarming (alerting) the clinicians to respond quickly with proper treatment action. Thus, this method may act as a valuable tool for increasing the survival rate of many cardiac patients.", "title": "" }, { "docid": "f6ac111d3ece47f9881a4f1b0ce6d4be", "text": "An Enterprise Framework (EF) is a software architecture. Such frameworks expose a rich set of semantics and modeling paradigms for developing and extending enterprise applications. EFs are, by design, the cornerstone of an organization’s systems development activities. EFs offer a streamlined and flexible alternative to traditional tools and applications which feature numerous point solutions integrated into complex and often inflexible environments. Enterprise Frameworks play an important role since they allow reuse of design knowledge and offer techniques for creating reference models and scalable architectures for enterprise integration. These models and architectures are sufficiently flexible and powerful to be used at multiple levels, e.g. from the integration of the planning systems of geographically distributed factories, to generate a global virtual factory, down to the monitoring and control system for a single production cell. These frameworks implement or enforce well-documented standards for component integration and collaboration. The architecture of an Enterprise framework provides for ready integration with new or existing components. It defines how these components must interact with the framework and how objects will collaborate. In addition, it defines how developers' work together to develop and extend enterprise applications based on the framework. Therefore, the goal of an Enterprise framework is to reduce complexity and lifecycle costs of enterprise systems, while ensuring flexibility.", "title": "" }, { "docid": "6514ddb39c465a8ca207e24e60071e7f", "text": "The psychometric properties and clinical utility of the Separation Anxiety Avoidance Inventory, child and parent version (SAAI-C/P) were examined in two studies. The aim of the SAAI, a self- and parent-report measure, is to evaluate the avoidance relating to separation anxiety disorder (SAD) situations. In the first study, a school sample of 384 children and their parents (n = 279) participated. In the second study, 102 children with SAD and 35 children with other anxiety disorders (AD) were investigated. In addition, 93 parents of children with SAD, and 35 parents of children with other AD participated. A two-factor structure was confirmed by confirmatory factor analysis. The SAAI-C and SAAI-P demonstrated good internal consistency, test-retest reliability, as well as construct and discriminant validity. Furthermore, the SAAI was sensitive to treatment change. The parent-child agreement was substantial. Overall, these results provide support for the use of the SAAI-C/P version in clinical and research settings.", "title": "" }, { "docid": "7fed6f57ba2e17db5986d47742dc1a9c", "text": "Partial Least Squares Regression (PLSR) is a linear regression technique developed to deal with high-dimensional regressors and one or several response variables. In this paper we introduce robustified versions of the SIMPLS algorithm being the leading PLSR algorithm because of its speed and efficiency. 
Because SIMPLS is based on the empirical cross-covariance matrix between the response variables and the regressors and on linear least squares regression, the results are affected by abnormal observations in the data set. Two robust methods, RSIMCD and RSIMPLS, are constructed from a robust covariance matrix for high-dimensional data and robust linear regression. We introduce robust RMSECV and RMSEP values for model calibration and model validation. Diagnostic plots are constructed to visualize and classify the outliers. Several simulation results and the analysis of real data sets show the effectiveness and the robustness of the new approaches. Because RSIMPLS is roughly twice as fast as RSIMCD, it stands out as the overall best method.", "title": "" }, { "docid": "08e121203b159b7d59f17d65a33580f4", "text": "In recent years, research on reading-comprehension question and answering has drawn intense attention in Natural Language Processing. However, it is still a key issue to learn the high-level semantic vector representation of question and paragraph. Drawing inspiration from DrQA [1], which is a question and answering system proposed by Facebook, this paper proposes an attention-based question and answering model which adds the binary representation of the paragraph, the paragraph's attention to the question, and the question's attention to the paragraph. Meanwhile, a self-attention calculation method is proposed to enhance the question semantic vector representation. Besides, it uses multi-layer bidirectional Long Short-Term Memory (BiLSTM) networks to calculate the high-level semantic vector representations of paragraphs and questions. Finally, bilinear functions are used to calculate the probability of the answer's position in the paragraph. The experimental results on the Stanford Question Answering Dataset (SQuAD) development set show that the F1 score is 80.1% and the EM is 71.4%, which demonstrates that the performance of the model is better than that of the model of DrQA, since they increase by 2% and 1.3% respectively.", "title": "" }, { "docid": "a27660db1d7d2a6724ce5fd8991479f7", "text": "An electromyographic (EMG) activity pattern for individual muscles in the gait cycle exhibits a great deal of intersubject, intermuscle and context-dependent variability. Here we examined the issue of common underlying patterns by applying factor analysis to the set of EMG records obtained at different walking speeds and gravitational loads. To this end healthy subjects were asked to walk on a treadmill at speeds of 1, 2, 3 and 5 kmh(-1) as well as when 35-95% of the body weight was supported using a harness. We recorded from 12-16 ipsilateral leg and trunk muscles using both surface and intramuscular recording and determined the average, normalized EMG of each record for 10-15 consecutive step cycles. 
We identified five basic underlying factors or component waveforms that can account for about 90% of the total waveform variance across different muscles during normal gait. Furthermore, while activation patterns of individual muscles could vary dramatically with speed and gravitational load, both the limb kinematics and the basic EMG components displayed only limited changes. Thus, we found a systematic phase shift of all five factors with speed in the same direction as the shift in the onset of the swing phase. This tendency for the factors to be timed according to the lift-off event supports the idea that the origin of the gait cycle generation is the propulsion rather than heel strike event. The basic invariance of the factors with walking speed and with body weight unloading implies that a few oscillating circuits drive the active muscles to produce the locomotion kinematics. A flexible and dynamic distribution of these basic components to the muscles may result from various descending and proprioceptive signals that depend on the kinematic and kinetic demands of the movements.", "title": "" }, { "docid": "b6a8f45bd10c30040ed476b9d11aa908", "text": "PixelCNNs are a recently proposed class of powerful generative models with tractable likelihood. Here we discuss our implementation of PixelCNNs which we make available at https://github.com/openai/pixel-cnn. Our implementation contains a number of modifications to the original model that both simplify its structure and improve its performance. 1) We use a discretized logistic mixture likelihood on the pixels, rather than a 256-way softmax, which we find to speed up training. 2) We condition on whole pixels, rather than R/G/B sub-pixels, simplifying the model structure. 3) We use downsampling to efficiently capture structure at multiple resolutions. 4) We introduce additional short-cut connections to further speed up optimization. 5) We regularize the model using dropout. Finally, we present state-of-the-art log likelihood results on CIFAR-10 to demonstrate the usefulness of these modifications.", "title": "" }, { "docid": "39e332a58625a12ef3e14c1a547a8cad", "text": "This paper presents an overview of the recent achievements in the held of substrate integrated waveguides (SIW) technology, with particular emphasis on the modeling strategy and design considerations of millimeter-wave integrated circuits as well as the physical interpretation of the operation principles and loss mechanisms of these structures. The most common numerical methods for modeling both SIW interconnects and circuits are presented. Some considerations and guidelines for designing SIW structures, interconnects and circuits are discussed, along with the physical interpretation of the major issues related to radiation leakage and losses. Examples of SIW circuits and components operating in the microwave and millimeter wave bands are also reported, with numerical and experimental results.", "title": "" }, { "docid": "49517920ddecf10a384dc3e98e39459b", "text": "Machine learning models are vulnerable to adversarial examples: small changes to images can cause computer vision models to make mistakes such as identifying a school bus as an ostrich. However, it is still an open question whether humans are prone to similar mistakes. 
Here, we address this question by leveraging recent techniques that transfer adversarial examples from computer vision models with known parameters and architecture to other models with unknown parameters and architecture, and by matching the initial processing of the human visual system. We find that adversarial examples that strongly transfer across computer vision models influence the classifications made by time-limited human observers.", "title": "" }, { "docid": "5f8ac79ad733d031ecaff19a748666e2", "text": "Decision making techniques used to help evaluate current suppliers should aim at classifying performance of individual suppliers against desired levels of performance so as to devise suitable action plans to increase suppliers' performance and capabilities. Moreover, decision making related to what course of action to take for a particular supplier depends on the evaluation of short and long term factors of performance, as well as on the type of item to be supplied. However, most of the propositions found in the literature do not consider the type of supplied item and are more suitable for ordering suppliers rather than categorizing them. To deal with this limitation, this paper presents a new approach based on fuzzy inference combined with the simple fuzzy grid method to help decisionmaking in the supplier evaluation for development. This approach follows a procedure for pattern classification based on decision rules to categorize supplier performance according to the item category so as to indicate strengths and weaknesses of current suppliers, helping decision makers review supplier development action plans. Applying the method to a company in the automotive sector shows that it brings objectivity and consistency to supplier evaluation, supporting consensus building through the decision making process. Critical items can be identified which aim at proposing directives for managing and developing suppliers for leverage, bottleneck and strategic items. It also helps to identify suppliers in need of attention or suppliers that should be replaced. & 2014 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "d09b5d295fb78756cc6141471a2415a3", "text": "One-point (or n-point) crossover has the property that schemata exhibited by both parents are ‘respected’transferred to the offspring without disruption. In addition, new schemata may, potentially, be created by combination of the genes on which the parents differ. Some argue that the preservation of similarity is the important aspect of crossover, and that the combination of differences (key to the building-block hypothesis) is unlikely to be valuable. In this paper, we discuss the operation of recombination on a hierarchical buildingblock problem. Uniform crossover, which preserves similarity, fails on this problem. Whereas, one-point crossover, that both preserves similarity and combines differences, succeeds. In fact, a somewhat perverse recombination operator, that combines differences but destroys schemata that are common to both parents, also succeeds. Thus, in this problem, combination of schemata from dissimilar parents is required, and preserving similarity is not required. The test problem represents an extreme case, but it serves to illustrate the different aspects of recombination that are available in regular operators such as one-point crossover.", "title": "" }, { "docid": "0d0fae25e045c730b68d63e2df1dfc7f", "text": "It is very difficult to over-emphasize the benefits of accurate data. 
Errors in data are generally the most expensive aspect of data entry, costing the users even much more compared to the original data entry. Unfortunately, these costs are intangibles or difficult to measure. If errors are detected at an early stage then it requires little cost to remove the errors. Incorrect and misleading data lead to all sorts of unpleasant and unnecessary expenses. Unluckily, it would be very expensive to correct the errors after the data has been processed, particularly when the processed data has been converted into the knowledge for decision making. No doubt a stitch in time saves nine i.e. a timely effort will prevent more work at later stage. Moreover, time spent in processing errors can also have a significant cost. One of the major problems with automated data entry systems are errors. In this paper we discuss many well known techniques to minimize errors, different cleansing approaches and, suggest how we can improve accuracy rate. Framework available for data cleansing offer the fundamental services such as attribute selection, formation of tokens, selection of clustering algorithms, selection of eliminator functions etc.", "title": "" }, { "docid": "75233d6d94fec1f43fa02e8043470d4d", "text": "Out-of-autoclave (OoA) prepreg materials and methods have gained acceptance over the past decade because of the ability to produce autoclave-quality components under vacuum-bag-only (VBO) cure. To achieve low porosity and tight dimensional tolerances, VBO prepregs rely on specific microstructural features and processing techniques. Furthermore, successful cure is contingent upon appropriate material property and process parameter selection. In this article, we review the existing literature on VBO prepreg processing to summarize and synthesize knowledge on these issues. First, the context, development, and defining properties of VBO prepregs are presented. The key processing phenomena and the influence on quality are subsequently described. Finally, cost and environmental performance are considered. Throughout, we highlight key considerations for VBO prepreg processing and identify areas where further study is required.", "title": "" }, { "docid": "81e49c8763f390e4b86968ff91214b5a", "text": "Choreographies allow business and service architects to specify with a global perspective the requirements of applications built over distributed and interacting software entities. While being a standard for the abstract specification of business workflows and collaboration between services, the Business Process Modeling Notation (BPMN) has only been recently extended into BPMN 2.0 to support an interaction model of choreography, which, as opposed to interconnected interface models, is better suited to top-down development processes. An important issue with choreographies is real-izability, i.e., whether peers obtained via projection from a choreography interact as prescribed in the choreography requirements. In this work, we propose a realizability checking approach for BPMN 2.0 choreographies. Our approach is formally grounded on a model transformation into the LOTOS NT process algebra and the use of equivalence checking. It is also completely tool-supported through interaction with the Eclipse BPMN 2.0 editor and the CADP process algebraic toolbox.", "title": "" } ]
scidocsrr
ce75749e2f558ac953323ec5541b7b67
Analysis of the 802.11i 4-way handshake
[ { "docid": "8dcb99721a06752168075e6d45ee64c7", "text": "The convenience of 802.11-based wireless access networks has led to widespread deployment in the consumer, industrial and military sectors. However, this use is predicated on an implicit assumption of confidentiality and availability. While the secu­ rity flaws in 802.11’s basic confidentially mechanisms have been widely publicized, the threats to network availability are far less widely appreciated. In fact, it has been suggested that 802.11 is highly suscepti­ ble to malicious denial-of-service (DoS) attacks tar­ geting its management and media access protocols. This paper provides an experimental analysis of such 802.11-specific attacks – their practicality, their ef­ ficacy and potential low-overhead implementation changes to mitigate the underlying vulnerabilities.", "title": "" } ]
[ { "docid": "3653e29e71d70965317eb4c450bc28da", "text": "This paper comprises an overview of different aspects for wire tension control devices and algorithms according to the state of industrial use and state of research. Based on a typical winding task of an orthocyclic winding scheme, possible new principles for an alternative piezo-electric actuator and an electromechanical tension control will be derived and presented.", "title": "" }, { "docid": "3eebecff1cb89f5490602f43717902b7", "text": "Radiation therapy (RT) is an integral part of prostate cancer treatment across all stages and risk groups. Immunotherapy using a live, attenuated, Listeria monocytogenes-based vaccines have been shown previously to be highly efficient in stimulating anti-tumor responses to impact on the growth of established tumors in different tumor models. Here, we evaluated the combination of RT and immunotherapy using Listeria monocytogenes-based vaccine (ADXS31-142) in a mouse model of prostate cancer. Mice bearing PSA-expressing TPSA23 tumor were divided to 5 groups receiving no treatment, ADXS31-142, RT (10 Gy), control Listeria vector and combination of ADXS31-142 and RT. Tumor growth curve was generated by measuring the tumor volume biweekly. Tumor tissue, spleen, and sera were harvested from each group for IFN-γ ELISpot, intracellular cytokine assay, tetramer analysis, and immunofluorescence staining. There was a significant tumor growth delay in mice that received combined ADXS31-142 and RT treatment as compared with mice of other cohorts and this combined treatment causes complete regression of their established tumors in 60 % of the mice. ELISpot and immunohistochemistry of CD8+ cytotoxic T Lymphocytes (CTL) showed a significant increase in IFN-γ production in mice with combined treatment. Tetramer analysis showed a fourfold and a greater than 16-fold increase in PSA-specific CTLs in animals receiving ADXS31-142 alone and combination treatment, respectively. A similar increase in infiltration of CTLs was observed in the tumor tissues. Combination therapy with RT and Listeria PSA vaccine causes significant tumor regression by augmenting PSA-specific immune response and it could serve as a potential treatment regimen for prostate cancer.", "title": "" }, { "docid": "89fd46da8542a8ed285afb0cde9cc236", "text": "Collaborative Filtering with Implicit Feedbacks (e.g., browsing or clicking records), named as CF-IF, is demonstrated to be an effective way in recommender systems. Existing works of CF-IF can be mainly classified into two categories, i.e., point-wise regression based and pairwise ranking based, where the latter one relaxes assumption and usually obtains better performance in empirical studies. In real applications, implicit feedback is often very sparse, causing CF-IF based methods to degrade significantly in recommendation performance. In this case, side information (e.g., item content) is usually introduced and utilized to address the data sparsity problem. Nevertheless, the latent feature representation learned from side information by topic model may not be very effective when the data is too sparse. To address this problem, we propose collaborative deep ranking (CDR), a hybrid pair-wise approach with implicit feedback, which leverages deep feature representation of item content into Bayesian framework of pair-wise ranking model in this paper. 
The experimental analysis on a real-world dataset shows CDR outperforms three state-of-art methods in terms of recall metric under different sparsity level.", "title": "" }, { "docid": "06cc255e124702878e2106bf0e8eb47c", "text": "Agent technology has been recognized as a promising paradigm for next generation manufacturing systems. Researchers have attempted to apply agent technology to manufacturing enterprise integration, enterprise collaboration (including supply chain management and virtual enterprises), manufacturing process planning and scheduling, shop floor control, and to holonic manufacturing as an implementation methodology. This paper provides an update review on the recent achievements in these areas, and discusses some key issues in implementing agent-based manufacturing systems such as agent encapsulation, agent organization, agent coordination and negotiation, system dynamics, learning, optimization, security and privacy, tools and standards. 2006 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "f2492c40f98e3cccc3ac3ab7accf4af7", "text": "Accurate detection of single-trial event-related potentials (ERPs) in the electroencephalogram (EEG) is a difficult problem that requires efficient signal processing and machine learning techniques. Supervised spatial filtering methods that enhance the discriminative information in EEG data are commonly used to improve single-trial ERP detection. We propose a convolutional neural network (CNN) with a layer dedicated to spatial filtering for the detection of ERPs and with training based on the maximization of the area under the receiver operating characteristic curve (AUC). The CNN is compared with three common classifiers: 1) Bayesian linear discriminant analysis; 2) multilayer perceptron (MLP); and 3) support vector machines. Prior to classification, the data were spatially filtered with xDAWN (for the maximization of the signal-to-signal-plus-noise ratio), common spatial pattern, or not spatially filtered. The 12 analytical techniques were tested on EEG data recorded in three rapid serial visual presentation experiments that required the observer to discriminate rare target stimuli from frequent nontarget stimuli. Classification performance discriminating targets from nontargets depended on both the spatial filtering method and the classifier. In addition, the nonlinear classifier MLP outperformed the linear methods. Finally, training based AUC maximization provided better performance than training based on the minimization of the mean square error. The results support the conclusion that the choice of the systems architecture is critical and both spatial filtering and classification must be considered together.", "title": "" }, { "docid": "25e50a3e98b58f833e1dd47aec94db21", "text": "Sharing knowledge for multiple related machine learning tasks is an effective strategy to improve the generalization performance. In this paper, we investigate knowledge sharing across categories for action recognition in videos. The motivation is that many action categories are related, where common motion pattern are shared among them (e.g. diving and high jump share the jump motion). We propose a new multi-task learning method to learn latent tasks shared across categories, and reconstruct a classifier for each category from these latent tasks. 
Compared to previous methods, our approach has two advantages: (1) The learned latent tasks correspond to basic motion patterns instead of full actions, thus enhancing discrimination power of the classifiers. (2) Categories are selected to share information with a sparsity regularizer, avoiding falsely forcing all categories to share knowledge. Experimental results on multiple public data sets show that the proposed approach can effectively transfer knowledge between different action categories to improve the performance of conventional single task learning methods.", "title": "" }, { "docid": "3467f4be08c4b8d6cd556f04f324ce67", "text": "Round robin arbiter (RRA) is a critical block in nowadays designs. It is widely found in System-on-chips and Network-on-chips. The need of an efficient RRA has increased extensively as it is a limiting performance block. In this paper, we deliver a comparative review between different RRA architectures found in literature. We also propose a novel efficient RRA architecture. The FPGA implementation results of the previous RRA architectures and our proposed one are given, that show the improvements of the proposed RRA.", "title": "" }, { "docid": "c69e002a71132641947d8e30bb2e74f7", "text": "In this paper, we investigate a new stealthy attack simultaneously compromising actuators and sensors. This attack is referred to as coordinated attack. We show that the coordinated attack is capable of deriving the system states far away from the desired without being detected. Furthermore, designing such an attack practically does not require knowledge on target systems, which makes the attack much more dangerous compared to the other known attacks. Also, we present a method to detect the coordinated attack. To validate the effect of the proposed attack, we carry out experiments using a quadrotor.", "title": "" }, { "docid": "7f68d6a6432f55684ad79a4f79406dab", "text": "Half of patients with heart failure (HF) have a preserved left ventricular ejection fraction (HFpEF). Morbidity and mortality in HFpEF are similar to values observed in patients with HF and reduced EF, yet no effective treatment has been identified. While early research focused on the importance of diastolic dysfunction in the pathophysiology of HFpEF, recent studies have revealed that multiple non-diastolic abnormalities in cardiovascular function also contribute. Diagnosis of HFpEF is frequently challenging and relies upon careful clinical evaluation, echo-Doppler cardiography, and invasive haemodynamic assessment. In this review, the principal mechanisms, diagnostic approaches, and clinical trials are reviewed, along with a discussion of novel treatment strategies that are currently under investigation or hold promise for the future.", "title": "" }, { "docid": "3edf5d1cce2a26fbf5c2cc773649629b", "text": "We conducted three experiments to investigate the mental images associated with idiomatic phrases in English. Our hypothesis was that people should have strong conventional images for many idioms and that the regularity in people's knowledge of their images for idioms is due to the conceptual metaphors motivating the figurative meanings of idioms. In the first study, subjects were asked to form and describe their mental images for different idiomatic expressions. Subjects were then asked a series of detailed questions about their images regarding the causes and effects of different events within their images. 
We found high consistency in subjects' images of idioms with similar figurative meanings despite differences in their surface forms (e.g., spill the beans and let the cat out of the bag). Subjects' responses to detailed questions about their images also showed a high degree of similarity in their answers. Further examination of subjects' imagery protocols supports the idea that the conventional images and knowledge associated with idioms are constrained by the conceptual metaphors (e.g., the MIND IS A CONTAINER and IDEAS ARE ENTITIES) which motivate the figurative meanings of idioms. The results of two control studies showed that the conventional images associated with idioms are not solely based on their figurative meanings (Experiment 2) and that the images associated with literal phrases (e.g., spill the peas) were quite varied and unlikely to be constrained by conceptual metaphor (Experiment 3). These findings support the view that idioms are not \"dead\" metaphors with their meanings being arbitrarily determined. Rather, the meanings of many idioms are motivated by speakers' tacit knowledge of the conceptual metaphors underlying the meanings of these figurative phrases.", "title": "" }, { "docid": "69ced55a44876f7cc4e57f597fcd5654", "text": "A wideband circularly polarized (CP) antenna with a conical radiation pattern is investigated. It consists of a feeding probe and parasitic dielectric parallelepiped elements that surround the probe. Since the structure of the antenna looks like a bird nest, it is named as bird-nest antenna. The probe, which protrudes from a circular ground plane, operates in its fundamental monopole mode that generates omnidirectional linearly polarized (LP) fields. The dielectric parallelepipeds constitute a wave polarizer that converts omnidirectional LP fields of the probe into omnidirectional CP fields. To verify the design, a prototype operating in C band was fabricated and measured. The reflection coefficient, axial ratio (AR), radiation pattern, and antenna gain are studied, and reasonable agreement between the measured and simulated results is observed. The prototype has a 10-dB impedance bandwidth of 41.0% and a 3-dB AR bandwidth of as wide as 54.9%. A parametric study was carried out to characterize the proposed antenna. Also, a design guideline is given to facilitate designs of the antenna.", "title": "" }, { "docid": "db3abbca12b7a1c4e611aa3707f65563", "text": "This paper describes the background and methods for the prod uction of CIDOC-CRM compliant data sets from diverse collec tions of source data. The construction of such data sets is based on data in column format, typically exported for databases, as well as free text, typically created through scanning and OCR proce ssing or transcription.", "title": "" }, { "docid": "7db5807fc15aeb8dfe4669a8208a8978", "text": "This document is an output from a project funded by the UK Department for International Development (DFID) for the benefit of developing countries. The views expressed are not necessarily those of DFID. Contents Contents i List of tables ii List of figures ii List of boxes ii Acronyms iii Acknowledgements iv Summary 1 1. Introduction: why worry about disasters? 7 Objectives of this Study 7 Global disaster trends 7 Why donors should be concerned 9 What donors can do 9 2. What makes a disaster? 
11 Characteristics of a disaster 11 Disaster risk reduction 12 The diversity of hazards 12 Vulnerability and capacity, coping and adaptation 15 Resilience 16 Poverty and vulnerability: links and differences 16 'The disaster management cycle' 17 3. Why should disasters be a development concern? 19 3.1 Disasters hold back development 19 Disasters undermine efforts to achieve the Millennium Development Goals 19 Macroeconomic impacts of disasters 21 Reallocation of resources from development to emergency assistance 22 Disaster impact on communities and livelihoods 23 3.2 Disasters are rooted in development failures 25 Dominant development models and risk 25 Development can lead to disaster 26 Poorly planned attempts to reduce risk can make matters worse 29 Disaster responses can themselves exacerbate risk 30 3.3 'Disaster-proofing' development: what are the gains? 31 From 'vicious spirals' of failed development and disaster risk… 31 … to 'virtuous spirals' of risk reduction 32 Disaster risk reduction can help achieve the Millennium Development Goals 33 … and can be cost-effective 33 4. Why does development tend to overlook disaster risk? 36 4.1 Introduction 36 4.2 Incentive, institutional and funding structures 36 Political incentives and governance in disaster prone countries 36 Government-donor relations and moral hazard 37 Donors and multilateral agencies 38 NGOs 41 4.3 Lack of exposure to and information on disaster issues 41 4.4 Assumptions about the risk-reducing capacity of development 43 ii 5. Tools for better integrating disaster risk reduction into development 45 Introduction 45 Poverty Reduction Strategy Papers (PRSPs) 45 UN Development Assistance Frameworks (UNDAFs) 47 Country assistance plans 47 National Adaptation Programmes of Action (NAPAs) 48 Partnership agreements with implementing agencies and governments 49 Programme and project appraisal guidelines 49 Early warning and information systems 49 Risk transfer mechanisms 51 International initiatives and policy forums 51 Risk reduction performance targets and indicators for donors 52 6. Conclusions and recommendations 53 6.1 Main conclusions 53 6.2 Recommendations 54 Core recommendation …", "title": "" }, { "docid": "4a9a53444a74f7125faa99d58a5b0321", "text": "The new transformed read-write Web has resulted in a rapid growth of user generated content on the Web resulting into a huge volume of unstructured data. A substantial part of this data is unstructured text such as reviews and blogs. Opinion mining and sentiment analysis (OMSA) as a research discipline has emerged during last 15 years and provides a methodology to computationally process the unstructured data mainly to extract opinions and identify their sentiments. The relatively new but fast growing research discipline has changed a lot during these years. This paper presents a scientometric analysis of research work done on OMSA during 20 0 0–2016. For the scientometric mapping, research publications indexed in Web of Science (WoS) database are used as input data. The publication data is analyzed computationally to identify year-wise publication pattern, rate of growth of publications, types of authorship of papers on OMSA, collaboration patterns in publications on OMSA, most productive countries, institutions, journals and authors, citation patterns and an year-wise citation reference network, and theme density plots and keyword bursts in OMSA publications during the period. 
A somewhat detailed manual analysis of the data is also performed to identify popular approaches (machine learning and lexicon-based) used in these publications, levels (document, sentence or aspect-level) of sentiment analysis work done and major application areas of OMSA. The paper presents a detailed analytical mapping of OMSA research work and charts the progress of discipline on various useful parameters. © 2016 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "abc160fc578bb40935afa7aea93cf6ca", "text": "This study investigates the effect of leader and follower behavior on employee voice, team task responsibility and team effectiveness. This study distinguishes itself by including both leader and follower behavior as predictors of team effectiveness. In addition, employee voice and team task responsibility are tested as potential mediators of the relationship between task-oriented behaviors (informing, directing, verifying) and team effectiveness as well as the relationship between relation-oriented behaviors (positive feedback, intellectual stimulation, individual consideration) and team effectiveness. This cross-sectional exploratory study includes four methods: 1) inter-reliable coding of leader and follower behavior during staff meetings; 2) surveys of 57 leaders; 3) surveys of643 followers; 4) survey of 56 lean coaches. Regression analyses showed that both leaders and followers display more task-oriented behaviors opposed to relation-oriented behaviors during staff meetings. Contrary to the hypotheses, none of the observed leader behaviors positively influences employee voice, team task responsibility or team effectiveness. However, all three task-oriented follower behaviors indirectly influence team effectiveness. The findings from this research illustrate that follower behaviors has more influence on team effectiveness compared to leader behavior. Practical implications, strengths and limitations of the research are discussed. Moreover, future research directions including the mediating role of culture and psychological safety are proposed as well.", "title": "" }, { "docid": "e97c0bbb74534a16c41b4a717eed87d5", "text": "This paper is discussing about the road accident severity survey using data mining, where different approaches have been considered. We have collected research work carried out by different researchers based on road accidents. Article describing the review work in context of road accident case’s using data mining approach. The article is consisting of collections of methods in different scenario with the aim to resolve the road accident. Every method is somewhere seeming to productive in some ways to decrease the no of causality. It will give a better edge to different country where the no of accidents is leading to fatality of life.", "title": "" }, { "docid": "7539a738cad3a36336dc7019e2aabb21", "text": "In this paper a compact antenna for ultrawideband applications is presented. The antenna is based on the biconical antenna design and has two identical elements. Each element is composed of a cone extended with a ring and an inner cylinder. The modification of the well-known biconical structure is made in order to reduce the influence of the radiation of the feeding cable. To obtain the optimum parameters leading to a less impact of the cable effect on the antenna performance, during the optimization process the antenna was coupled with a feeding coaxial cable. 
The proposed antenna covers the frequency range from 1.5 to 41 GHz with voltage standing wave ratio below 2 and has an omnidirectional radiation pattern. The realized total efficiency is above 85 % which indicates a good performance.", "title": "" }, { "docid": "a87ba6d076c3c05578a6f6d9da22ac79", "text": "Here we review and extend a new unitary model for the pathophysiology of involutional osteoporosis that identifies estrogen (E) as the key hormone for maintaining bone mass and E deficiency as the major cause of age-related bone loss in both sexes. Also, both E and testosterone (T) are key regulators of skeletal growth and maturation, and E, together with GH and IGF-I, initiate a 3- to 4-yr pubertal growth spurt that doubles skeletal mass. Although E is required for the attainment of maximal peak bone mass in both sexes, the additional action of T on stimulating periosteal apposition accounts for the larger size and thicker cortices of the adult male skeleton. Aging women undergo two phases of bone loss, whereas aging men undergo only one. In women, the menopause initiates an accelerated phase of predominantly cancellous bone loss that declines rapidly over 4-8 yr to become asymptotic with a subsequent slow phase that continues indefinitely. The accelerated phase results from the loss of the direct restraining effects of E on bone turnover, an action mediated by E receptors in both osteoblasts and osteoclasts. In the ensuing slow phase, the rate of cancellous bone loss is reduced, but the rate of cortical bone loss is unchanged or increased. This phase is mediated largely by secondary hyperparathyroidism that results from the loss of E actions on extraskeletal calcium metabolism. The resultant external calcium losses increase the level of dietary calcium intake that is required to maintain bone balance. Impaired osteoblast function due to E deficiency, aging, or both also contributes to the slow phase of bone loss. Although both serum bioavailable (Bio) E and Bio T decline in aging men, Bio E is the major predictor of their bone loss. Thus, both sex steroids are important for developing peak bone mass, but E deficiency is the major determinant of age-related bone loss in both sexes.", "title": "" }, { "docid": "296705d6bfc09f58c8e732a469b17871", "text": "Computer security incident response teams (CSIRTs) respond to a computer security incident when the need arises. Failure of these teams can have far-reaching effects for the economy and national security. CSIRTs often have to work on an ad hoc basis, in close cooperation with other teams, and in time constrained environments. It could be argued that under these working conditions CSIRTs would be likely to encounter problems. A needs assessment was done to see to which extent this argument holds true. We constructed an incident response needs model to assist in identifying areas that require improvement. We envisioned a model consisting of four assessment categories: Organization, Team, Individual and Instrumental. Central to this is the idea that both problems and needs can have an organizational, team, individual, or technical origin or a combination of these levels. To gather data we conducted a literature review. This resulted in a comprehensive list of challenges and needs that could hinder or improve, respectively, the performance of CSIRTs. 
Then, semi-structured in depth interviews were held with team coordinators and team members of five public and private sector Dutch CSIRTs to ground these findings in practice and to identify gaps between current and desired incident handling practices. This paper presents the findings of our needs assessment and ends with a discussion of potential solutions to problems with performance in incident response.", "title": "" }, { "docid": "ac57fab046cfd02efa1ece262b07492f", "text": "Interactive Narrative is an approach to interactive entertainment that enables the player to make decisions that directly affect the direction and/or outcome of the narrative experience being delivered by the computer system. Interactive narrative requires two seemingly conflicting requirements: coherent narrative and user agency. We present an interactive narrative system that uses a combination of narrative control and autonomous believable character agents to augment a story world simulation in which the user has a high degree of agency with narrative plot control. A drama manager called the Automated Story Director gives plot-based guidance to believable agents. The believable agents are endowed with the autonomy necessary to carry out directives in the most believable fashion possible. Agents also handle interaction with the user. When the user performs actions that change the world in such a way that the Automated Story Director can no longer drive the intended narrative forward, it is able to adapt the plot to incorporate the user’s changes and still achieve", "title": "" } ]
scidocsrr
e0458ea6464048855c2b65819e927bb8
Towards correct network virtualization
[ { "docid": "6dc1a6c032196a748e005ce49d735752", "text": "Network virtualization is a powerful way to run multiple architectures or experiments simultaneously on a shared infrastructure. However, making efficient use of the underlying resources requires effective techniques for virtual network embedding--mapping each virtual network to specific nodes and links in the substrate network. Since the general embedding problem is computationally intractable, past research restricted the problem space to allow efficient solutions, or focused on designing heuristic algorithms. In this paper, we advocate a different approach: rethinking the design of the substrate network to enable simpler embedding algorithms and more efficient use of resources, without restricting the problem space. In particular, we simplify virtual link embedding by: i) allowing the substrate network to split a virtual link over multiple substrate paths and ii) employing path migration to periodically re-optimize the utilization of the substrate network. We also explore node-mapping algorithms that are customized to common classes of virtual-network topologies. Our simulation experiments show that path splitting, path migration,and customized embedding algorithms enable a substrate network to satisfy a much larger mix of virtual networks", "title": "" } ]
[ { "docid": "5d44349955d07a212bc11f6edfaec8b0", "text": "This investigation develops an innovative algorithm for multiple autonomous unmanned aerial vehicle (UAV) mission routing. The concept of a UAV Swarm Routing Problem (SRP) as a new combinatorics problem, is developed as a variant of the Vehicle Routing Problem with Time Windows (VRPTW). Solutions of SRP problem model result in route assignments per vehicle that successfully track to all targets, on time, within distance constraints. A complexity analysis and multi-objective formulation of the VRPTW indicates the necessity of a stochastic solution approach leading to a multi-objective evolutionary algorithm. A full problem definition of the SRP as well as a multi-objective formulation parallels that of the VRPTW method. Benchmark problems for the VRPTW are modified in order to create SRP benchmarks. The solutions show the SRP solutions are comparable or better than the same VRPTW solutions, while also representing a more realistic UAV swarm routing solution.", "title": "" }, { "docid": "f850321173db137674eb74a0dd2afc30", "text": "The relational data model has been dominant and widely used since 1970. However, as the need to deal with big data grows, new data models, such as Hadoop and NoSQL, were developed to address the limitation of the traditional relational data model. As a result, determining which data model is suitable for applications has become a challenge. The purpose of this paper is to provide insight into choosing the suitable data model by conducting a benchmark using Yahoo! Cloud Serving Benchmark (YCSB) on three different database systems: (1) MySQL for relational data model, (2) MongoDB for NoSQL data model, and (3) HBase for Hadoop framework. The benchmark was conducted by running four different workloads. Each workload is executed using a different increasing operation and thread count, while observing how their change respectively affects throughput, latency, and runtime.", "title": "" }, { "docid": "6ebb0bccba167e4b093e7832621e3e23", "text": "Bump-less Cu/adhesive hybrid bonding is a promising technology for 2.5D/3D integration. The remaining issues of this technology include high Cu–Cu bonding temperature, long thermal-compression time (low throughput), and large thermal stress. In this paper, we investigate a Cu-first hybrid bonding process in hydrogen(H)-containing formic acid (HCOOH) vapor ambient, lowering the bonding temperature to 180 °C and shortening the thermal-compression time to 600 s. We find that the H-containing HCOOH vapor pre-bonding treatment is effective for Cu surface activation and friendly to adhesives at treatment temperature of 160–200 °C. The effects of surface activation (temperature and time) on Cu–Cu bonding and cyclo-olefin polymer (COP) adhesive bonding are studied by shear tests, fracture surface observations, and interfacial observations. Cu/adhesive hybrid bonding was successfully demonstrated at a bonding temperature of 180 °C with post-bonding adhesive curing at 200 °C.", "title": "" }, { "docid": "1683cf711705b78b9465d8053a94b473", "text": "In this paper, we investigate the problem of counting rosette leaves from an RGB image, an important task in plant phenotyping. We propose a data-driven approach for this task generalized over different plant species and imaging setups. To accomplish this task, we use state-of-the-art deep learning architectures: a deconvolutional network for initial segmentation and a convolutional network for leaf counting. 
Evaluation is performed on the leaf counting challenge dataset at CVPPP-2017. Despite the small number of training samples in this dataset, as compared to typical deep learning image sets, we obtain satisfactory performance on segmenting leaves from the background as a whole and counting the number of leaves using simple data augmentation strategies. Comparative analysis is provided against methods evaluated on the previous competition datasets. Our framework achieves mean and standard deviation of absolute count difference of 1.62 and 2.30 averaged over all five test datasets.", "title": "" }, { "docid": "eaa6daff2f28ea7f02861e8c67b9c72b", "text": "The demand of fused magnesium furnaces (FMFs) refers to the average value of the power of the FMFs over a fixed period of time before the current time. The demand is an indicator of the electricity consumption of high energy-consuming FMFs. When the demand exceeds the limit of the Peak Demand (a predetermined maximum demand), the power supply of some FMF will be cut off to ensure that the demand is no more than Peak Demand. But the power cutoff will destroy the heat balance, reduce the quality and yield of the product. The composition change of magnesite in FMFs will cause demand spike occasionally, which a sudden increase in demand exceeds the limit and then drops below the limit. As a result, demand spike cause the power cutoff. In order to avoid the power cutoff at the moment of demand spike, the demand of FMFs needs to be forecasted. This paper analyzes the dynamic model of the demand of FMFs, using the power data, presents a data-driven demand forecasting method. This method consists of the following: PACF based decision module for the number of the input variables of the forecasting model, RBF neural network (RBFNN) based power variation rate forecasting model and demand forecasting model. Simulations based on actual data and industrial experiments at a fused magnesia plant show the effectiveness of the proposed method.", "title": "" }, { "docid": "3fdd81a3e2c86f43152f72e159735a42", "text": "Class imbalance learning tackles supervised learning problems where some classes have significantly more examples than others. Most of the existing research focused only on binary-class cases. In this paper, we study multiclass imbalance problems and propose a dynamic sampling method (DyS) for multilayer perceptrons (MLP). In DyS, for each epoch of the training process, every example is fed to the current MLP and then the probability of it being selected for training the MLP is estimated. DyS dynamically selects informative data to train the MLP. In order to evaluate DyS and understand its strength and weakness, comprehensive experimental studies have been carried out. Results on 20 multiclass imbalanced data sets show that DyS can outperform the compared methods, including pre-sample methods, active learning methods, cost-sensitive methods, and boosting-type methods.", "title": "" }, { "docid": "90241619360fe97b83e2777438a6c4f8", "text": "Although K-means clustering algorithm is simple and popular, it has a fundamental drawback of falling into local optima that depend on the randomly generated initial centroid values. Optimization algorithms are well known for their ability to guide iterative computation in searching for global optima. They also speed up the clustering process by achieving early convergence. 
Contemporary optimization algorithms inspired by biology, including the Wolf, Firefly, Cuckoo, Bat and Ant algorithms, simulate swarm behavior in which peers are attracted while steering towards a global objective. It is found that these bio-inspired algorithms have their own virtues and could be logically integrated into K-means clustering to avoid local optima during iteration to convergence. In this paper, the constructs of the integration of bio-inspired optimization methods into K-means clustering are presented. The extended versions of clustering algorithms integrated with bio-inspired optimization methods produce improved results. Experiments are conducted to validate the benefits of the proposed approach.", "title": "" }, { "docid": "ef0c5454b9b7854866712e897c29a198", "text": "This paper presents a new online clustering algorithm called SAFN which is used to learn continuously evolving clusters from non-stationary data. The SAFN uses a fast adaptive learning procedure to take into account variations over time. In non-stationary and multi-class environment, the SAFN learning procedure consists of five main stages: creation, adaptation, mergence, split and elimination. Experiments are carried out in three kinds of datasets to illustrate the performance of the SAFN algorithm for online clustering. Compared with SAKM algorithm, SAFN algorithm shows better performance in accuracy of clustering and multi-class high-dimension data.", "title": "" }, { "docid": "e66ae650db7c4c75a88ee6cf1ea8694d", "text": "Traditional recommender systems minimize prediction error with respect to users' choices. Recent studies have shown that recommender systems have a positive effect on the provider's revenue.\n In this paper we show that by providing a set of recommendations different than the one perceived best according to user acceptance rate, the recommendation system can further increase the business' utility (e.g. revenue), without any significant drop in user satisfaction. Indeed, the recommendation system designer should have in mind both the user, whose taste we need to reveal, and the business, which wants to promote specific content.\n We performed a large body of experiments comparing a commercial state-of-the-art recommendation engine with a modified recommendation list, which takes into account the utility (or revenue) which the business obtains from each suggestion that is accepted by the user. We show that the modified recommendation list is more desirable for the business, as the end result gives the business a higher utility (or revenue). To study possible reduce in satisfaction by providing the user worse suggestions, we asked the users how they perceive the list of recommendation that they received. Differences in user satisfaction between the lists is negligible, and not statistically significant.\n We also uncover a phenomenon where movie consumers prefer watching and even paying for movies that they have already seen in the past than movies that are new to them.", "title": "" }, { "docid": "13897df01d4c03191dd015a04c3a5394", "text": "Medical or Health related search queries constitute a significant portion of the total number of queries searched everyday on the web. For health queries, the authenticity or authoritativeness of search results is of utmost importance besides relevance. So far, research in automatic detection of authoritative sources on the web has mainly focused on a) link structure based approaches and b) supervised approaches for predicting trustworthiness. 
However, the aforementioned approaches have some inherent limitations. For example, several content farm and low quality sites artificially boost their link-based authority rankings by forming a syndicate of highly interlinked domains and content which is algorithmically hard to detect. Moreover, the number of positively labeled training samples available for learning trustworthiness is also limited when compared to the size of the web. In this paper, we propose a novel unsupervised approach to detect and promote authoritative domains in health segment using click-through data. We argue that standard IR metrics such as NDCG are relevance-centric and hence are not suitable for evaluating authority. We propose a new authority-centric evaluation metric based on side-by-side judgment of results. Using real world search query sets, we evaluate our approach both quantitatively and qualitatively and show that it succeeds in significantly improving the authoritativeness of results when compared to a standard web ranking baseline. ∗Corresponding Author", "title": "" }, { "docid": "3bba36e8f3d3a490681e82c8c3a10b11", "text": "This paper describes the design and implementation of programmable AXI bus Interface modules in Verilog Hardware Description Language (HDL) and implementation in Xilinx Spartan 3E FPGA. All the interface modules are reconfigurable with the data size, burst type, number of transfers in a burst. Multiple masters can communicate with different slave memory locations concurrently. An arbiter controls the burst grant to different bus masters based on Round Robin algorithm. Separate decoder modules are implemented for write address channel, write data channel, write response channel, read address channel, read data channel. The design can support a maximum of 16 masters. All the RTL simulations are performed using Modelsim RTL Simulator. Each independent module is synthesized in XC3S250EPQ208-5 FPGA and the maximum speed is found to be 298.958 MHz. All the design modules can be integrated to create a soft IP for the AXI BUS system.", "title": "" }, { "docid": "86aaee95a4d878b53fd9ee8b0735e208", "text": "The tensegrity concept has long been considered as a basis for lightweight and compact packaging deployable structures, but very few studies are available. This paper presents a complete design study of a deployable tensegrity mast with all the steps involved: initial formfinding, structural analysis, manufacturing and deployment. Closed-form solutions are used for the formfinding. A manufacturing procedure in which the cables forming the outer envelope of the mast are constructed by two-dimensional weaving is used. The deployment of the mast is achieved through the use of self-locking hinges. A stiffness comparison between the tensegrity mast and an articulated truss mast shows that the tensegrity mast is weak in bending.", "title": "" }, { "docid": "b0e94a0fdaf280d9e1942befdc4ac660", "text": "In SCARA robots, which are often used in industrial applications, all joint axes are parallel, covering three degrees of freedom in translation and one degree of freedom in rotation. Therefore, conventional approaches for the hand-eye calibration of articulated robots cannot be used for SCARA robots. In this paper, we present a new linear method that is based on dual quaternions and extends the work of Daniilid is 1999 (IJRR) for SCARA robots. To improve the accuracy, a subsequent nonlinear optimization is proposed. 
We address several practical implementation issues and show the effectiveness of the method by evaluating it on synthetic and real data.", "title": "" }, { "docid": "73f8a5e5e162cc9b1ed45e13a06e78a5", "text": "Two major projects in the U.S. and Europe have joined in a collaboration to work toward achieving interoperability among language resources. In the U.S., the project, Sustainable Interoperability for Language Technology (SILT) has been funded by the National Science Foundation under the INTEROP program, and in Europe, FLaReNet, Fostering Language Resources Network, has been funded by the European Commission under the eContentPlus framework. This international collaborative effort involves members of the language processing community and others working in related areas to build consensus regarding the sharing of data and technologies for language resources and applications, to work towards interoperability of existing data, and, where possible, to promote standards for annotation and resource building. This paper focuses on the results of a recent workshop whose goal was to arrive at operational definitions for interoperability over four thematic areas, including metadata for describing language resources, data categories and their semantics, resource publication requirements, and software sharing.", "title": "" }, { "docid": "ff67f2bbf20f5ad2bef6641e8e7e3deb", "text": "An observation one can make when reviewing the literature on physical activity is that health-enhancing exercise habits tend to wear off as soon as individuals enter adolescence. Therefore, exercise habits should be promoted and preserved early in life. This article focuses on the formation of physical exercise habits. First, the literature on motivational determinants of habitual exercise and related behaviours is discussed, and the concept of habit is further explored. Based on this literature, a theoretical model of exercise habit formation is proposed. More specifically, expanding on the idea that habits are the result of automated cognitive processes, it is argued that physical exercise habits are capable of being automatically activated by the situational features that normally precede these behaviours. These habits may enhance health as a result of consistent performance over a long period of time. Subsequently, obstacles to the formation of exercise habits are discussed and interventions that may anticipate these obstacles are presented. Finally, implications for theory and practice are briefly discussed.", "title": "" }, { "docid": "861b170e5da6941e2cf55d8b7d9799b6", "text": "Scaling wireless charging to power levels suitable for heavy duty passenger vehicles and mass transit bus requires indepth assessment of wireless power transfer (WPT) architectures, component sizing and stress, package size, electrical insulation requirements, parasitic loss elements, and cost minimization. It is demonstrated through an architecture comparison that the voltage rating of the power inverter semiconductors will be higher for inductor-capacitor-capacitor (LCC) than for a more conventional Series-Parallel (S-P) tuning. Higher voltage at the source inverter dc bus facilitates better utilization of the semiconductors, hence lower cost. Electrical and thermal stress factors of the passive components are explored, in particular the compensating capacitors and coupling coils. Experimental results are presented for a prototype, precommercial, 10 kW wireless charger designed for heavy duty (HD) vehicle application. 
Results are in good agreement with theory and validate a design that minimizes component stress.", "title": "" }, { "docid": "6a0c54fcac95f86df54a0508588aee61", "text": "Liveness detection (often referred to as presentation attack detection) is the ability to detect artificial objects presented to a biometric device with an intention to subvert the recognition system. This paper presents the database of iris printout images with a controlled quality, and its fundamental application, namely development of liveness detection method for iris recognition. The database gathers images of only those printouts that were accepted by an example commercial camera, i.e. the iris template calculated for an artefact was matched to the corresponding iris reference of the living eye. This means that the quality of the employed imitations is not accidental and precisely controlled. The database consists of 729 printout images for 243 different eyes, and 1274 images of the authentic eyes, corresponding to imitations. It may thus serve as a good benchmark for at least two challenges: a) assessment of the liveness detection algorithms, and b) assessment of the eagerness of matching real and fake samples by iris recognition methods. To our best knowledge, the iris printout database of such properties is the first worldwide published as of today. In its second part, the paper presents an example application of this database, i.e. the development of liveness detection method based on iris image frequency analysis. We discuss how to select frequency windows and regions of interest to make the method sensitive to “alien frequencies” resulting from the printing process. The proposed method shows a very promising results, since it may be configured to achieve no false alarms when the rate of accepting the iris printouts is approximately 5% (i.e. 95% of presentation attack trials are correctly identified). This favorable compares to the results of commercial equipment used in the database development, as this device accepted all the printouts used. The method employs the same image as used in iris recognition process, hence no investments into the capture devices is required, and may be applied also to other carriers for printed iris patterns, e.g. contact lens.", "title": "" }, { "docid": "43b9753d934d2e7598d6342a81f21bed", "text": "A system has been developed which is capable of inducing brain injuries of graded severity from mild concussion to instantaneous death. A pneumatic shock tester subjects a monkey to a non-impact controlled single sagittal rotation which displaces the head 60 degrees in 10-20 msec. Results derived from 53 experiments show that a good correlation exists between acceleration delivered to the head, the resultant neurological status and the brain pathology. A simple experimental trauma severity (ETS) scale is offered based on changes in the heart rate, respiratory rate, corneal reflex and survivability. ETS grades 1 and 2 show heart rate or respiratory changes but no behavioral or pathological abnormality. ETS grades 3 and 4 have temporary corneal reflex abolition, behavioral unconsciousness, and post-traumatic behavioral abnormalities. Occasional subdural haematomas are seen. Larger forces cause death (ETS 5) from primary apnea or from large subdural haematomas. At the extreme range, instantaneous death (ETS 6) occurs because of pontomedullary lacerations. 
This model and the ETS scale offer the ability to study a broad spectrum of types of experimental head injury and underscore the importance of angular acceleration as a mechanism of head injury.", "title": "" }, { "docid": "b4409a8e8a47bc07d20cebbfaccb83fd", "text": "We evaluate two decades of proposals to replace text passwords for general-purpose user authentication on the web using a broad set of twenty-five usability, deployability and security benefits that an ideal scheme might provide. The scope of proposals we survey is also extensive, including password management software, federated login protocols, graphical password schemes, cognitive authentication schemes, one-time passwords, hardware tokens, phone-aided schemes and biometrics. Our comprehensive approach leads to key insights about the difficulty of replacing passwords. Not only does no known scheme come close to providing all desired benefits: none even retains the full set of benefits that legacy passwords already provide. In particular, there is a wide range from schemes offering minor security benefits beyond legacy passwords, to those offering significant security benefits in return for being more costly to deploy or more difficult to use. We conclude that many academic proposals have failed to gain traction because researchers rarely consider a sufficiently wide range of real-world constraints. Beyond our analysis of current schemes, our framework provides an evaluation methodology and benchmark for future web authentication proposals.", "title": "" } ]
scidocsrr
16880cc223b10e55afce93c0630e34b5
Scheduling techniques for hybrid circuit/packet networks
[ { "docid": "8fcc8c61dd99281cfda27bbad4b7623a", "text": "Modern data centers are massive, and support a range of distributed applications across potentially hundreds of server racks. As their utilization and bandwidth needs continue to grow, traditional methods of augmenting bandwidth have proven complex and costly in time and resources. Recent measurements show that data center traffic is often limited by congestion loss caused by short traffic bursts. Thus an attractive alternative to adding physical bandwidth is to augment wired links with wireless links in the 60 GHz band.\n We address two limitations with current 60 GHz wireless proposals. First, 60 GHz wireless links are limited by line-of-sight, and can be blocked by even small obstacles. Second, even beamforming links leak power, and potential interference will severely limit concurrent transmissions in dense data centers. We propose and evaluate a new wireless primitive for data centers, 3D beamforming, where 60 GHz signals bounce off data center ceilings, thus establishing indirect line-of-sight between any two racks in a data center. We build a small 3D beamforming testbed to demonstrate its ability to address both link blockage and link interference, thus improving link range and number of concurrent transmissions in the data center. In addition, we propose a simple link scheduler and use traffic simulations to show that these 3D links significantly expand wireless capacity compared to their 2D counterparts.", "title": "" } ]
[ { "docid": "5b6daefbefd44eea4e317e673ad91da3", "text": "A three-dimensional (3-D) thermogram can provide spatial information; however, it is rarely applied because it lacks an accurate method in obtaining the intrinsic and extrinsic parameters of an infrared (IR) camera. Conventional methods cannot be used for such calibration because an IR camera cannot capture visible calibration patterns. Therefore, in the current study, a trinocular vision system composed of two visible cameras and an IR camera is constructed and a calibration board with miniature bulbs is designed. The two visible cameras compose a binocular vision system that obtains 3-D information from the miniature bulbs while the IR camera captures the calibration board to obtain the two dimensional subpixel coordinates of miniature bulbs. The corresponding algorithm is proposed to calibrate the IR camera based on the gathered information. Experimental results show that the proposed calibration can accurately obtain the intrinsic and extrinsic parameters of the IR camera, and meet the requirements of its application.", "title": "" }, { "docid": "74f674ddfd04959303bb89bd6ef22b66", "text": "Ethernet is the survivor of the LAN wars. It is hard to find an IP packet that has not passed over an Ethernet segment. One important reason for this is Ethernet's simplicity and ease of configuration. However, Ethernet has always been known to be an insecure technology. Recent successful malware attacks and the move towards cloud computing in data centers demand that attention be paid to the security aspects of Ethernet. In this paper, we present known Ethernet related threats and discuss existing solutions from business, hacker, and academic communities. Major issues, like insecurities related to Address Resolution Protocol and to self-configurability, are discussed. The solutions fall roughly into three categories: accepting Ethernet's insecurity and circling it with firewalls; creating a logical separation between the switches and end hosts; and centralized cryptography based schemes. However, none of the above provides the perfect combination of simplicity and security befitting Ethernet.", "title": "" }, { "docid": "9868b2a338911071e5e0553d6aa87eb7", "text": "This paper reports on a workshop in June 2007 on the topic of the insider threat. Attendees represented academia and research institutions, consulting firms, industry—especially the financial services sector, and government. Most participants were from the United States. Conventional wisdom asserts that insiders account for roughly a third of the computer security loss. Unfortunately, there is currently no way to validate or refute that assertion, because data on the insider threat problem is meager at best. Part of the reason so little data exists on the insider threat problem is that the concepts of insider and insider threat are not consistently defined. Consequently, it is hard to compare even the few pieces of insider threat data that do exist. Monitoring is a means of addressing the insider threat, although it is more successful to verify a case of suspected insider attack than it is to identify insider attacks. Monitoring has (negative) implications for personal privacy. However, companies generally have wide leeway to monitor the activity of their employees. Psychological profiling of potential insider attackers is appealing but may be hard to accomplish. 
More productive may be using psychological tools to promote positive behavior on the part of employees.", "title": "" }, { "docid": "b200836d9046e79b61627122419d93c4", "text": "Digital evidence plays a vital role in determining legal case admissibility in electronic- and cyber-oriented crimes. Considering the complicated level of the Internet of Things (IoT) technology, performing the needed forensic investigation will be definitely faced by a number of challenges and obstacles, especially in digital evidence acquisition and analysis phases. Based on the currently available network forensic methods and tools, the performance of IoT forensic will be producing a deteriorated digital evidence trail due to the sophisticated nature of IoT connectivity and data exchangeability via the “things”. In this paper, a revision of IoT digital evidence acquisition procedure is provided. In addition, an improved theoretical framework for IoT forensic model that copes with evidence acquisition issues is proposed and discussed.", "title": "" }, { "docid": "e13b4b92c639a5b697356466e00e05c3", "text": "In fashion retailing, the display of product inventory at the store is important to capture consumers’ attention. Higher inventory levels might allow more attractive displays and thus increase sales, in addition to avoiding stock-outs. We develop a choice model where product demand is indeed affected by inventory, and controls for product and store heterogeneity, seasonality, promotions and potential unobservable shocks in each market. We empirically test the model with daily traffic, inventory and sales data from a large retailer, at the store-day-product level. We find that the impact of inventory level on sales is positive and highly significant, even in situations of extremely high service level. The magnitude of this effect is large: each 1% increase in product-level inventory at the store increases sales of 0.58% on average. This supports the idea that inventory has a strong role in helping customers choose a particular product within the assortment. We finally describe how a retailer should optimally decide its inventory levels within a category and describe the properties of the optimal solution. Applying such optimization to our data set yields consistent and significant revenue improvements, of more than 10% for any date and store compared to current practices. Submitted: April 6, 2016. Revised: May 17, 2017", "title": "" }, { "docid": "cc8e52fdb69a9c9f3111287905f02bfc", "text": "We present a new methodology for exploring and analyzing navigation patterns on a web site. The patterns that can be analyzed consist of sequences of URL categories traversed by users. In our approach, we first partition site users into clusters such that users with similar navigation paths through the site are placed into the same cluster. Then, for each cluster, we display these paths for users within that cluster. The clustering approach we employ is model-based (as opposed to distance-based) and partitions users according to the order in which they request web pages. In particular, we cluster users by learning a mixture of first-order Markov models using the Expectation-Maximization algorithm. The runtime of our algorithm scales linearly with the number of clusters and with the size of the data; and our implementation easily handles hundreds of thousands of user sessions in memory. In the paper, we describe the details of our method and a visualization tool based on it called WebCANVAS. 
We illustrate the use of our approach on user-traffic data from msnbc.com.", "title": "" }, { "docid": "acab6a0a8b5e268cd0a5416bd00b4f55", "text": "We propose SocialFilter, a trust-aware collaborative spam mitigation system. Our proposal enables nodes with no email classification functionality to query the network on whether a host is a spammer. It employs Sybil-resilient trust inference to weigh the reports concerning spamming hosts that collaborating spam-detecting nodes (reporters) submit to the system. It weighs the spam reports according to the trustworthiness of their reporters to derive a measure of the system's belief that a host is a spammer. SocialFilter is the first collaborative unwanted traffic mitigation system that assesses the trustworthiness of spam reporters by both auditing their reports and by leveraging the social network of the reporters' administrators. The design and evaluation of our proposal offers us the following lessons: a) it is plausible to introduce Sybil-resilient Online-Social-Network-based trust inference mechanisms to improve the reliability and the attack-resistance of collaborative spam mitigation; b) using social links to obtain the trustworthiness of reports concerning spammers can result in comparable spam-blocking effectiveness with approaches that use social links to rate-limit spam (e.g., Ostra [27]); c) unlike Ostra, in the absence of reports that incriminate benign email senders, SocialFilter yields no false positives.", "title": "" }, { "docid": "dfc383a057aa4124dfc4237e607c321a", "text": "Obfuscation is applied to large quantities of benign and malicious JavaScript throughout the web. In situations where JavaScript source code is being submitted for widespread use, such as in a gallery of browser extensions (e.g., Firefox), it is valuable to require that the code submitted is not obfuscated and to check for that property. In this paper, we describe NOFUS, a static, automatic classifier that distinguishes obfuscated and non-obfuscated JavaScript with high precision. Using a collection of examples of both obfuscated and non-obfuscated JavaScript, we train NOFUS to distinguish between the two and show that the classifier has both a low false positive rate (about 1%) and low false negative rate (about 5%). Applying NOFUS to collections of deployed JavaScript, we show it correctly identifies obfuscated JavaScript files from Alexa top 50 websites. While prior work conflates obfuscation with maliciousness (assuming that detecting obfuscation implies maliciousness), we show that the correlation is weak. Yes, much malware is hidden using obfuscation, but so is benign JavaScript. Further, applying NOFUS to known JavaScript malware, we show our classifier finds 15% of the files are unobfuscated, showing that not all malware is obfuscated.", "title": "" }, { "docid": "6b3db3006f8314559bbbe41620466c6e", "text": "Segmentation of anatomical structures in medical images is often based on a voxel/pixel classification approach. Deep learning systems, such as convolutional neural networks (CNNs), can infer a hierarchical representation of images that fosters categorization. We propose a novel system for voxel classification integrating three 2D CNNs, which have a one-to-one association with the xy, yz and zx planes of 3D image, respectively. We applied our method to the segmentation of tibial cartilage in low field knee MRI scans and tested it on 114 unseen scans. 
Although our method uses only 2D features at a single scale, it performs better than a state-of-the-art method using 3D multi-scale features. In the latter approach, the features and the classifier have been carefully adapted to the problem at hand. That we were able to get better results by a deep learning architecture that autonomously learns the features from the images is the main insight of this study.", "title": "" }, { "docid": "e120320dbe8fa0e2475b96a0b07adec8", "text": "BACKGROUND\nProne hip extension (PHE) is a common and widely accepted test used for assessment of the lumbo-pelvic movement pattern. Considerable increased in lumbar lordosis during this test has been considered as impairment of movement patterns in lumbo-pelvic region. The purpose of this study was to investigate the change of lumbar lordosis in PHE test in subjects with and without low back pain (LBP).\n\n\nMETHOD\nA two-way mixed design with repeated measurements was used to investigate the lumbar lordosis changes during PHE in two groups of subjects with and without LBP. An equal number of subjects (N = 30) were allocated to each group. A standard flexible ruler was used to measure the size of lumbar lordosis in prone-relaxed position and PHE test in each group.\n\n\nRESULT\nThe result of two-way mixed-design analysis of variance revealed significant health status by position interaction effect for lumbar lordosis (P < 0.001). The main effect of test position on lumbar lordosis was statistically significant (P < 0.001). The lumbar lordosis was significantly greater in the PHE compared to prone-relaxed position in both subjects with and without LBP. The amount of difference in positions was statistically significant between two groups (P < 0.001) and greater change in lumbar lordosis was found in the healthy group compared to the subjects with LBP.\n\n\nCONCLUSIONS\nGreater change in lumbar lordosis during this test may be due to more stiffness in lumbopelvic muscles in the individuals with LBP.", "title": "" }, { "docid": "a3e8a50b38e276d19dc301fcf8818ea1", "text": "Automated diagnosis of skin cancer is an active area of research with different classification methods proposed so far. However, classification models based on insufficient labeled training data can badly influence the diagnosis process if there is no self-advising and semi supervising capability in the model. This paper presents a semi supervised, self-advised learning model for automated recognition of melanoma using dermoscopic images. Deep belief architecture is constructed using labeled data together with unlabeled data, and fine tuning done by an exponential loss function in order to maximize separation of labeled data. In parallel a self-advised SVM algorithm is used to enhance classification results by counteracting the effect of misclassified data. To increase generalization capability and redundancy of the model, polynomial and radial basis function based SA-SVMs and Deep network are trained using training samples randomly chosen via a bootstrap technique. Then the results are aggregated using least square estimation weighting. The proposed model is tested on a collection of 100 dermoscopic images. The variation in classification error is analyzed with respect to the ratio of labeled and unlabeled data used in the training phase. 
The classification performance is compared with some popular classification methods and the proposed model using the deep neural processing outperforms most of the popular techniques including KNN, ANN, SVM and semi supervised algorithms like Expectation maximization and transductive SVM.", "title": "" }, { "docid": "4ee6894fade929db82af9cb62fecc0f9", "text": "Federated learning is a recent advance in privacy protection. In this context, a trusted curator aggregates parameters optimized in decentralized fashion by multiple clients. The resulting model is then distributed back to all clients, ultimately converging to a joint representative model without explicitly having to share the data. However, the protocol is vulnerable to differential attacks, which could originate from any party contributing during federated optimization. In such an attack, a client’s contribution during training and information about their data set is revealed through analyzing the distributed model. We tackle this problem and propose an algorithm for client sided differential privacy preserving federated optimization. The aim is to hide clients’ contributions during training, balancing the trade-off between privacy loss and model performance. Empirical studies suggest that given a sufficiently large number of participating clients, our proposed procedure can maintain client-level differential privacy at only a minor cost in model performance.", "title": "" }, { "docid": "48d778934127343947b494fe51f56a33", "text": "In this paper, we present a simple method for animating natural phenomena such as erosion, sedimentation, and acidic corrosion. We discretize the appropriate physical or chemical equations using finite differences, and we use the results to modify the shape of a solid body. We remove mass from an object by treating its surface as a level set and advecting it inward, and we deposit the chemical and physical byproducts into simulated fluid. Similarly, our technique deposits sediment onto a surface by advecting the level set outward. Our idea can be used for off-line high quality animations as well as interactive applications such as games, and we demonstrate both in this paper.", "title": "" }, { "docid": "a07472c2f086332bf0f97806255cb9d5", "text": "The Learning Analytics Dashboard (LAD) is an application to show students’ online behavior patterns in a virtual learning environment. This supporting tool works by tracking students’ log-files, mining massive amounts of data to find meaning, and visualizing the results so they can be comprehended at a glance. This paper reviews previously developed applications to analyze their features. Based on the implications from the review of previous studies as well as a preliminary investigation on the need for such tools, an early version of the LAD was designed and developed. Also, in order to improve the LAD, a usability test incorporating a stimulus recall interview was conducted with 38 college students in two blended learning classes. Evaluation of this tool was performed in an experimental research setting with a control group and additional surveys were conducted asking students’ about perceived usefulness, conformity, level of understanding of graphs, and their behavioral changes. The results indicated that this newly developed learning analytics tool did not significantly impact on their learning achievement. 
However, lessons learned from the usability and pilot tests support that visualized information impacts on students’ understanding level; and the overall satisfaction with dashboard plays as a covariant that impacts on both the degree of understanding and students’ perceived change of behavior. Taking in the results of the tests and students’ openended responses, a scaffolding strategy to help them understand the meaning of the information displayed was included in each sub section of the dashboard. Finally, this paper discusses future directions in regard to improving LAD so that it better supports students’ learning performance, which might be helpful for those who develop learning analytics applications for students.", "title": "" }, { "docid": "67b5bd59689c325365ac765a17886169", "text": "L-Systems have traditionally been used as a popular method for the modelling of spacefilling curves, biological systems and morphogenesis. In this paper, we adapt string rewriting grammars based on L-Systems into a system for music composition. Representation of pitch, duration and timbre are encoded as grammar symbols, upon which a series of re-writing rules are applied. Parametric extensions to the grammar allow the specification of continuous data for the purposes of modulation and control. Such continuous data is also under control of the grammar. Using non-deterministic grammars with context sensitivity allows the simulation of Nth-order Markov models with a more economical representation than transition matrices and greater flexibility than previous composition models based on finite state automata or Petri nets. Using symbols in the grammar to represent relationships between notes, (rather than absolute notes) in combination with a hierarchical grammar representation, permits the emergence of complex music compositions from a relatively simple grammars.", "title": "" }, { "docid": "81ca5239dbd60a988e7457076aac05d7", "text": "OBJECTIVE\nFrontline health professionals need a \"red flag\" tool to aid their decision making about whether to make a referral for a full diagnostic assessment for an autism spectrum condition (ASC) in children and adults. The aim was to identify 10 items on the Autism Spectrum Quotient (AQ) (Adult, Adolescent, and Child versions) and on the Quantitative Checklist for Autism in Toddlers (Q-CHAT) with good test accuracy.\n\n\nMETHOD\nA case sample of more than 1,000 individuals with ASC (449 adults, 162 adolescents, 432 children and 126 toddlers) and a control sample of 3,000 controls (838 adults, 475 adolescents, 940 children, and 754 toddlers) with no ASC diagnosis participated. Case participants were recruited from the Autism Research Centre's database of volunteers. The control samples were recruited through a variety of sources. Participants completed full-length versions of the measures. The 10 best items were selected on each instrument to produce short versions.\n\n\nRESULTS\nAt a cut-point of 6 on the AQ-10 adult, sensitivity was 0.88, specificity was 0.91, and positive predictive value (PPV) was 0.85. At a cut-point of 6 on the AQ-10 adolescent, sensitivity was 0.93, specificity was 0.95, and PPV was 0.86. At a cut-point of 6 on the AQ-10 child, sensitivity was 0.95, specificity was 0.97, and PPV was 0.94. At a cut-point of 3 on the Q-CHAT-10, sensitivity was 0.91, specificity was 0.89, and PPV was 0.58. 
Internal consistency was >0.85 on all measures.\n\n\nCONCLUSIONS\nThe short measures have potential to aid referral decision making for specialist assessment and should be further evaluated.", "title": "" }, { "docid": "99a4fc6540802ff820fef9ca312cdc1c", "text": "Problem diagnosis is one crucial aspect in the cloud operation that is becoming increasingly challenging. On the one hand, the volume of logs generated in today's cloud is overwhelmingly large. On the other hand, cloud architecture becomes more distributed and complex, which makes it more difficult to troubleshoot failures. In order to address these challenges, we have developed a tool, called LOGAN, that enables operators to quickly identify the log entries that potentially lead to the root cause of a problem. It constructs behavioral reference models from logs that represent the normal patterns. When problem occurs, our tool enables operators to inspect the divergence of current logs from the reference model and highlight logs likely to contain the hints to the root cause. To support these capabilities we have designed and developed several mechanisms. First, we developed log correlation algorithms using various IDs embedded in logs to help identify and isolate log entries that belong to the failed request. Second, we provide efficient log comparison to help understand the differences between different executions. Finally we designed mechanisms to highlight critical log entries that are likely to contain information pertaining to the root cause of the problem. We have implemented the proposed approach in a popular cloud management system, OpenStack, and through case studies, we demonstrate this tool can help operators perform problem diagnosis quickly and effectively.", "title": "" }, { "docid": "211037c38a50ff4169f3538c3b6af224", "text": "In this paper we present a method to obtain a depth map from a single image of a scene by exploiting both image content and user interaction. Assuming that regions with low gradients will have similar depth values, we formulate the problem as an optimization process across a graph, where pixels are considered as nodes and edges between neighbouring pixels are assigned weights based on the image gradient. Starting from a number of userdefined constraints, depth values are propagated between highly connected nodes i.e. with small gradients. Such constraints include, for example, depth equalities and inequalities between pairs of pixels, and may include some information about perspective. This framework provides a depth map of the scene, which is useful for a number of applications.", "title": "" }, { "docid": "5d0a77058d6b184cb3c77c05363c02e0", "text": "For two-class discrimination, Ref. [1] claimed that, when covariance matrices of the two classes were unequal, a (class) unbalanced dataset had a negative effect on the performance of linear discriminant analysis (LDA). Through re-balancing 10 realworld datasets, Ref. [1] provided empirical evidence to support the claim using AUC (Area Under the receiver operating characteristic Curve) as the performance metric. We suggest that such a claim is vague if not misleading, there is no solid theoretical analysis presented in [1], and AUC can lead to a quite different conclusion from that led to by misclassification error rate (ER) on the discrimination performance of LDA for unbalanced datasets. 
Our empirical and simulation studies suggest that, for LDA, the increase of the median of AUC (and thus the improvement of performance of LDA) from re-balancing is relatively small, while, in contrast, the increase of the median of ER (and thus the decline in performance of LDA) from re-balancing is relatively large. Therefore, from our study, there is no reliable empirical evidence to support the claim that a (class) unbalanced data set has a negative effect on the performance of LDA. In addition, re-balancing affects the performance of LDA for datasets with either equal or unequal covariance matrices, indicating that having unequal covariance matrices is not a key reason for the difference in performance between original and re-balanced data.", "title": "" }, { "docid": "dfd88750bc1d42e8cc798d2097426910", "text": "Melanoma is one of the most lethal forms of skin cancer. It occurs on the skin surface and develops from cells known as melanocytes. The same cells are also responsible for benign lesions commonly known as moles, which are visually similar to melanoma in its early stage. If melanoma is treated correctly, it is very often curable. Currently, much research is concentrated on the automated recognition of melanomas. In this paper, we propose an automated melanoma recognition system, which is based on deep learning method combined with so called hand-crafted RSurf features and Local Binary Patterns. The experimental evaluation on a large publicly available dataset demonstrates high classification accuracy, sensitivity, and specificity of our proposed approach when it is compared with other classifiers on the same dataset.", "title": "" } ]
scidocsrr
8b3cd10016f047266b9fd8f9d1a2f111
SAFE: A clean-slate architecture for secure systems
[ { "docid": "0f8bf207201692ad4905e28a2993ef29", "text": "Bluespec System Verilog is an EDL toolset for ASIC and FPGA design offering significantly higher productivity via a radically different approach to high-level synthesis. Many other attempts at high-level synthesis have tried to move the design language towards a more software-like specification of the behavior of the intended hardware. By means of code samples, demonstrations and measured results, we illustrate how Bluespec System Verilog, in an environment familiar to hardware designers, can significantly improve productivity without compromising generated hardware quality.", "title": "" }, { "docid": "60ade549a5d58da43824ba0ddf7ab242", "text": "Existing designs for fine-grained, dynamic information-flow control assume that it is acceptable to terminate the entire system when an incorrect flow is detected-i.e, they give up availability for the sake of confidentiality and integrity. This is an unrealistic limitation for systems such as long-running servers. We identify public labels and delayed exceptions as crucial ingredients for making information-flow errors recoverable in a sound and usable language, and we propose two new error-handling mechanisms that make all errors recoverable. The first mechanism builds directly on these basic ingredients, using not-a-values (NaVs) and data flow to propagate errors. The second mechanism adapts the standard exception model to satisfy the extra constraints arising from information flow control, converting thrown exceptions to delayed ones at certain points. We prove that both mechanisms enjoy the fundamental soundness property of non-interference. Finally, we describe a prototype implementation of a full-scale language with NaVs and report on our experience building robust software components in this setting.", "title": "" } ]
[ { "docid": "b9f0d1d80ba7f8c304a601d179730951", "text": "A critical part of developing a reliable software system is testing its recovery code. This code is traditionally difficult to test in the lab, and, in the field, it rarely gets to run; yet, when it does run, it must execute flawlessly in order to recover the system from failure. In this article, we present a library-level fault injection engine that enables the productive use of fault injection for software testing. We describe automated techniques for reliably identifying errors that applications may encounter when interacting with their environment, for automatically identifying high-value injection targets in program binaries, and for producing efficient injection test scenarios. We present a framework for writing precise triggers that inject desired faults, in the form of error return codes and corresponding side effects, at the boundary between applications and libraries. These techniques are embodied in LFI, a new fault injection engine we are distributing http://lfi.epfl.ch. This article includes a report of our initial experience using LFI. Most notably, LFI found 12 serious, previously unreported bugs in the MySQL database server, Git version control system, BIND name server, Pidgin IM client, and PBFT replication system with no developer assistance and no access to source code. LFI also increased recovery-code coverage from virtually zero up to 60% entirely automatically without requiring new tests or human involvement.", "title": "" }, { "docid": "ce8cabea6fff858da1fb9894860f7c2d", "text": "This thesis investigates artificial agents learning to make strategic decisions in imperfect-information games. In particular, we introduce a novel approach to reinforcement learning from self-play. We introduce Smooth UCT, which combines the game-theoretic notion of fictitious play with Monte Carlo Tree Search (MCTS). Smooth UCT outperformed a classic MCTS method in several imperfect-information poker games and won three silver medals in the 2014 Annual Computer Poker Competition. We develop Extensive-Form Fictitious Play (XFP) that is entirely implemented in sequential strategies, thus extending this prominent game-theoretic model of learning to sequential games. XFP provides a principled foundation for self-play reinforcement learning in imperfect-information games. We introduce Fictitious Self-Play (FSP), a class of sample-based reinforcement learning algorithms that approximate XFP. We instantiate FSP with neuralnetwork function approximation and deep learning techniques, producing Neural FSP (NFSP). We demonstrate that (approximate) Nash equilibria and their representations (abstractions) can be learned using NFSP end to end, i.e. interfacing with the raw inputs and outputs of the domain. NFSP approached the performance of state-of-the-art, superhuman algorithms in Limit Texas Hold’em an imperfect-information game at the absolute limit of tractability using massive computational resources. This is the first time that any reinforcement learning algorithm, learning solely from game outcomes without prior domain knowledge, achieved such a feat.", "title": "" }, { "docid": "be66c05a023ea123a6f32614d2a8af93", "text": "During the past three decades, the issue of processing spectral phase has been largely neglected in speech applications. 
There is no doubt that the interest of speech processing community towards the use of phase information in a big spectrum of speech technologies, from automatic speech and speaker recognition to speech synthesis, from speech enhancement and source separation to speech coding, is constantly increasing. In this paper, we elaborate on why phase was believed to be unimportant in each application. We provide an overview of advancements in phase-aware signal processing with applications to speech, showing that considering phase-aware speech processing can be beneficial in many cases, while it can complement the possible solutions that magnitude-only methods suggest. Our goal is to show that phase-aware signal processing is an important emerging field with high potential in the current speech communication applications. The paper provides an extended and up-to-date bibliography on the topic of phase aware speech processing aiming at providing the necessary background to the interested readers for following the recent advancements in the area. Our review expands the step initiated by our organized special session and exemplifies the usefulness of spectral phase information in a wide range of speech processing applications. Finally, the overview will provide some future work directions.", "title": "" }, { "docid": "685b1471c334c941507ac12eb6680872", "text": "Purpose – The concept of ‘‘knowledge’’ is presented in diverse and sometimes even controversial ways in the knowledge management (KM) literature. The aim of this paper is to identify the emerging views of knowledge and to develop a framework to illustrate the interrelationships of the different knowledge types. Design/methodology/approach – This paper is a literature review to explore how ‘‘knowledge’’ as a central concept is presented and understood in a selected range of KM publications (1990-2004). Findings – The exploration of the knowledge landscape showed that ‘‘knowledge’’ is viewed in four emerging and complementary ways. The ontological, epistemological, commodity, and community views of knowledge are discussed in this paper. The findings show that KM is still a young discipline and therefore it is natural to have different, sometimes even contradicting views of ‘‘knowledge’’ side by side in the literature. Practical implications – These emerging views of knowledge could be seen as opportunities for researchers to provide new contributions. However, this diversity and complexity call for careful and specific clarification of the researchers’ standpoint, for a clear statement of their views of knowledge. Originality/value – This paper offers a framework as a compass for researchers to help their orientation in the confusing and ever changing landscape of knowledge.", "title": "" }, { "docid": "6ad8da8198b1f61dfe0dc337781322d9", "text": "A model of human speech quality perception has been developed to provide an objective measure for predicting subjective quality assessments. The Virtual Speech Quality Objective Listener (ViSQOL) model is a signal based full reference metric that uses a spectro-temporal measure of similarity between a reference and a test speech signal. This paper describes the algorithm and compares the results with PESQ for common problems in VoIP: clock drift, associated time warping and jitter. 
The results indicate that ViSQOL is less prone to underestimation of speech quality in both scenarios than the ITU standard.", "title": "" }, { "docid": "f7bddfb1142605fd6c3a784f454f81eb", "text": "Although the interest of a Web page is strictly related to its content and to the subjective readers' cultural background, a measure of the page authority can be provided that only depends on the topological structure of the Web. PageRank is a noticeable way to attach a score to Web pages on the basis of the Web connectivity. In this article, we look inside PageRank to disclose its fundamental properties concerning stability, complexity of computational scheme, and critical role of parameters involved in the computation. Moreover, we introduce a circuit analysis that allows us to understand the distribution of the page score, the way different Web communities interact each other, the role of dangling pages (pages with no outlinks), and the secrets for promotion of Web pages.", "title": "" }, { "docid": "43bf765a516109b885db5b6d1b873c33", "text": "The attention economy motivates participation in peer-produced sites on the Web like YouTube and Wikipedia. However, this economy appears to break down at work. We studied a large internal corporate blogging community using log files and interviews and found that employees expected to receive attention when they contributed to blogs, but these expectations often went unmet. Like in the external blogosphere, a few people received most of the attention, and many people received little or none. Employees expressed frustration if they invested time and received little or no perceived return on investment. While many corporations are looking to adopt Web-based communication tools like blogs, wikis, and forums, these efforts will fail unless employees are motivated to participate and contribute content. We identify where the attention economy breaks down in a corporate blog community and suggest mechanisms for improvement.", "title": "" }, { "docid": "ea55fffd5ed53588ba874780d9c5083a", "text": "Representation learning is a central challenge across a range of machine learning areas. In reinforcement learning, effective and functional representations have the potential to tremendously accelerate learning progress and solve more challenging problems. Most prior work on representation learning has focused on generative approaches, learning representations that capture all underlying factors of variation in the observation space in a more disentangled or well-ordered manner. In this paper, we instead aim to learn functionally salient representations: representations that are not necessarily complete in terms of capturing all factors of variation in the observation space, but rather aim to capture those factors of variation that are important for decision making – that are “actionable.” These representations are aware of the dynamics of the environment, and capture only the elements of the observation that are necessary for decision making rather than all factors of variation, without explicit reconstruction of the observation. We show how these representations can be useful to improve exploration for sparse reward problems, to enable long horizon hierarchical reinforcement learning, and as a state representation for learning policies for downstream tasks. 
We evaluate our method on a number of simulated environments, and compare it to prior methods for representation learning, exploration, and hierarchical reinforcement learning.", "title": "" }, { "docid": "0d75a194abf88a0cbf478869dc171794", "text": "As a promising way for heterogeneous data analytics, consensus clustering has attracted increasing attention in recent decades. Among various excellent solutions, the co-association matrix based methods form a landmark, which redefines consensus clustering as a graph partition problem. Nevertheless, the relatively high time and space complexities preclude it from wide real-life applications. We, therefore, propose Spectral Ensemble Clustering (SEC) to leverage the advantages of co-association matrix in information integration but run more efficiently. We disclose the theoretical equivalence between SEC and weighted K-means clustering, which dramatically reduces the algorithmic complexity. We also derive the latent consensus function of SEC, which to our best knowledge is the first to bridge co-association matrix based methods to the methods with explicit global objective functions. Further, we prove in theory that SEC holds the robustness, generalizability, and convergence properties. We finally extend SEC to meet the challenge arising from incomplete basic partitions, based on which a row-segmentation scheme for big data clustering is proposed. Experiments on various real-world data sets in both ensemble and multi-view clustering scenarios demonstrate the superiority of SEC to some state-of-the-art methods. In particular, SEC seems to be a promising candidate for big data clustering.", "title": "" }, { "docid": "d75d453181293c92ec9bab800029e366", "text": "For a majority of applications implemented today, the Intermediate Bus Architecture (IBA) has been the preferred power architecture. This power architecture has led to the development of the isolated, semi-regulated DC/DC converter known as the Intermediate Bus Converter (IBC). Fixed ratio Bus Converters that employ a new power topology known as the Sine Amplitude Converter (SAC) offer dramatic improvements in power density, noise reduction, and efficiency over the existing IBC products. As electronic systems continue to trend toward lower voltages with higher currents and as the speed of contemporary loads - such as state-of-the-art processors and memory - continues to increase, the power systems designer is challenged to provide small, cost effective and efficient solutions that offer the requisite performance. Traditional power architectures cannot, in the long run, provide the required performance. Vicor's Factorized Power Architecture (FPA), and the implementation of V·I Chips, provides a revolutionary new and optimal power conversion solution that addresses the challenge in every respect. The technology behind these power conversion engines used in the IBC and V·I Chips is analyzed and contextualized in a system perspective.", "title": "" }, { "docid": "5c349687e507074d8f5653fc0e338cda", "text": "This study analyzes the perceptions which induce customers to purchase over the Internet, testing the moderating effect of e-purchasing experience. We distinguish between two groups: (1) potential e-customers, who are considering making their first e-purchase, and (2) experienced e-customers, who have made at least one e-purchase and are thinking about continuing to do so. 
The perceptions that induce individuals to purchase online for the first time may not be the same as those that produce repurchasing behavior. Our findings demonstrate that customer behavior does not remain stable because the experience acquired from past e-purchases means that perceptions evolve. The relationships between perceptions of e-commerce change with purchasing experience, whilst the influence of Internet experience is stable for all users. The implications are especially interesting for e-commerce providers whose business models depend on e-customer behavior. The analysis of consumer behavior is a key aspect for the success of an e-business. However, the behavior of consumers in the Internet market changes as they acquire e-purchasing experience (Gefen et al., 2003; Yu et al., 2005). The perceptions which induce them to make an initial e-purchase may have different effects on their subsequent decisions or repurchasing behavior because the use of the information technology (IT) may modify certain perceptions and attitudes (Thompson et al. Despite these differences, very little research carried out in the e-commerce field has conducted a separate analysis of the perceptions related to the adoption and to the \" post-adoption \" decisions (Karahanna et al., 1999; Vijayasarathy, 2004). Moreover, hardly any researchers have analyzed the behavior of e-customers as they gain experience (as Taylor and Todd, 1995; Vijayasarathy, 2004 state). Most studies have considered that the low level of development of this new channel meant that the differences between the two decisions were not yet significant, and their principal objective was, therefore, to determine the perceptions which led consumers to adopt the Internet as an alternative shopping channel to the offline market (Chen et al., 2002; Verhagen et al., 2006). Nevertheless, the growth of e-commerce has made it clear that customer behavior has evolved. As in other types of purchase situations (Sheth, 1968; Heilman et al., 2000), customer behavior does not necessarily remain stable over time since the experience acquired from past purchases means that perceptions change (Taylor and Todd, 1995; Yu et al., 2005). When customers repeat their behavior …", "title": "" }, { "docid": "39fdfa5258c2cb22ed2d7f1f5b2afeaf", "text": "Calling for research on automatic oversight for artificial intelligence systems.", "title": "" }, { "docid": "9c887109d71605053ecb1732a1989a35", "text": "In this paper, we develop a new approach called DeepText for text region proposal generation and text detection in natural images via a fully convolutional neural network (CNN). First, we propose the novel inception region proposal network (Inception-RPN), which slides an inception network with multi-scale windows over the top of convolutional feature maps and associates a set of text characteristic prior bounding boxes with each sliding position to generate high recall word region proposals. Next, we present a powerful text detection network that embeds ambiguous text category (ATC) information and multi-level region-of-interest pooling (MLRP) for text and non-text classification and accurate localization refinement. 
Our approach achieves an F-measure of 0.83 and 0.85 on the ICDAR 2011 and 2013 robust text detection benchmarks, outperforming previous state-of-the-art results.", "title": "" }, { "docid": "6ecf5cb70cca991fbefafb739a0a44c9", "text": "Reasoning about objects, relations, and physics is central to human intelligence, and a key goal of artificial intelligence. Here we introduce the interaction network, a model which can reason about how objects in complex systems interact, supporting dynamical predictions, as well as inferences about the abstract properties of the system. Our model takes graphs as input, performs object- and relation-centric reasoning in a way that is analogous to a simulation, and is implemented using deep neural networks. We evaluate its ability to reason about several challenging physical domains: n-body problems, rigid-body collision, and non-rigid dynamics. Our results show it can be trained to accurately simulate the physical trajectories of dozens of objects over thousands of time steps, estimate abstract quantities such as energy, and generalize automatically to systems with different numbers and configurations of objects and relations. Our interaction network implementation is the first general-purpose, learnable physics engine, and a powerful general framework for reasoning about object and relations in a wide variety of complex real-world domains.", "title": "" }, { "docid": "24c2877aff9c4e8441dbbbd4481370b6", "text": "Ramp merging is a critical maneuver for road safety and traffic efficiency. Most of the current automated driving systems developed by multiple automobile manufacturers and suppliers are typically limited to restricted access freeways only. Extending the automated mode to ramp merging zones presents substantial challenges. One is that the automated vehicle needs to incorporate a future objective (e.g. a successful and smooth merge) and optimize a long-term reward that is impacted by subsequent actions when executing the current action. Furthermore, the merging process involves interaction between the merging vehicle and its surrounding vehicles whose behavior may be cooperative or adversarial, leading to distinct merging countermeasures that are crucial to successfully complete the merge. In place of the conventional rule-based approaches, we propose to apply reinforcement learning algorithm on the automated vehicle agent to find an optimal driving policy by maximizing the long-term reward in an interactive driving environment. Most importantly, in contrast to most reinforcement learning applications in which the action space is resolved as discrete, our approach treats the action space as well as the state space as continuous without incurring additional computational costs. Our unique contribution is the design of the Q-function approximation whose format is structured as a quadratic function, by which simple but effective neural networks are used to estimate its coefficients. The results obtained through the implementation of our training platform demonstrate that the vehicle agent is able to learn a safe, smooth and timely merging policy, indicating the effectiveness and practicality of our approach.", "title": "" }, { "docid": "0b191398f6458d8516ff65c74550bd68", "text": "It is now recognized that gut microbiota contributes indispensable roles in safeguarding host health. 
Shrimp is being threatened by newly emerging diseases globally; thus, understanding the driving factors that govern its gut microbiota would facilitate an initial step to reestablish and maintain a “healthy” gut microbiota. This review summarizes the factors that assemble the shrimp gut microbiota, which focuses on the current progresses of knowledge linking the gut microbiota and shrimp health status. In particular, I propose the exploration of shrimp disease pathogenesis and incidence based on the interplay between dysbiosis in the gut microbiota and disease severity. An updated research on shrimp disease toward an ecological perspective is discussed, including host–bacterial colonization, identification of polymicrobial pathogens and diagnosing disease incidence. Further, a simple conceptual model is offered to summarize the interplay among the gut microbiota, external factors, and shrimp disease. Finally, based on the review, current limitations are raised and future studies directed at solving these concerns are proposed. This review is timely given the increased interest in the role of gut microbiota in disease pathogenesis and the advent of novel diagnosis strategies.", "title": "" }, { "docid": "4ea68c4cb9250418853084b60a45582b", "text": "Facial reconstruction is employed in the context of forensic investigation and for creating three-dimensional portraits of people from the past, from ancient Egyptian mummies and bog bodies to digital animations of J. S. Bach. This paper considers a facial reconstruction method (commonly known as the Manchester method) associated with the depiction and identification of the deceased from skeletal remains. Issues of artistic licence and scientific rigour, in relation to soft tissue reconstruction, anatomical variation and skeletal assessment, are discussed. The need for artistic interpretation is greatest where only skeletal material is available, particularly for the morphology of the ears and mouth, and with the skin for an ageing adult. The greatest accuracy is possible when information is available from preserved soft tissue, from a portrait, or from a pathological condition or healed injury.", "title": "" }, { "docid": "a8ff7afc96f0bf65ce80131617d5e156", "text": "This paper presents a new algorithm for force directed graph layout on the GPU. The algorithm, whose goal is to compute layouts accurately and quickly, has two contributions. The first contribution is proposing a general multi-level scheme, which is based on spectral partitioning. The second contribution is computing the layout on the GPU. Since the GPU requires a data parallel programming model, the challenge is devising a mapping of a naturally unstructured graph into a well-partitioned structured one. This is done by computing a balanced partitioning of a general graph. This algorithm provides a general multi-level scheme, which has the potential to be used not only for computation on the GPU, but also on emerging multi-core architectures. The algorithm manages to compute high quality layouts of large graphs in a fraction of the time required by existing algorithms of similar quality. An application for visualization of the topologies of ISP (Internet service provider) networks is presented.", "title": "" }, { "docid": "5b5e69bd93f6b809c29596a54c1565fc", "text": "Variety and veracity are two distinct characteristics of large-scale and heterogeneous data. It has been a great challenge to efficiently represent and process big data with a unified scheme. 
In this paper, a unified tensor model is proposed to represent the unstructured, semistructured, and structured data. With tensor extension operator, various types of data are represented as subtensors and then are merged to a unified tensor. In order to extract the core tensor which is small but contains valuable information, an incremental high order singular value decomposition (IHOSVD) method is presented. By recursively applying the incremental matrix decomposition algorithm, IHOSVD is able to update the orthogonal bases and compute the new core tensor. Analyzes in terms of time complexity, memory usage, and approximation accuracy of the proposed method are provided in this paper. A case study illustrates that approximate data reconstructed from the core set containing 18% elements can guarantee 93% accuracy in general. Theoretical analyzes and experimental results demonstrate that the proposed unified tensor model and IHOSVD method are efficient for big data representation and dimensionality reduction.", "title": "" }, { "docid": "58cce77789fc7b5970f3b387ce89e8c4", "text": "We propose a series of recurrent and contextual neural network models for multiple choice visual question answering on the Visual7W dataset. Motivated by divergent trends in model complexities in the literature, we explore the balance between model expressiveness and simplicity by studying incrementally more complex architectures. We start with LSTM-encoding of input questions and answers; build on this with context generation by LSTM-encodings of neural image and question representations and attention over images; and evaluate the diversity and predictive power of our models and the ensemble thereof. All models are evaluated against a simple baseline inspired by the current state-of-the-art, consisting of involving simple concatenation of bag-of-words and CNN representations for the text and images, respectively. Generally, we observe marked variation in image-reasoning performance between our models not obvious from their overall performance, as well as evidence of dataset bias. Our standalone models achieve accuracies up to 64.6%, while the ensemble of all models achieves the best accuracy of 66.67%, within 0.5% of the current state-of-the-art for Visual7W.", "title": "" } ]
scidocsrr
3c1eba378ff8fc42ac299f89cbc581b1
Collaboratively Improving Topic Discovery and Word Embeddings by Coordinating Global and Local Contexts
[ { "docid": "62e386315d2f4b8ed5ca3bcce71c4e83", "text": "Continuous space word embeddings learned from large, unstructured corpora have been shown to be effective at capturing semantic regularities in language. In this paper we replace LDA’s parameterization of “topics” as categorical distributions over opaque word types with multivariate Gaussian distributions on the embedding space. This encourages the model to group words that are a priori known to be semantically related into topics. To perform inference, we introduce a fast collapsed Gibbs sampling algorithm based on Cholesky decompositions of covariance matrices of the posterior predictive distributions. We further derive a scalable algorithm that draws samples from stale posterior predictive distributions and corrects them with a Metropolis–Hastings step. Using vectors learned from a domain-general corpus (English Wikipedia), we report results on two document collections (20-newsgroups and NIPS). Qualitatively, Gaussian LDA infers different (but still very sensible) topics relative to standard LDA. Quantitatively, our technique outperforms existing models at dealing with OOV words in held-out documents.", "title": "" } ]
[ { "docid": "b8b16474ba00399b44b83a28893d5f71", "text": "PURPOSE\nTo compare the aqueous humor levels of proinflammatory and angiogenic factors of diabetic patients with and without retinopathy.\n\n\nMETHODS\nAqueous humor was collected at the start of cataract surgery from diabetic subjects and non-diabetic controls. The presence and severity of diabetic retinopathy were graded with fundus examination. Levels of 22 different inflammatory and angiogenic cytokines and chemokines were compared.\n\n\nRESULTS\nAqueous humor samples from 47 diabetic patients (20 without retinopathy, 27 with retinopathy) and 24 non-diabetic controls were included. Interleukin (IL)-2, IL-10, IL-12, interferon-alpha (IFN-α), and tumor necrosis factor (TNF)-α were measurable in significantly fewer diabetic samples, and where measurable, were at lower levels than in non-diabetic controls. IL-6 was measurable in significantly more diabetic samples, and the median levels were significantly higher in the eyes with retinopathy than the eyes without retinopathy and the non-diabetic eyes. The vascular endothelial growth factor (VEGF) level was significantly higher in the diabetic eyes with and without retinopathy compared to the non-diabetic controls. The IL-6 level positively correlated with the monocyte chemotactic protein-1 (CCL2) and interleukin-8 (CXCL8) levels and negatively with the TNF-α level. The VEGF level negatively correlated with the IL-12 and TNF-α levels.\n\n\nCONCLUSIONS\nThe aqueous humor cytokine profile of diabetic patients without retinopathy was similar to that of patients with diabetic retinopathy. These cytokines may be useful biomarkers for early detection and prognosis of diabetic retinopathy. Compared to diabetic patients without retinopathy, only the IL-6 and VEGF levels were significantly higher in diabetic patients with retinopathy.", "title": "" }, { "docid": "fd2d04af3b259a433eb565a41b11ffbd", "text": "OVERVIEW • We develop novel orthogonality regularizations on training deep CNNs, by borrowing ideas and tools from sparse optimization. • These plug-and-play regularizations can be conveniently incorporated into training almost any CNN without extra hassle. • The proposed regularizations can consistently improve the performances of baseline deep networks on CIFAR-10/100, ImageNet and SVHN datasets, based on intensive empirical experiments, as well as accelerate/stabilize the training curves. • The proposed orthogonal regularizations outperform existing competitors.", "title": "" }, { "docid": "0d5b27e9a3ff01b796dc194c51b067f7", "text": "Automatic speech recognition (ASR) on video data naturally has access to two modalities: audio and video. In previous work, audio-visual ASR, which leverages visual features to help ASR, has been explored on restricted domains of videos. This paper aims to extend this idea to open-domain videos, for example videos uploaded to YouTube. We achieve this by adopting a unified deep learning approach. First, for the visual features, we propose to apply segment(utterance-) level features, instead of highly restrictive frame-level features. These visual features are extracted using deep learning architectures which have been pre-trained on computer vision tasks, e.g., object recognition and scene labeling. Second, the visual features are incorporated into ASR under deep learning based acoustic modeling. In addition to simple feature concatenation, we also apply an adaptive training framework to incorporate visual features in a more flexible way. 
On a challenging video transcribing task, audio-visual ASR using our proposed approach gets notable improvements in terms of word error rates (WERs), compared to ASR merely using speech features.", "title": "" }, { "docid": "6226b650540d812b6c40939a582331ef", "text": "With an increasingly mobile society and the worldwide deployment of mobile and wireless networks, the wireless infrastructure can support many current and emerging healthcare applications. This could fulfill the vision of “Pervasive Healthcare” or healthcare to anyone, anytime, and anywhere by removing locational, time and other restraints while increasing both the coverage and the quality. In this paper, we present applications and requirements of pervasive healthcare, wireless networking solutions and several important research problems. The pervasive healthcare applications include pervasive health monitoring, intelligent emergency management system, pervasive healthcare data access, and ubiquitous mobile telemedicine. One major application in pervasive healthcare, termed comprehensive health monitoring is presented in significant details using wireless networking solutions of wireless LANs, ad hoc wireless networks, and, cellular/GSM/3G infrastructureoriented networks.Many interesting challenges of comprehensive wireless health monitoring, including context-awareness, reliability, and, autonomous and adaptable operation are also presented along with several high-level solutions. Several interesting research problems have been identified and presented for future research.", "title": "" }, { "docid": "756b25456494b3ece9b240ba3957f91c", "text": "In this paper we introduce the task of fact checking, i.e. the assessment of the truthfulness of a claim. The task is commonly performed manually by journalists verifying the claims made by public figures. Furthermore, ordinary citizens need to assess the truthfulness of the increasing volume of statements they consume. Thus, developing fact checking systems is likely to be of use to various members of society. We first define the task and detail the construction of a publicly available dataset using statements fact-checked by journalists available online. Then, we discuss baseline approaches for the task and the challenges that need to be addressed. Finally, we discuss how fact checking relates to mainstream natural language processing tasks and can stimulate further research.", "title": "" }, { "docid": "6bc2837d4d1da3344f901a6d7d8502b5", "text": "Many researchers and professionals have reported nonsubstance addiction to online entertainments in adolescents. However, very few scales have been designed to assess problem Internet use in this population, in spite of their high exposure and obvious vulnerability. The aim of this study was to review the currently available scales for assessing problematic Internet use and to validate a new scale of this kind for use, specifically in this age group, the Problematic Internet Entertainment Use Scale for Adolescents. The research was carried out in Spain in a gender-balanced sample of 1131 high school students aged between 12 and 18 years. Psychometric analyses showed the scale to be unidimensional, with excellent internal consistency (Cronbach's alpha of 0.92), good construct validity, and positive associations with alternative measures of maladaptive Internet use. 
This self-administered scale can rapidly measure the presence of symptoms of behavioral addiction to online videogames and social networking sites, as well as their degree of severity. The results estimate the prevalence of this problematic behavior in Spanish adolescents to be around 5 percent.", "title": "" }, { "docid": "66044816ca1af0198acd27d22e0e347e", "text": "BACKGROUND\nThe Close Kinetic Chain Upper Extremity Stability Test (CKCUES test) is a low cost shoulder functional test that could be considered as a complementary and objective clinical outcome for shoulder performance evaluation. However, its reliability was tested only in recreational athletes' males and there are no studies comparing scores between sedentary and active samples. The purpose was to examine inter and intrasession reliability of CKCUES Test for samples of sedentary male and female with (SIS), for samples of sedentary healthy male and female, and for male and female samples of healthy upper extremity sport specific recreational athletes. Other purpose was to compare scores within sedentary and within recreational athletes samples of same gender.\n\n\nMETHODS\nA sample of 108 subjects with and without SIS was recruited. Subjects were tested twice, seven days apart. Each subject performed four test repetitions, with 45 seconds of rest between them. The last three repetitions were averaged and used to statistical analysis. Intraclass Correlation Coefficient ICC2,1 was used to assess intrasession reliability of number of touches score and ICC2,3 was used to assess intersession reliability of number of touches, normalized score, and power score. Test scores within groups of same gender also were compared. Measurement error was determined by calculating the Standard Error of the Measurement (SEM) and Minimum detectable change (MDC) for all scores.\n\n\nRESULTS\nThe CKCUES Test showed excellent intersession reliability for scores in all samples. Results also showed excellent intrasession reliability of number of touches for all samples. Scores were greater in active compared to sedentary, with exception of power score. All scores were greater in active compared to sedentary and SIS males and females. SEM ranged from 1.45 to 2.76 touches (based on a 95% CI) and MDC ranged from 2.05 to 3.91(based on a 95% CI) in subjects with and without SIS. At least three touches are needed to be considered a real improvement on CKCUES Test scores.\n\n\nCONCLUSION\nResults suggest CKCUES Test is a reliable tool to evaluate upper extremity functional performance for sedentary, for upper extremity sport specific recreational, and for sedentary males and females with SIS.", "title": "" }, { "docid": "eb9e13fdc7a5a673e5a39b996e4f05db", "text": "This paper studies a multiple-input dc-dc converter realized from a standard single ended primary inductor converter topology. The proposed configuration allows the integration of different distributed generation sources into a common DC main bus. This work discusses the converter topology, basic dynamic equations, and explains its basic operation. Stability characteristics are also explored with linear methods. Theoretical analyses are verified with simulations and experimental results.", "title": "" }, { "docid": "3ed15a3d4ebb0589f5bedec8a13ca6a0", "text": "Discovering the structure underlying observed data is a recurring problem in machine learning with important applications in neuroscience. It is also a primary function of the brain. 
When data can be actively collected in the context of a closed action-perception loop, behavior becomes a critical determinant of learning efficiency. Psychologists studying exploration and curiosity in humans and animals have long argued that learning itself is a primary motivator of behavior. However, the theoretical basis of learning-driven behavior is not well understood. Previous computational studies of behavior have largely focused on the control problem of maximizing acquisition of rewards and have treated learning the structure of data as a secondary objective. Here, we study exploration in the absence of external reward feedback. Instead, we take the quality of an agent's learned internal model to be the primary objective. In a simple probabilistic framework, we derive a Bayesian estimate for the amount of information about the environment an agent can expect to receive by taking an action, a measure we term the predicted information gain (PIG). We develop exploration strategies that approximately maximize PIG. One strategy based on value-iteration consistently learns faster than previously developed reward-free exploration strategies across a diverse range of environments. Psychologists believe the evolutionary advantage of learning-driven exploration lies in the generalized utility of an accurate internal model. Consistent with this hypothesis, we demonstrate that agents which learn more efficiently during exploration are later better able to accomplish a range of goal-directed tasks. We will conclude by discussing how our work elucidates the explorative behaviors of animals and humans, its relationship to other computational models of behavior, and its potential application to experimental design, such as in closed-loop neurophysiology studies.", "title": "" }, { "docid": "e9d853505b1769992f6d7ffe5a0523e5", "text": "In this article we present Tuio, a simple yet versatile protocol designed specifically to meet the requirements of table-top tangible user interfaces. Inspired by the idea of interconnecting various existing table interfaces such as the reacTable* [1], being developed in Barcelona and the tDesk [2] from Bielefeld, this protocol defines common properties of controller objects on the table surface as well as of finger and hand gestures performed by the user. Currently this protocol has been implemented within a fiducial marker-based computer vision engine developed for the reacTable* project. This fast and robust computer vision engine is based on the original d-touch concept [3], which is also included as an alternative to the newer fiducial tracking engine. The computer vision framework has been implemented on various standard platforms and can be extended with additional sensor components. We are currently working on the tracking of finger-tips for gestural control within the table interface. The Tuio protocol has been implemented using OpenSound Control [4] and is therefore usable on any platform supporting this protocol. At the moment we have working implementations for Java, C++, PureData, Max/MSP, SuperCollider and Flash. 1 General Observations This protocol definition is an attempt to provide a general and versatile communication interface between tangible table-top controller interfaces and underlying application layers. It was designed to meet the needs of table-top interactive surfaces, where the user is able to manipulate a set of objects. 
These objects are tracked by a sensor system and can be identified and located in position and orientation on the table surface. Additionally we defined a special cursor object, which doesn’t have a unique ID and doesn’t provide rotation information. The protocol’s flexible design offers methods for selecting which information will be sent. This flexibility is provided without affecting existing interfaces, or requiring re-implementation to maintain compatibility. 2 Implementation Details The Tuio protocol defines two main classes of messages: set messages and alive messages. Set messages are used to communicate information about an object’s state such as position, orientation, and other recognized states. Alive messages indicate the current set of objects present on the surface using a list of unique session IDs. To avoid possible errors evolving out of packet loss, no explicit add or remove messages are included in the Tuio-protocol. The receiver deduces object lifetimes by examining the difference between sequential alive messages. In addition to set and alive messages, fseq messages are defined to uniquely tag each update step with a unique frame sequence ID. To summarize: – object parameters are sent after state change using a set message – on object removal an alive message is sent – the client deduces object addition and removal from set and alive messages – fseq messages associate a unique frame id with a set of set and alive messages 2.1 Efficiency & Reliability In order to provide low latency communication our implementation of the Tuio protocol uses UDP transport. When using UDP the possibility exists that some packets will be lost. Therefore, our implementation of the Tuio protocol includes redundant information to correct possible lost packets, while maintaining an efficient usage of the channel. An alternative TCP connection would assure the secure transport but at the cost of higher latency. For efficiency reasons set messages are packed into a bundle to completely use the space provided by a UDP packet. Each bundle also includes a redundant alive message to allow for the possibility of packet loss. For larger object sets a series of packets, each including an alive message are transmitted. When the surface is quiescent, alive messages are sent at a fixed rate dependent on the channel quality, for example once every second, to ensure that the receiver eventually acquires a consistent view of the set of alive objects. The state of each alive but unchanged object is periodically resent with additional set messages. This redundant information is resent at a lower rate, and includes only a subset of the unchanged objects at each update. The subset is continuously cycled so that each object is periodically addressed. Finally, each packet is marked with a frame sequence ID (fseq) message: an increasing number which is the same for all packets containing data acquired at the same time. This allows the client to maintain consistency by identifying and dropping out-of-order packets. 
To summarize: – set messages are bundled to fully utilize UDP packets – each bundle of set messages includes an alive message containing the session IDs of all currently alive tangible objects – when the surface is quiescent the alive message is resent periodically – the state of a cycling subset of alive but unchanged objects is continuously resent via redundant set messages – each bundle contains a frame sequence (fseq) message It should be noted that the retransmission semantics described here are only one possible interpretation of the protocol. Other possible methods include: (1) weighting the frequency of retransmission according to recency of value changes using a logarithmic back-off scheme and, (2) trimming the set of values to be retransmitted using asynchronous acknowledgments from the client.", "title": "" }, { "docid": "fa68493c999a154dfc8638aa27255e93", "text": "We develop a kernel density estimation method for estimating the density of points on a network and implement the method in the GIS environment. This method could be applied to, for instance, finding 'hot spots' of traffic accidents, street crimes or leakages in gas and oil pipe lines. We first show that the application of the ordinary two-dimensional kernel method to density estimation on a network produces biased estimates. Second, we formulate a 'natural' extension of the univariate kernel method to density estimation on a network, and prove that its estimator is biased; in particular, it overestimates the densities around nodes. Third, we formulate an unbiased discontinuous kernel function on a network, and fourth, an unbiased continuous kernel function on a network. Fifth, we develop computational methods for these kernels and derive their computational complexity. We also develop a plug-in tool for operating these methods in the GIS environment. Sixth, an application of the proposed methods to the density estimation of bag-snatches on streets is illustrated. Lastly, we summarize the major results and describe some suggestions for the practical use of the proposed methods.", "title": "" }, { "docid": "91d3008dcd6c351d6cc0187c59cad8df", "text": "Peer-to-peer markets such as eBay, Uber, and Airbnb allow small suppliers to compete with traditional providers of goods or services. We view the primary function of these markets as making it easy for buyers to find sellers and engage in convenient, trustworthy transactions. We discuss elements of market design that make this possible, including search and matching algorithms, pricing, and reputation systems. We then develop a simple model of how these markets enable entry by small or flexible suppliers, and the resulting impact on existing firms. Finally, we consider the regulation of peer-to-peer markets, and the economic arguments for different approaches to licensing and certification, data and employment regulation.", "title": "" }, { "docid": "6886b42b7624d2a47466d7356973f26c", "text": "Conventional on-off keyed signals, such as return-to-zero (RZ) and nonreturn-to-zero (NRZ) signals are susceptible to cross-gain modulation (XGM) in semiconductor optical amplifiers (SOAs) due to pattern effect. 
In this letter, XGM effect of Manchester-duobinary, RZ differential phase-shift keying (RZ-DPSK), NRZ-DPSK, RZ, and NRZ signals in SOAs were compared. The experimental results confirmed the reduction of crosstalk penalty in SOAs by using Manchester-duobinary signals", "title": "" }, { "docid": "89f0034e6ba61fde368087773dc2f922", "text": "The importance of reflection and reflective practice are frequently noted in the literature; indeed, reflective capacity is regarded by many as an essential characteristic for professional competence. Educators assert that the emergence of reflective practice is part of a change that acknowledges the need for students to act and to think professionally as an integral part of learning throughout their courses of study, integrating theory and practice from the outset. Activities to promote reflection are now being incorporated into undergraduate, postgraduate and continuing medical education, and across a variety of health professions. The evidence to support and inform these curricular interventions and innovations remains largely theoretical. Further, the literature is dispersed across several fields, and it is unclear which approaches may have efficacy or impact. We, therefore, designed a literature review to evaluate the existing evidence about reflection and reflective practice and their utility in health professional education. Our aim was to understand the key variables influencing this educational process, identify gaps in the evidence, and to explore any implications for educational practice and research.", "title": "" }, { "docid": "43db7c431cac1afd33f48774ee0dbc61", "text": "We present a diff algorithm for XML data. This work is motivated by the support for change control in the context of the Xyleme project that is investigating dynamic warehouses capable of storing massive volume of XML data. Because of the context, our algorithm has to be very efficient in terms of speed and memory space even at the cost of some loss of “quality”. Also, it considers, besides insertions, deletions and updates (standard in diffs), a move operation on subtrees that is essential in the context of XML. Intuitively, our diff algorithm uses signatures to match (large) subtrees that were left unchanged between the old and new versions. Such exact matchings are then possibly propagated to ancestors and descendants to obtain more matchings. It also uses XML specific information such as ID attributes. We provide a performance analysis of the algorithm. We show that it runs in average in linear time vs. quadratic time for previous algorithms. We present experiments on synthetic data that confirm the analysis. Since this problem is NPhard, the linear time is obtained by trading some quality. We present experiments (again on synthetic data) that show that the output of our algorithm is reasonably close to the “optimal” in terms of quality. Finally we present experiments on a small sample of XML pages found on the Web.", "title": "" }, { "docid": "a5fae52eeb8ca38d99091d72c91e1153", "text": "Machine learning is a popular approach to signatureless malware detection because it can generalize to never-beforeseen malware families and polymorphic strains. This has resulted in its practical use for either primary detection engines or supplementary heuristic detections by anti-malware vendors. Recent work in adversarial machine learning has shown that models are susceptible to gradient-based and other attacks. 
In this whitepaper, we summarize the various attacks that have been proposed for machine learning models in information security, each of which requires the adversary to have some degree of knowledge about the model under attack. Importantly, when applied to attacking a machine learning malware classifier based on static features for Windows portable executable (PE) files, these previously proposed attack methodologies may break the format or functionality of the malware. We investigate a more general framework for attacking static PE anti-malware engines based on reinforcement learning, which models more realistic attacker conditions and subsequently provides much more modest evasion rates. A reinforcement learning (RL) agent is equipped with a set of functionality-preserving operations that it may perform on the PE file. It learns through a series of games played against the anti-malware engine which sequence of operations is most likely to result in evasion for a given malware sample. Given the general framework, it is not surprising that the evasion rates are modest. However, the resulting RL agent can succinctly summarize blind spots of the anti-malware model. Additionally, evasive variants generated by the agent may be used to harden the machine learning anti-malware engine via adversarial training.", "title": "" }, { "docid": "0bc40c2f559a8daa37fbf2026db2f411", "text": "A novel algorithm for calculating the QR decomposition (QRD) of a polynomial matrix is proposed. The algorithm operates by applying a series of polynomial Givens rotations to transform a polynomial matrix into an upper-triangular polynomial matrix and, therefore, amounts to a generalisation of the conventional Givens method for formulating the QRD of a scalar matrix. A simple example is given to demonstrate the algorithm, but also illustrates two clear advantages of this algorithm when compared to an existing method for formulating the decomposition. Firstly, it does not demonstrate the same unstable behaviour that is sometimes observed with the existing algorithm and secondly, it typically requires fewer iterations to converge. The potential application of the decomposition is highlighted in terms of broadband multi-input multi-output (MIMO) channel equalisation.", "title": "" }, { "docid": "469d83dd9996ca27217907362f44304c", "text": "Although cells in many brain regions respond to reward, the cortical-basal ganglia circuit is at the heart of the reward system. The key structures in this network are the anterior cingulate cortex, the orbital prefrontal cortex, the ventral striatum, the ventral pallidum, and the midbrain dopamine neurons. In addition, other structures, including the dorsal prefrontal cortex, amygdala, hippocampus, thalamus, and lateral habenular nucleus, and specific brainstem structures such as the pedunculopontine nucleus, and the raphe nucleus, are key components in regulating the reward circuit. Connectivity between these areas forms a complex neural network that mediates different aspects of reward processing. Advances in neuroimaging techniques allow better spatial and temporal resolution. 
These studies now demonstrate that human functional and structural imaging results map increasingly close to primate anatomy.", "title": "" }, { "docid": "a38e20a392e7f03509e29839196628d5", "text": "We investigate the hypothesis that the combination of three related innovations—1) information technology (IT), 2) complementary workplace reorganization, and 3) new products and services—constitute a significant skill-biased technical change affecting labor demand in the United States. Using detailed firm-level data, we find evidence of complementarities among all three of these innovations in factor demand and productivity regressions. In addition, firms that adopt these innovations tend to use more skilled labor. The effects of IT on labor demand are greater when IT is combined with the particular organizational investments we identify, highlighting the importance of IT-enabled organizational change.", "title": "" }, { "docid": "e303eddacfdce272b8e71dc30a507020", "text": "As new media are becoming daily fare, Internet addiction appears as a potential problem in adolescents. From the reported negative consequences, it appears that Internet addiction can have a variety of detrimental outcomes for young people that may require professional intervention. Researchers have now identified a number of activities and personality traits associated with Internet addiction. This study aimed to synthesise previous findings by (i) assessing the prevalence of potential Internet addiction in a large sample of adolescents, and (ii) investigating the interactions between personality traits and the usage of particular Internet applications as risk factors for Internet addiction. A total of 3,105 adolescents in the Netherlands filled out a self-report questionnaire including the Compulsive Internet Use Scale and the Quick Big Five Scale. Results indicate that 3.7% of the sample were classified as potentially being addicted to the Internet. The use of online gaming and social applications (online social networking sites and Twitter) increased the risk for Internet addiction, whereas agreeableness and resourcefulness appeared as protective factors in high frequency online gamers. The findings support the inclusion of ‘Internet addiction’ in the DSM-V. Vulnerability and resilience appear as significant aspects that require consideration in", "title": "" } ]
scidocsrr
d93d4abe5083259bf8f398a2c19cac31
Design Guidelines for Spatial Modulation
[ { "docid": "c74b93fff768f024b921fac7f192102d", "text": "Motivated by information-theoretic considerations, we pr opose a signalling scheme, unitary spacetime modulation, for multiple-antenna communication links. This modulati on s ideally suited for Rayleigh fast-fading environments, since it does not require the rec iv r to know or learn the propagation coefficients. Unitary space-time modulation uses constellations of T M space-time signals f `; ` = 1; : : : ; Lg, whereT represents the coherence interval during which the fading i s approximately constant, and M < T is the number of transmitter antennas. The columns of each ` are orthonormal. When the receiver does not know the propagation coefficients, which between pa irs of transmitter and receiver antennas are modeled as statistically independent, this modulation per forms very well either when the SNR is high or whenT M . We design some multiple-antenna signal constellations and simulate their effectiveness as measured by bit error probability with maximum likelihood decoding. We demonstrate that two antennas have a 6 dB diversity gain over one antenna at 15 dB SNR. Index Terms —Multi-element antenna arrays, wireless communications, channel coding, fading channels, transmitter and receiver diversity, space-time modu lation", "title": "" } ]
[ { "docid": "d27735fc52e407e4b5e1b3fd7296ff8e", "text": "The ACL Anthology Network (AAN)1 is a comprehensive manually curated networked database of citations and collaborations in the field of Computational Linguistics. Each citation edge in AAN is associated with one or more citing sentences. A citing sentence is one that appears in a scientific article and contains an explicit reference to another article. In this paper, we shed the light on the usefulness of AAN citing sentences for understanding research trends and summarizing previous discoveries and contributions. We also propose and motivate several different uses and applications of citing sentences.", "title": "" }, { "docid": "d3c811ec795c04005fb04cdf6eec6b0e", "text": "We present a new replay-based method of continual classification learning that we term \"conditional replay\" which generates samples and labels together by sampling from a distribution conditioned on the class. We compare conditional replay to another replay-based continual learning paradigm (which we term \"marginal replay\") that generates samples independently of their class and assigns labels in a separate step. The main improvement in conditional replay is that labels for generated samples need not be inferred, which reduces the margin for error in complex continual classification learning tasks. We demonstrate the effectiveness of this approach using novel and standard benchmarks constructed from MNIST and Fashion MNIST data, and compare to the regularization-based EWC method (Kirkpatrick et al., 2016; Shin et al., 2017).", "title": "" }, { "docid": "2519ef6995b6345d2131053619d5fc81", "text": "A power and area efficient continuous-time inputfeedforward delta-sigma modulator (DSM) structure is proposed. The coefficients are optimized to increase the input range and reduce the power. The feedforward paths and the summer are embedded into the quantizer, hence the circuit is simplified, and the power consumption and area are reduced. The prototype chip, fabricated in a 0.13-µm CMOS technology, achieves a 68-dB DR (Dynamic Range) and 66.1-dB SNDR (signal-to-noise-and-distortion ratio) over a 1.25-MHz signal bandwidth with a 160-MHz clock. The power consumption of the modulator is 2.7 mW under a 1.2-V supply, and the chip core area is 0.082mm2.", "title": "" }, { "docid": "704cad33eed2b81125f856c4efbff4fa", "text": "In order to realize missile real-time change flight trajectory, three-loop autopilot is setting up. The structure characteristics, autopilot model, and control parameters design method were researched. Firstly, this paper introduced the 11th order three-loop autopilot model. With the principle of systems reduce model order, the 5th order model was deduced. On that basis, open-loop frequency characteristic and closed-loop frequency characteristic were analyzed. The variables of velocity ratio, dynamic pressure ratio and elevator efficiency ratio were leading to correct system nonlinear. And then autopilot gains design method were induced. System flight simulations were done, and result shows that autopilot gains played a good job in the flight trajectory, autopilot satisfied the flight index.", "title": "" }, { "docid": "a470aa1ba955cdb395b122daf2a17b6a", "text": "Many real-world sequential decision making problems are partially observable by nature, and the environment model is typically unknown. 
Consequently, there is great need for reinforcement learning methods that can tackle such problems given only a stream of rewards and incomplete and noisy observations. In this paper, we propose deep variational reinforcement learning (DVRL), which introduces an inductive bias that allows an agent to learn a generative model of the environment and perform inference in that model to effectively aggregate the available information. We develop an n-step approximation to the evidence lower bound (ELBO), allowing the model to be trained jointly with the policy. This ensures that the latent state representation is suitable for the control task. In experiments on Mountain Hike and flickering Atari we show that our method outperforms previous approaches relying on recurrent neural networks to encode the past.", "title": "" }, { "docid": "6718b56c63e1f7cc73b38c620f5953d7", "text": "The design of a CMOS 22-29-GHz pulse-radar receiver (RX) front-end for ultra-wideband automotive radar sensors is presented. The chip includes a low-noise amplifier, in-phase/quadrature mixers, a quadrature voltage-controlled oscillator (QVCO), pulse formers, and baseband variable-gain amplifiers. Fabricated in a 0.18-mum CMOS process, the RX front-end chip occupies a die area of 3 mm2. On-wafer measurements show a conversion gain of 35-38.1 dB, a noise figure of 5.5-7.4 dB, and an input return loss less than -14.5 dB in the 22-29-GHz automotive radar band. The phase noise of the constituent QVCO is -107 dBc/Hz at 1-MHz offset from a center frequency of 26.5 GHz. The total dc power dissipation of the RX including output buffers is 131 mW.", "title": "" }, { "docid": "27a11e4334850cde5600fc1fde98cfa3", "text": "Active learning techniques have gained popularity to reduce human effort in labeling data instances for inducing a classifier. When faced with large amounts of unlabeled data, such algorithms automatically identify the exemplar instances for manual annotation. More recently, there have been attempts towards a batch mode form of active learning, where a batch of data points is simultaneously selected from an unlabeled set. In this paper, we propose two novel batch mode active learning (BMAL) algorithms: BatchRank and BatchRand. We first formulate the batch selection task as an NP-hard optimization problem; we then propose two convex relaxations, one based on linear programming and the other based on semi-definite programming to solve the batch selection problem. Finally, a deterministic bound is derived on the solution quality for the first relaxation and a probabilistic bound for the second. To the best of our knowledge, this is the first research effort to derive mathematical guarantees on the solution quality of the BMAL problem. Our extensive empirical studies on 15 binary, multi-class and multi-label challenging datasets corroborate that the proposed algorithms perform at par with the state-of-the-art techniques, deliver high quality solutions and are robust to real-world issues like label noise and class imbalance.", "title": "" }, { "docid": "71100c87c7ce1fd246f7924ff8690583", "text": "Predicting acute hypotensive episode (AHE) in patients in emergency rooms and in intensive care units (ICU) is a difficult challenge. 
As it is well accepted that physiological compensatory adaptations to circulatory shock involve blood flow redistribution and increase in sympathetic stimulation, we recently investigated if galvanic skin response (GSR) or electro-dermal activity (EDA), a measure of sympathetic stimulation, could give information about the impending danger of acute hypotensive episode or circulatory collapse (Subramanya and Mudol, 2012). In this current study, a low-cost wearable device was developed and tested to help progress towards a system for predicting blood pressure (BP) and cardiovascular dynamics. In a pilot study, we examined hypotheses about the relation between GSR values and four BP indexes (systolic BP, diastolic BP, mean arterial pressure and pulse pressure) in apparently healthy human volunteers before and immediately after treadmill exercise. All four BP indexes had significant relationship with GSR, with pulse pressure possibly the strongest predictor of variations in the GSR and vice-versa. This paper opens up opportunities for future investigations to evaluate the utility of continuous monitoring of GSR to forecast imminent cardiovascular collapse, AHE and shock, and could have far-reaching implications for ICU, trauma and critical care management.", "title": "" }, { "docid": "327269bae688715cafb872c1f3c6f1e9", "text": "The modified Ashworth scale (MAS) is the most widely used measurement technique to assess levels of spasticity. In MAS, the evaluator graduates spasticity considering his/her subjective analysis of the muscular endurance during passive stretching. Therefore, it is a subjective scale. Mechanomyography (MMG) allows registering the vibrations generated by muscle contraction and stretching events that propagate through the tissue until the surface of the skin. With this in mind, this study aimed to investigate possible correlations between MMG signal and muscle spasticity levels determined by MAS. We evaluated 34 limbs considered spastic by MAS, including upper and lower limbs of 22 individuals of both sexes. Simultaneously, the MMG signals of the spastic muscle group (agonists) were acquired. The features investigated involved, in the time domain, the median energy (MMGME) of the MMG Z-axis (perpendicular to the muscle fibers) and, in the frequency domain, the median frequency (MMGmf). The Kruskal-Wallis test (p<;0.001) determined that there were significant differences between intergroup MAS spasticity levels for MMGme. There was a high linear correlation between the MMGme and MAS (R2=0.9557) and also a high correlation as indicated by Spearman test (ρ=0.9856; p<;0.001). In spectral analysis, the Kruskal-Wallis test (p = 0.0059) showed that MMGmf did not present significant differences between MAS spasticity levels. There was moderate linear correlation between MAS and MMGmf (R2=0.4883 and Spearman test [ρ = 0.4590; p <; 0.001]). Between the two investigated features, we conclude that the median energy is the most viable feature to evaluate spasticity due to strong correlations with the MAS.", "title": "" }, { "docid": "59193c85b2763629c6258927afe0e90f", "text": "The techniques used in fault diagnosis of automotive engine oils are discussed. The importance of Oil change at the right time and the effect of parameters like water contamination, particle contamination, oxidation, viscosity, fuel content in oil are also discussed. Analysis is carried out on MATLAB with reference to the variation of Dielectric constant of lubrication oil over the use period. 
The program is designed to display the values of iron content (particles), water content, density and acid value at a particular instant, and to display the condition of the oil in terms of these parameters.", "title": "" }, { "docid": "90fc941f6db85dd24b47fa06dd0bb0aa", "text": "Recent debate has centered on the relative promise of focusing user-interface research on developing new metaphors and tools that enhance users' abilities to directly manipulate objects versus directing effort toward developing interface agents that provide automation. In this paper, we review principles that show promise for allowing engineers to enhance human-computer interaction through an elegant coupling of automated services with direct manipulation. Key ideas will be highlighted in terms of the Lookout system for scheduling and meeting management.", "title": "" }, { "docid": "10d90e9e1ef3b2759cd26e90997879bb", "text": "Levels of genetic differentiation between populations can be highly variable across the genome, with divergent selection contributing to such heterogeneous genomic divergence. For example, loci under divergent selection and those tightly physically linked to them may exhibit stronger differentiation than neutral regions with weak or no linkage to such loci. Divergent selection can also increase genome-wide neutral differentiation by reducing gene flow (e.g. by causing ecological speciation), thus promoting divergence via the stochastic effects of genetic drift. These consequences of divergent selection are being reported in recently accumulating studies that identify: (i) 'outlier loci' with higher levels of divergence than expected under neutrality, and (ii) a positive association between the degree of adaptive phenotypic divergence and levels of molecular genetic differentiation across population pairs ['isolation by adaptation' (IBA)]. The latter pattern arises because as adaptive divergence increases, gene flow is reduced (thereby promoting drift) and genetic hitchhiking increased. Here, we review and integrate these previously disconnected concepts and literatures. We find that studies generally report 5-10% of loci to be outliers. These selected regions were often dispersed across the genome, commonly exhibited replicated divergence across different population pairs, and could sometimes be associated with specific ecological variables. IBA was not infrequently observed, even at neutral loci putatively unlinked to those under divergent selection. Overall, we conclude that divergent selection makes diverse contributions to heterogeneous genomic divergence. Nonetheless, the number, size, and distribution of genomic regions affected by selection varied substantially among studies, leading us to discuss the potential role of divergent selection in the growth of regions of differentiation (i.e. genomic islands of divergence), a topic in need of future investigation.", "title": "" }, { "docid": "3a68bf0d9d79a8b7794ea9d5d236eb41", "text": "This paper describes a camera-based observation system for football games that is used for the automatic analysis of football games and reasoning about multi-agent activity. The observation system runs on video streams produced by cameras set up for TV broadcasting. The observation system achieves reliability and accuracy through various mechanisms for adaptation, probabilistic estimation, and exploiting domain constraints. 
It represents motions compactly and segments them into classified ball actions.", "title": "" }, { "docid": "d2e6aa2ab48cdd1907f3f373e0627fa8", "text": "We address the issue of speeding up the training of convolutional networks. Here we study a distributed method adapted to stochastic gradient descent (SGD). The parallel optimization setup uses several threads, each applying individual gradient descents on a local variable. We propose a new way to share information between different threads inspired by gossip algorithms and showing good consensus convergence properties. Our method called GoSGD has the advantage to be fully asynchronous and decentralized. We compared our method to the recent EASGD in [17] on CIFAR-10 show encouraging results.", "title": "" }, { "docid": "fc9b4cb8c37ffefde9d4a7fa819b9417", "text": "Automatic neural architecture design has shown its potential in discovering powerful neural network architectures. Existing methods, no matter based on reinforcement learning or evolutionary algorithms (EA), conduct architecture search in a discrete space, which is highly inefficient. In this paper, we propose a simple and efficient method to automatic neural architecture design based on continuous optimization. We call this new approach neural architecture optimization (NAO). There are three key components in our proposed approach: (1) An encoder embeds/maps neural network architectures into a continuous space. (2) A predictor takes the continuous representation of a network as input and predicts its accuracy. (3) A decoder maps a continuous representation of a network back to its architecture. The performance predictor and the encoder enable us to perform gradient based optimization in the continuous space to find the embedding of a new architecture with potentially better accuracy. Such a better embedding is then decoded to a network by the decoder. Experiments show that the architecture discovered by our method is very competitive for image classification task on CIFAR-10 and language modeling task on PTB, outperforming or on par with the best results of previous architecture search methods with a significantly reduction of computational resources. Specifically we obtain 2.11% test set error rate for CIFAR-10 image classification task and 56.0 test set perplexity of PTB language modeling task. The best discovered architectures on both tasks are successfully transferred to other tasks such as CIFAR-100 and WikiText-2. Furthermore, combined with the recent proposed weight sharing mechanism, we discover powerful architecture on CIFAR-10 (with error rate 3.53%) and on PTB (with test set perplexity 56.6), with very limited computational resources (less than 10 GPU hours) for both tasks.", "title": "" }, { "docid": "6b52bb06c140e5f55f7094cbbf906769", "text": "A method for tracking and predicting cloud movement using ground based sky imagery is presented. Sequences of partial sky images, with each image taken one second apart with a size of 640 by 480 pixels, were processed to determine the time taken for clouds to reach a user defined region in the image or the Sun. The clouds were first identified by segmenting the image based on the difference between the blue and red colour channels, producing a binary detection image. Good features to track were then located in the image and tracked utilising the Lucas-Kanade method for optical flow. From the trajectory of the tracked features and the binary detection image, cloud signals were generated. 
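A hedged sketch of the cloud detection-and-tracking steps just described (the threshold, feature count and file names below are placeholders, not values taken from the paper):

```python
import cv2
import numpy as np

prev = cv2.imread("sky_t0.png")          # two frames taken one second apart (placeholder paths)
curr = cv2.imread("sky_t1.png")

# 1. Cloud detection: clouds are grey/white, so the blue-red difference is small there.
b = prev[:, :, 0].astype(np.int16)
r = prev[:, :, 2].astype(np.int16)
binary = ((b - r) < 20).astype(np.uint8) * 255   # 20 is an assumed threshold

# 2. Locate good features and follow them with Lucas-Kanade optical flow.
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
curr_gray = cv2.cvtColor(curr, cv2.COLOR_BGR2GRAY)
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200, qualityLevel=0.01, minDistance=10)
new_pts, status, err = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)

# 3. Per-feature displacement vectors; combined with the binary mask these give cloud signals.
flow = (new_pts - pts)[status.ravel() == 1]
print("mean cloud motion (pixels per second):", flow.reshape(-1, 2).mean(axis=0))
```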
The trajectory of the individual features were used to determine the risky cloud signals (signals that pass over the user defined region or Sun). Time to collision estimates were produced based on merging these risky cloud signals. Estimates of times up to 40 seconds were achieved with error in the estimate increasing when the estimated time is larger. The method presented has the potential for tracking clouds travelling in different directions and at different velocities.", "title": "" }, { "docid": "2bed165ccf2bfb3c39e1b47b89e22ecc", "text": "Metaphor has a double life. It can be described as a directional process in which a stable, familiar base domain provides inferential structure to a less clearly specified target. But metaphor is also described as a process of finding commonalities, an inherently symmetric process. In this second view, both concepts may be altered by the metaphorical comparison. Whereas most theories of metaphor capture one of these aspects, we offer a model based on structure-mapping that captures both sides of metaphor processing. This predicts (a) an initial processing stage of symmetric alignment; and (b) a later directional phase in which inferences are projected to the target. To test these claims, we collected comprehensibility judgments for forward (e.g., \"A rumor is a virus\") and reversed (\"A virus is a rumor\") metaphors at early and late stages of processing, using a deadline procedure. We found an advantage for the forward direction late in processing, but no directional preference early in processing. Implications for metaphor theory are discussed.", "title": "" }, { "docid": "c6cdcc4fbcb95ce3938ab9e837daa70d", "text": "In this paper, we study the problem of fractional-order PID controller design for an unstable plant-a laboratory model of a magnetic levitation system. To this end, we apply model based control design. A model of the magnetic lévitation system is obtained by means of a closed-loop experiment. Several stable fractional-order controllers are identified and optimized by considering isolated stability regions. Finally, a nonintrusive controller retuning method is used to incorporate fractional-order dynamics into the existing control loop, thereby enhancing its performance. Experimental results confirm the effectiveness of the proposed approach. Control design methods offered in this paper are general enough to be applicable to a variety of control problems.", "title": "" }, { "docid": "b5347e195b44d5ae6d4674c685398fa3", "text": "The perceptual recognition of objects is conceptualized to be a process in which the image of the input is segmented at regions of deep concavity into an arrangement of simple geometric components, such as blocks, cylinders, wedges, and cones. The fundamental assumption of the proposed theory, recognition-by-components (RBC), is that a modest set of generalized-cone components, called geons (N £ 36), can be derived from contrasts of five readily detectable properties of edges in a two-dimensional image: curvature, collinearity, symmetry, parallelism, and cotermination. The detection of these properties is generally invariant over viewing position an$ image quality and consequently allows robust object perception when the image is projected from a novel viewpoint or is degraded. 
RBC thus provides a principled account of the heretofore undecided relation between the classic principles of perceptual organization and pattern recognition: The constraints toward regularization (Pragnanz) characterize not the complete object but the object's components. Representational power derives from an allowance of free combinations of the geons. A Principle of Componential Recovery can account for the major phenomena of object recognition: If an arrangement of two or three geons can be recovered from the input, objects can be quickly recognized even when they are occluded, novel, rotated in depth, or extensively degraded. The results from experiments on the perception of briefly presented pictures by human observers provide empirical support for the theory.", "title": "" }, { "docid": "5157063545b7ec7193126951c3bdb850", "text": "This paper presents an integrated system for navigation parameter estimation using sequential aerial images, where navigation parameters represent the position and velocity information of an aircraft for autonomous navigation. The proposed integrated system is composed of two parts: relative position estimation and absolute position estimation. Relative position estimation recursively computes the current position of an aircraft by accumulating relative displacement estimates extracted from two successive aerial images. Simple accumulation of parameter values decreases the reliability of the extracted parameter estimates as an aircraft goes on navigating, resulting in a large position error. Therefore, absolute position estimation is required to compensate for the position error generated in relative position estimation. Absolute position estimation algorithms by image matching and digital elevation model (DEM) matching are presented. In image matching, a robust-oriented Hausdorff measure (ROHM) is employed, whereas in DEM matching the algorithm using multiple image pairs is used. Experiments with four real aerial image sequences show the effectiveness of the proposed integrated position estimation algorithm.", "title": "" } ]
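A toy sketch of the relative-plus-absolute position idea in the last passage above (pure simulation: the noise levels and the 100-frame correction interval are assumptions, and the real system derives displacements from image matching and ROHM/DEM matching rather than from simulated values):

```python
import numpy as np

rng = np.random.default_rng(1)
true_step = np.array([5.0, 0.5])          # true per-frame displacement in metres (placeholder)

pos_est = np.zeros(2)
pos_true = np.zeros(2)
for frame in range(1, 501):
    pos_true += true_step
    # Relative estimation: each image-pair displacement carries a small error...
    pos_est += true_step + rng.normal(0.0, 0.2, size=2)
    # ...which accumulates, so an absolute fix (image/DEM matching) resets the drift periodically.
    if frame % 100 == 0:
        absolute_fix = pos_true + rng.normal(0.0, 1.0, size=2)   # absolute matching has its own bounded error
        pos_est = absolute_fix

print("position error after correction (m):", np.linalg.norm(pos_est - pos_true))
```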
scidocsrr
f9f89d416dbb4afef830b1f35cbb4781
Joint Semantic Segmentation and Depth Estimation with Deep Convolutional Networks
[ { "docid": "aef25b8bc64bb624fb22ce39ad7cad89", "text": "Depth estimation and semantic segmentation are two fundamental problems in image understanding. While the two tasks are strongly correlated and mutually beneficial, they are usually solved separately or sequentially. Motivated by the complementary properties of the two tasks, we propose a unified framework for joint depth and semantic prediction. Given an image, we first use a trained Convolutional Neural Network (CNN) to jointly predict a global layout composed of pixel-wise depth values and semantic labels. By allowing for interactions between the depth and semantic information, the joint network provides more accurate depth prediction than a state-of-the-art CNN trained solely for depth prediction [6]. To further obtain fine-level details, the image is decomposed into local segments for region-level depth and semantic prediction under the guidance of global layout. Utilizing the pixel-wise global prediction and region-wise local prediction, we formulate the inference problem in a two-layer Hierarchical Conditional Random Field (HCRF) to produce the final depth and semantic map. As demonstrated in the experiments, our approach effectively leverages the advantages of both tasks and provides the state-of-the-art results.", "title": "" }, { "docid": "92cc028267bc3f8d44d11035a8212948", "text": "The limitations of current state-of-the-art methods for single-view depth estimation and semantic segmentations are closely tied to the property of perspective geometry, that the perceived size of the objects scales inversely with the distance. In this paper, we show that we can use this property to reduce the learning of a pixel-wise depth classifier to a much simpler classifier predicting only the likelihood of a pixel being at an arbitrarily fixed canonical depth. The likelihoods for any other depths can be obtained by applying the same classifier after appropriate image manipulations. Such transformation of the problem to the canonical depth removes the training data bias towards certain depths and the effect of perspective. The approach can be straight-forwardly generalized to multiple semantic classes, improving both depth estimation and semantic segmentation performance by directly targeting the weaknesses of independent approaches. Conditioning the semantic label on the depth provides a way to align the data to their physical scale, allowing to learn a more discriminative classifier. Conditioning depth on the semantic class helps the classifier to distinguish between ambiguities of the otherwise ill-posed problem. We tested our algorithm on the KITTI road scene dataset and NYU2 indoor dataset and obtained obtained results that significantly outperform current state-of-the-art in both single-view depth and semantic segmentation domain.", "title": "" } ]
[ { "docid": "9e37941d333338babef6a6e9e5ed5392", "text": "--------------------------------------------------------------ABSTRACT------------------------------------------------------Using specialized knowledge and perspectives of a set in decision-makings about issues that are qualitative is very helpful. Delphi technique is a group knowledge acquisition method, which is also used for qualitative issue decision-makings. Delphi technique can be used for qualitative research that is exploratory and identifying the nature and fundamental elements of a phenomenon is a basis for study. It is a structured process for collecting data during the successive rounds and group consensus. Despite over a half century of using Delphi in scientific and academic studies, there are still several ambiguities about it. The main problem in using the Delphi technique is lack of a clear theoretical framework for using this technique. Therefore, this study aimed to present a comprehensive theoretical framework for the application of Delphi technique in qualitative research. In this theoretical framework, the application and consensus principles of Delphi technique in qualitative research were clearly explained.", "title": "" }, { "docid": "49a538fc40d611fceddd589b0c9cb433", "text": "Both intuition and creativity are associated with knowledge creation, yet a clear link between them has not been adequately established. First, the available empirical evidence for an underlying relationship between intuition and creativity is sparse in nature. Further, this evidence is arguable as the concepts are diversely operationalized and the measures adopted are often not validated sufficiently. Combined, these issues make the findings from various studies examining the link between intuition and creativity difficult to replicate. Nevertheless, the role of intuition in creativity should not be neglected as it is often reported to be a core component of the idea generation process, which in conjunction with idea evaluation are crucial phases of creative cognition. We review the prior research findings in respect of idea generation and idea evaluation from the view that intuition can be construed as the gradual accumulation of cues to coherence. Thus, we summarize the literature on what role intuitive processes play in the main stages of the creative problem-solving process and outline a conceptual framework of the interaction between intuition and creativity. Finally, we discuss the main challenges of measuring intuition as well as possible directions for future research.", "title": "" }, { "docid": "c2b1dea961e3be5c4135f4eeba8c3495", "text": "Background: Systematic literature reviews (SLRs) have become an established methodology in software engineering (SE) research however they can be very time consuming and error prone. Aim: The aims of this study are to identify and classify tools that can help to automate part or all of the SLR process within the SE domain. Method: A mapping study was performed using an automated search strategy plus snowballing to locate relevant papers. A set of known papers was used to validate the search string. Results: 14 papers were accepted into the final set. Eight presented text mining tools and six discussed the use of visualisation techniques. The stage most commonly targeted was study selection. Only two papers reported an independent evaluation of the tool presented. The majority were evaluated through small experiments and examples of their use. 
Conclusions: A variety of tools are available to support the SLR process although many are in the early stages of development and usage.", "title": "" }, { "docid": "48fc7aabdd36ada053ebc2d2a1c795ae", "text": "The Value-Based Software Engineering (VBSE) agenda described in the preceding article has the objectives of integrating value considerations into current and emerging software engineering principles and practices, and of developing an overall framework in which they compatibly reinforce each other. In this paper, we provide a case study illustrating some of the key VBSE practices, and focusing on a particular anomaly in the monitoring and control area: the \"Earned Value Management System.\" This is a most useful technique for monitoring and controlling the cost, schedule, and progress of a complex project. But it has absolutely nothing to say about the stakeholder value of the system being developed. The paper introduces an example order-processing software project, and shows how the use of Benefits Realization Analysis, stake-holder value proposition elicitation and reconciliation, and business case analysis provides a framework for stakeholder-earned-value monitoring and control.", "title": "" }, { "docid": "7bda4b1ef78a70e651f74995b01c3c1e", "text": "Given a graph, how can we extract good features for the nodes? For example, given two large graphs from the same domain, how can we use information in one to do classification in the other (i.e., perform across-network classification or transfer learning on graphs)? Also, if one of the graphs is anonymized, how can we use information in one to de-anonymize the other? The key step in all such graph mining tasks is to find effective node features. We propose ReFeX (Recursive Feature eXtraction), a novel algorithm, that recursively combines local (node-based) features with neighborhood (egonet-based) features; and outputs regional features -- capturing \"behavioral\" information. We demonstrate how these powerful regional features can be used in within-network and across-network classification and de-anonymization tasks -- without relying on homophily, or the availability of class labels. The contributions of our work are as follows: (a) ReFeX is scalable and (b) it is effective, capturing regional (\"behavioral\") information in large graphs. We report experiments on real graphs from various domains with over 1M edges, where ReFeX outperforms its competitors on typical graph mining tasks like network classification and de-anonymization.", "title": "" }, { "docid": "0915e156af3bec6a401ec9bd10ab899f", "text": "The ability to generalize from past experience to solve previously unseen tasks is a key research challenge in reinforcement learning (RL). In this paper, we consider RL tasks defined as a sequence of high-level instructions described by natural language and study two types of generalization: to unseen and longer sequences of previously seen instructions, and to sequences where the instructions themselves were previously not seen. We present a novel hierarchical deep RL architecture that consists of two interacting neural controllers: a meta controller that reads instructions and repeatedly communicates subtasks to a subtask controller that in turn learns to perform such subtasks. To generalize better to unseen instructions, we propose a regularizer that encourages to learn subtask embeddings that capture correspondences between similar subtasks. 
We also propose a new differentiable neural network architecture in the meta controller that learns temporal abstractions which makes learning more stable under delayed reward. Our architecture is evaluated on a stochastic 2D grid world and a 3D visual environment where the agent should execute a list of instructions. We demonstrate that the proposed architecture is able to generalize well over unseen instructions as well as longer lists of instructions.", "title": "" }, { "docid": "bcda77a0de7423a2a4331ff87ce9e969", "text": "Because of the increasingly competitive nature of the computer manufacturing industry, Compaq Computer Corporation has made some trend-setting changes in the way it does business. One of these changes is the extension of Compaq's call-logging sy ste problem-resolution component that assists customer support personnel in determining the resolution to a customer's questions and problems. Recently, Compaq extended its customer service to provide not only dealer support but also direct end user support; it is also accepting ownership of any Compaq customer's problems in a Banyan, Mi-crosoft, Novell, or SCO UNIX operating environment. One of the tools that makes this feat possible is SMART (support management automated reasoning technology). SMART is part of a Compaq strategy to increase the effectiveness of the customer support staff and reduce overall cost to the organization by retaining problem-solving knowledge and making it available to the entire support staff at the point it is needed.", "title": "" }, { "docid": "25d14017403c96eceeafcbda1cbdfd2c", "text": "We introduce a neural network model that marries together ideas from two prominent strands of research on domain adaptation through representation learning: structural correspondence learning (SCL, (Blitzer et al., 2006)) and autoencoder neural networks (NNs). Our model is a three-layer NN that learns to encode the non-pivot features of an input example into a lowdimensional representation, so that the existence of pivot features (features that are prominent in both domains and convey useful information for the NLP task) in the example can be decoded from that representation. The low-dimensional representation is then employed in a learning algorithm for the task. Moreover, we show how to inject pre-trained word embeddings into our model in order to improve generalization across examples with similar pivot features. We experiment with the task of cross-domain sentiment classification on 16 domain pairs and show substantial improvements over strong baselines.1", "title": "" }, { "docid": "40fef2ba4ae0ecd99644cf26ed8fa37f", "text": "Plant has plenty use in foodstuff, medicine and industry. And it is also vitally important for environmental protection. However, it is an important and difficult task to recognize plant species on earth. Designing a convenient and automatic recognition system of plants is necessary and useful since it can facilitate fast classifying plants, and understanding and managing them. In this paper, a leaf database from different plants is firstly constructed. Then, a new classification method, referred to as move median centers (MMC) hypersphere classifier, for the leaf database based on digital morphological feature is proposed. The proposed method is more robust than the one based on contour features since those significant curvature points are hard to find. Finally, the efficiency and effectiveness of the proposed method in recognizing different plants is demonstrated by experiments. 
", "title": "" }, { "docid": "3ce39c23ef5be4dd8fd10152ded95a6e", "text": "Head pose and eye location for gaze estimation have been separately studied in numerous works in the literature. Previous research shows that satisfactory accuracy in head pose and eye location estimation can be achieved in constrained settings. However, in the presence of nonfrontal faces, eye locators are not adequate to accurately locate the center of the eyes. On the other hand, head pose estimation techniques are able to deal with these conditions; hence, they may be suited to enhance the accuracy of eye localization. Therefore, in this paper, a hybrid scheme is proposed to combine head pose and eye location information to obtain enhanced gaze estimation. To this end, the transformation matrix obtained from the head pose is used to normalize the eye regions, and in turn, the transformation matrix generated by the found eye location is used to correct the pose estimation procedure. The scheme is designed to enhance the accuracy of eye location estimations, particularly in low-resolution videos, to extend the operative range of the eye locators, and to improve the accuracy of the head pose tracker. These enhanced estimations are then combined to obtain a novel visual gaze estimation system, which uses both eye location and head information to refine the gaze estimates. From the experimental results, it can be derived that the proposed unified scheme improves the accuracy of eye estimations by 16% to 23%. Furthermore, it considerably extends its operating range by more than 15° by overcoming the problems introduced by extreme head poses. Moreover, the accuracy of the head pose tracker is improved by 12% to 24%. Finally, the experimentation on the proposed combined gaze estimation system shows that it is accurate (with a mean error between 2° and 5°) and that it can be used in cases where classic approaches would fail without imposing restraints on the position of the head.", "title": "" }, { "docid": "c320b38a7a9181e13c07fc4da632cab5", "text": "In this study, the authors provide a global assessment of the performance of different drought indices for monitoring drought impacts on several hydrological, agricultural, and ecological response variables. For this purpose, they compare the performance of several drought indices [the standardized precipitation index (SPI); four versions of the Palmer drought severity index (PDSI); and the standardized precipitation evapotranspiration index (SPEI)] to predict changes in streamflow, soil moisture, forest growth, and crop yield. The authors found a superior capability of the SPEI and the SPI drought indices, which are calculated on different time scales than the Palmer indices to capture the drought impacts on the aforementioned hydrological, agricultural, and ecological variables. 
They detected small differences in the comparative performance of the SPI and the SPEI indices, but the SPEI was the drought index that best captured the responses of the assessed variables to drought in summer, the season in which more drought-related impacts are recorded and in which drought monitoring is critical. Hence, the SPEI shows improved capability to identify drought impacts as compared with the SPI. In conclusion, it seems reasonable to recommend the use of the SPEI if the responses of the variables of interest to drought are not known a priori.", "title": "" }, { "docid": "da1f5a7c5c39f50c70948eeba5cd9716", "text": "Mushrooms have long been used not only as food but also for the treatment of various ailments. Although at its infancy, accumulated evidence suggested that culinary-medicinal mushrooms may play an important role in the prevention of many age-associated neurological dysfunctions, including Alzheimer's and Parkinson's diseases. Therefore, efforts have been devoted to a search for more mushroom species that may improve memory and cognition functions. Such mushrooms include Hericium erinaceus, Ganoderma lucidum, Sarcodon spp., Antrodia camphorata, Pleurotus giganteus, Lignosus rhinocerotis, Grifola frondosa, and many more. Here, we review over 20 different brain-improving culinary-medicinal mushrooms and at least 80 different bioactive secondary metabolites isolated from them. The mushrooms (either extracts from basidiocarps/mycelia or isolated compounds) reduced beta amyloid-induced neurotoxicity and had anti-acetylcholinesterase, neurite outgrowth stimulation, nerve growth factor (NGF) synthesis, neuroprotective, antioxidant, and anti-(neuro)inflammatory effects. The in vitro and in vivo studies on the molecular mechanisms responsible for the bioactive effects of mushrooms are also discussed. Mushrooms can be considered as useful therapeutic agents in the management and/or treatment of neurodegeneration diseases. However, this review focuses on in vitro evidence and clinical trials with humans are needed.", "title": "" }, { "docid": "dd8fd90b433c3c260a04fe87ae548902", "text": "Power control in a digital handset is practically implemented in a discrete fashion, and usually, such a discrete power control (DPC) scheme is suboptimal. In this paper, we first show that in a Poison-distributed ad hoc network, if DPC is properly designed with a certain condition satisfied, it can strictly work better than no power control (i.e., users use the same constant power) in terms of average signal-to-interference ratio, outage probability, and spatial reuse. This motivates us to propose an N-layer DPC scheme in a wireless clustered ad hoc network, where transmitters and their intended receivers in circular clusters are characterized by a Poisson cluster process on the plane ℝ2. The cluster of each transmitter is tessellated into N-layer annuli with transmit power Pi adopted if the intended receiver is located at the ith layer. Two performance metrics of transmission capacity (TC) and outage-free spatial reuse factor are redefined based on the N-layer DPC. The outage probability of each layer in a cluster is characterized and used to derive the optimal power scaling law Pi ∈ Θ(ηi-(α/2)), with ηi as the probability of selecting power Pi and α as the path loss exponent. Moreover, the specific design approaches to optimize Pi and N based on ηi are also discussed. 
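To illustrate the quoted scaling law Pi ∈ Θ(ηi^(−α/2)), here is a small hedged sketch that turns a layer-selection distribution into a discrete power set (the normalization to a fixed average transmit power is an assumption made for the example, not a constraint taken from the paper):

```python
import numpy as np

alpha = 4.0                                  # path-loss exponent
eta = np.array([0.4, 0.3, 0.2, 0.1])         # probability of selecting power P_i (assumed values)
p_avg = 1.0                                  # average transmit power budget (assumption)

raw = eta ** (-alpha / 2.0)                  # P_i proportional to eta_i^(-alpha/2)
powers = p_avg * raw / np.dot(eta, raw)      # scale so that sum_i eta_i * P_i = p_avg

print(powers)                 # layers selected with lower probability are assigned higher power
print(np.dot(eta, powers))    # -> 1.0, the average power budget
```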
Simulation results indicate that the proposed optimal N-layer DPC significantly outperforms other existing power control schemes in terms of TC and spatial reuse.", "title": "" }, { "docid": "5552216832bb7315383d1c4f2bfe0635", "text": "Semantic parsing maps sentences to formal meaning representations, enabling question answering, natural language interfaces, and many other applications. However, there is no agreement on what the meaning representation should be, and constructing a sufficiently large corpus of sentence-meaning pairs for learning is extremely challenging. In this paper, we argue that both of these problems can be avoided if we adopt a new notion of semantics. For this, we take advantage of symmetry group theory, a highly developed area of mathematics concerned with transformations of a structure that preserve its key properties. We define a symmetry of a sentence as a syntactic transformation that preserves its meaning. Semantically parsing a sentence then consists of inferring its most probable orbit under the language’s symmetry group, i.e., the set of sentences that it can be transformed into by symmetries in the group. The orbit is an implicit representation of a sentence’s meaning that suffices for most applications. Learning a semantic parser consists of discovering likely symmetries of the language (e.g., paraphrases) from a corpus of sentence pairs with the same meaning. Once discovered, symmetries can be composed in a wide variety of ways, potentially resulting in an unprecedented degree of immunity to syntactic variation.", "title": "" }, { "docid": "8b50b28500a388d9913516e9dd5be719", "text": "Scientific experiments and large-scale simulations produce massive amounts of data. Many of these scientific datasets are arrays, and are stored in file formats such as HDF5 and NetCDF. Although scientific data management systems, such as SciDB, are designed to manipulate arrays, there are challenges in integrating these systems into existing analysis workflows. Major barriers include the expensive task of preparing and loading data before querying, and converting the final results to a format that is understood by the existing post-processing and visualization tools. As a consequence, integrating a data management system into an existing scientific data analysis workflow is time-consuming and requires extensive user involvement. In this paper, we present the design of a new scientific data analysis system that efficiently processes queries directly over data stored in the HDF5 file format. This design choice eliminates the tedious and error-prone data loading process, and makes the query results readily available to the next processing steps of the analysis workflow. Our design leverages the increasing main memory capacities found in supercomputers through bitmap indexing and in-memory query execution. In addition, query processing over the HDF5 data format can be effortlessly parallelized to utilize the ample concurrency available in large-scale supercomputers and modern parallel file systems. We evaluate the performance of our system on a large supercomputing system and experiment with both a synthetic dataset and a real cosmology observation dataset. Our system frequently outperforms the relational database system that the cosmology team currently uses, and is more than 10X faster than Hive when processing data in parallel. 
Overall, by eliminating the data loading step, our query processing system is more effective in supporting in situ scientific analysis workflows.", "title": "" }, { "docid": "8e099249047cb4e1550f8ddb287bddca", "text": "Several arguments can be found in business intelligence literature that the use of business intelligence systems can bring multiple benefits, for example, via faster and easier access to information, savings in information technology (‘IT’) and greater customer satisfaction all the way through to the improved competitiveness of enterprises. Yet, most of these benefits are often very difficult to measure because of their indirect and delayed effects on business success. On top of the difficulties in justifying investments in information technology (‘IT’), particularly business intelligence (‘BI’), business executives generally want to know whether the investment is worth the money and if it can be economically justified. In looking for an answer to this question, various methods of evaluating investments can be employed. We can use the classic return on investment (‘ROI’) calculation, cost-benefit analysis, the net present value (‘NPV’) method, the internal rate of return (‘IRR’) and others. However, it often appears in business practice that the use of these methods alone is inappropriate, insufficient or unfeasible for evaluating an investment in business intelligence systems. Therefore, for this purpose, more appropriate methods are those based mainly on a qualitative approach, such as case studies, empirical analyses, user satisfaction analyses, and others that can be employed independently or can help us complete the whole picture in conjunction with the previously mentioned methods. Since there is no universal approach to the evaluation of an investment in information technology and business intelligence, it is necessary to approach each case in a different way based on the specific circumstances and purpose of the evaluation. This paper presents a case study in which the evaluation of an investment in on-line analytical processing (‘OLAP’) technology in the company Melamin was made through an", "title": "" }, { "docid": "14ca9dfee206612e36cd6c3b3e0ca61e", "text": "Radio-frequency identification (RFID) technology promises to revolutionize the way we track items in supply chain, retail store, and asset management applications. The size and different characteristics of RFID data pose many interesting challenges in the current data management systems. In this paper, we provide a brief overview of RFID technology and highlight a few of the data management challenges that we believe are suitable topics for exploratory research.", "title": "" }, { "docid": "4a3ced0711361d3267745c2b29f78ee7", "text": "Content delivery networks must balance a number of trade-offs when deciding how to direct a client to a CDN server. Whereas DNS-based redirection requires a complex global traffic manager, anycast depends on BGP to direct a client to a CDN front-end. Anycast is simple to operate, scalable, and naturally resilient to DDoS attacks. This simplicity, however, comes at the cost of precise control of client redirection. We examine the performance implications of using anycast in a global, latency-sensitive, CDN. We analyze millions of client-side measurements from the Bing search service to capture anycast versus unicast performance to nearby front-ends. 
We find that anycast usually performs well despite the lack of precise control but that it directs roughly 20% of clients to a suboptimal front-end. We also show that the performance of these clients can be improved through a simple history-based prediction scheme.", "title": "" }, { "docid": "006793685095c0772a1fe795d3ddbd76", "text": "Legislators, designers of legal information systems, as well as citizens face often problems due to the interdependence of the laws and the growing number of references needed to interpret them. In this paper, we introduce the ”Legislation Network” as a novel approach to address several quite challenging issues for identifying and quantifying the complexity inside the Legal Domain. We have collected an extensive data set of a more than 60-year old legislation corpus, as published in the Official Journal of the European Union, and we further analysed it as a complex network, thus gaining insight into its topological structure. Among other issues, we have performed a temporal analysis of the evolution of the Legislation Network, as well as a robust resilience test to assess its vulnerability under specific cases that may lead to possible breakdowns. Results are quite promising, showing that our approach can lead towards an enhanced explanation in respect to the structure and evolution of legislation properties.", "title": "" }, { "docid": "3e63c8a5499966f30bd3e6b73494ff82", "text": "Events can be understood in terms of their temporal structure. The authors first draw on several bodies of research to construct an analysis of how people use event structure in perception, understanding, planning, and action. Philosophy provides a grounding for the basic units of events and actions. Perceptual psychology provides an analogy to object perception: Like objects, events belong to categories, and, like objects, events have parts. These relationships generate 2 hierarchical organizations for events: taxonomies and partonomies. Event partonomies have been studied by looking at how people segment activity as it happens. Structured representations of events can relate partonomy to goal relationships and causal structure; such representations have been shown to drive narrative comprehension, memory, and planning. Computational models provide insight into how mental representations might be organized and transformed. These different approaches to event structure converge on an explanation of how multiple sources of information interact in event perception and conception.", "title": "" } ]
scidocsrr
0493c7dd3082a6c60012cc065512d542
Lifting from the Deep: Convolutional 3D Pose Estimation from a Single Image
[ { "docid": "79cffed53f36d87b89577e96a2b2e713", "text": "Human pose estimation has made significant progress during the last years. However current datasets are limited in their coverage of the overall pose estimation challenges. Still these serve as the common sources to evaluate, train and compare different models on. In this paper we introduce a novel benchmark \"MPII Human Pose\" that makes a significant advance in terms of diversity and difficulty, a contribution that we feel is required for future developments in human body models. This comprehensive dataset was collected using an established taxonomy of over 800 human activities [1]. The collected images cover a wider variety of human activities than previous datasets including various recreational, occupational and householding activities, and capture people from a wider range of viewpoints. We provide a rich set of labels including positions of body joints, full 3D torso and head orientation, occlusion labels for joints and body parts, and activity labels. For each image we provide adjacent video frames to facilitate the use of motion information. Given these rich annotations we perform a detailed analysis of leading human pose estimation approaches and gaining insights for the success and failures of these methods.", "title": "" }, { "docid": "ff39f9fdb98981137f93d156150e1b83", "text": "We describe a method for recovering 3D human body pose from silhouettes. Our model is based on learning a latent space using the Gaussian Process Latent Variable Model (GP-LVM) [1] encapsulating both pose and silhouette features Our method is generative, this allows us to model the ambiguities of a silhouette representation in a principled way. We learn a dynamical model over the latent space which allows us to disambiguate between ambiguous silhouettes by temporal consistency. The model has only two free parameters and has several advantages over both regression approaches and other generative methods. In addition to the application shown in this paper the suggested model is easily extended to multiple observation spaces without constraints on type.", "title": "" } ]
[ { "docid": "033253834167cecbcc2658c8ba22aa18", "text": "Semi-supervised image classification aims to classify a large quantity of unlabeled images by typically harnessing scarce labeled images. Existing semi-supervised methods often suffer from inadequate classification accuracy when encountering difficult yet critical images, such as outliers, because they treat all unlabeled images equally and conduct classifications in an imperfectly ordered sequence. In this paper, we employ the curriculum learning methodology by investigating the difficulty of classifying every unlabeled image. The reliability and the discriminability of these unlabeled images are particularly investigated for evaluating their difficulty. As a result, an optimized image sequence is generated during the iterative propagations, and the unlabeled images are logically classified from simple to difficult. Furthermore, since images are usually characterized by multiple visual feature descriptors, we associate each kind of features with a teacher, and design a multi-modal curriculum learning (MMCL) strategy to integrate the information from different feature modalities. In each propagation, each teacher analyzes the difficulties of the currently unlabeled images from its own modality viewpoint. A consensus is subsequently reached among all the teachers, determining the currently simplest images (i.e., a curriculum), which are to be reliably classified by the multi-modal learner. This well-organized propagation process leveraging multiple teachers and one learner enables our MMCL to outperform five state-of-the-art methods on eight popular image data sets.", "title": "" }, { "docid": "bd8788c3d4adc5f3671f741e884c7f34", "text": "We present a method for learning an embedding that places images of humans in similar poses nearby. This embedding can be used as a direct method of comparing images based on human pose, avoiding potential challenges of estimating body joint positions. Pose embedding learning is formulated under a triplet-based distance criterion. A deep architecture is used to allow learning of a representation capable of making distinctions between different poses. Experiments on human pose matching and retrieval from video data demonstrate the potential of the method.", "title": "" }, { "docid": "bdb9f3822ef89276b1aa1d493d1f9379", "text": "Individual performance is of high relevance for organizations and individuals alike. Showing high performance when accomplishing tasks results in satisfaction, feelings of selfefficacy and mastery (Bandura, 1997; Kanfer et aL, 2005). Moreover, high performing individuals get promoted, awarded and honored. Career opportunities for individuals who perform well are much better than those of moderate or low performing individuals (Van Scotter et aI., 2000). This chapter summarizes research on individual performance and addresses performance as a multi-dimensional and dynamic concept. First, we define the concept of performance, next we discuss antecedents of between-individual variation of performance, and describe intraindividual change and variability in performance, and finally, we present a research agenda for future research.", "title": "" }, { "docid": "9d2ec490b7efb23909abdbf5f209f508", "text": "Terrestrial Laser scanner (TLS) has been widely used in our recent architectural heritage projects and huge quantity of point cloud data was gotten. 
In order to process the huge quantity of point cloud data effectively and reconstruct their 3D models, more effective methods should be developed based on existing automatic or semiautomatic point cloud processing algorithms. Here introduce a new algorithm for rapid extracting the pillar features of Chinese ancient buildings from their point cloud data, the algorithm has the least human interaction in the data processing and is more efficient to extract pillars from point cloud data than existing feature extracting algorithms. With this algorithm we identify the pillar features by dividing the point cloud into slices firstly, and then get the projective parameters of pillar objects in selected slices, the next compare the local projective parameters in adjacent slices, the next combine them to get the global parameters of the pillars and at last reconstruct the 3d pillar models.", "title": "" }, { "docid": "0d23946f8a94db5943deee81deb3f322", "text": "The Spatial Semantic Hierarchy is a model of knowledge of large-scale space consisting of multiple interacting representations, both qualitative and quantitative. The SSH is inspired by the properties of the human cognitive map, and is intended to serve both as a model of the human cognitive map and as a method for robot exploration and map-building. The multiple levels of the SSH express states of partial knowledge, and thus enable the human or robotic agent to deal robustly with uncertainty during both learning and problem-solving. The control level represents useful patterns of sensorimotor interaction with the world in the form of trajectory-following and hill-climbing control laws leading to locally distinctive states. Local geometric maps in local frames of reference can be constructed at the control level to serve as observers for control laws in particular neighborhoods. The causal level abstracts continuous behavior among distinctive states into a discrete model consisting of states linked by actions. The topological level introduces the external ontology of places, paths and regions by abduction to explain the observed pattern of states and actions at the causal level. Quantitative knowledge at the control, causal and topological levels supports a “patchwork map” of local geometric frames of reference linked by causal and topological connections. The patchwork map can be merged into a single global frame of reference at the metrical level when sufficient information and computational resources are available. We describe the assumptions and guarantees behind the generality of the SSH across environments and sensorimotor systems. Evidence is presented from several partial implementations of the SSH on simulated and physical robots.  2000 Elsevier Science B.V. All rights reserved.", "title": "" }, { "docid": "6f95d8bcaefcc99209279dadb1beb0a6", "text": "Public cloud software marketplaces already offer users a wealth of choice in operating systems, database management systems, financial software, and virtual networking, all deployable and configurable at the click of a button. Unfortunately, this level of customization has not extended to emerging hypervisor-level services, partly because traditional virtual machines (VMs) are fully controlled by only one hypervisor at a time. Currently, a VM in a cloud platform cannot concurrently use hypervisorlevel services from multiple third-parties in a compartmentalized manner. 
We propose the notion of a multihypervisor VM, which is an unmodified guest that can simultaneously use services from multiple coresident, but isolated, hypervisors. We present a new virtualization architecture, called Span virtualization, that leverages nesting to allow multiple hypervisors to concurrently control a guest’s memory, virtual CPU, and I/O resources. Our prototype of Span virtualization on the KVM/QEMU platform enables a guest to use services such as introspection, network monitoring, guest mirroring, and hypervisor refresh, with performance comparable to traditional nested VMs.", "title": "" }, { "docid": "8dcb268612ba90ac420ebaa89becb879", "text": "Recognition of a human's continuous emotional states in real time plays an important role in machine emotional intelligence and human-machine interaction. Existing real-time emotion recognition systems use stimuli with low ecological validity (e.g., picture, sound) to elicit emotions and to recognise only valence and arousal. To overcome these limitations, in this paper, we construct a standardised database of 16 emotional film clips that were selected from over one thousand film excerpts. Based on emotional categories that are induced by these film clips, we propose a real-time movie-induced emotion recognition system for identifying an individual's emotional states through the analysis of brain waves. Thirty participants took part in this study and watched 16 standardised film clips that characterise real-life emotional experiences and target seven discrete emotions and neutrality. Our system uses a 2-s window and a 50 percent overlap between two consecutive windows to segment the EEG signals. Emotional states, including not only the valence and arousal dimensions but also similar discrete emotions in the valence-arousal coordinate space, are predicted in each window. Our real-time system achieves an overall accuracy of 92.26 percent in recognising high-arousal and valenced emotions from neutrality and 86.63 percent in recognising positive from negative emotions. Moreover, our system classifies three positive emotions (joy, amusement, tenderness) with an average of 86.43 percent accuracy and four negative emotions (anger, disgust, fear, sadness) with an average of 65.09 percent accuracy. These results demonstrate the advantage over the existing state-of-the-art real-time emotion recognition systems from EEG signals in terms of classification accuracy and the ability to recognise similar discrete emotions that are close in the valence-arousal coordinate space.", "title": "" }, { "docid": "225e7b608d06d218144853b900d40fd1", "text": "Deep neural networks require a large amount of labeled training data during supervised learning. However, collecting and labeling so much data might be infeasible in many cases. In this paper, we introduce a deep transfer learning scheme, called selective joint fine-tuning, for improving the performance of deep learning tasks with insufficient training data. In this scheme, a target learning task with insufficient training data is carried out simultaneously with another source learning task with abundant training data. However, the source learning task does not use all existing training data. Our core idea is to identify and use a subset of training images from the original source learning task whose low-level characteristics are similar to those from the target learning task, and jointly fine-tune shared convolutional layers for both tasks. 
Specifically, we compute descriptors from linear or nonlinear filter bank responses on training images from both tasks, and use such descriptors to search for a desired subset of training samples for the source learning task. Experiments demonstrate that our deep transfer learning scheme achieves state-of-the-art performance on multiple visual classification tasks with insufficient training data for deep learning. Such tasks include Caltech 256, MIT Indoor 67, and fine-grained classification problems (Oxford Flowers 102 and Stanford Dogs 120). In comparison to fine-tuning without a source domain, the proposed method can improve the classification accuracy by 2% - 10% using a single model. Codes and models are available at https://github.com/ZYYSzj/Selective-Joint-Fine-tuning.", "title": "" }, { "docid": "71a262b1c91c89f379527b271e45e86e", "text": "Geospatial object detection from high spatial resolution (HSR) remote sensing imagery is a heated and challenging problem in the field of automatic image interpretation. Despite convolutional neural networks (CNNs) having facilitated the development in this domain, the computation efficiency under real-time application and the accurate positioning on relatively small objects in HSR images are two noticeable obstacles which have largely restricted the performance of detection methods. To tackle the above issues, we first introduce semantic segmentation-aware CNN features to activate the detection feature maps from the lowest level layer. In conjunction with this segmentation branch, another module which consists of several global activation blocks is proposed to enrich the semantic information of feature maps from higher level layers. Then, these two parts are integrated and deployed into the original single shot detection framework. Finally, we use the modified multi-scale feature maps with enriched semantics and multi-task training strategy to achieve end-to-end detection with high efficiency. Extensive experiments and comprehensive evaluations on a publicly available 10-class object detection dataset have demonstrated the superiority of the presented method.", "title": "" }, { "docid": "99d84e588208ac09629a02a8349c560a", "text": "Psilocybin (4-phosphoryloxy-N,N-dimethyltryptamine) is the major psychoactive alkaloid of some species of mushrooms distributed worldwide. These mushrooms represent a growing problem regarding hallucinogenic drug abuse. Despite its experimental medical use in the 1960s, only very few pharmacological data about psilocybin were known until recently. Because of its still growing capacity for abuse and the widely dispersed data this review presents all the available pharmacological data about psilocybin.", "title": "" }, { "docid": "31bbb42b7b1a8723f5e37c1f93fef7be", "text": "Future 5G and Internet of Things (IoT) applications will heavily rely on long-range communication technologies such as low-power wireless area networks (LPWANs). In particular, LoRaWAN built on LoRa physical layer is gathering increasing interests, both from academia and industries, for enabling low-cost energy efficient IoT wireless sensor networks for, e.g., environmental monitoring over wide areas. While its communication range may go up to 20 kilometers, the achievable bit rates in LoRaWAN are limited to a few kilobits per second. In the event of collisions, the perceived rate is further reduced due to packet loss and retransmissions. 
Firstly, to alleviate the harmful impacts of collisions, we propose a decoding algorithm that enables to resolve several superposed LoRa signals. Our proposed method exploits the slight desynchronization of superposed signals and specific features of LoRa physical layer. Secondly, we design a full MAC protocol enabling collision resolution. The simulation results demonstrate that the proposed method outperforms conventional LoRaWAN jointly in terms of system throughput, energy efficiency as well as delay. These results show that our scheme is well suited for 5G and IoT systems, as one of their major goals is to provide the best trade-off among these performance objectives.", "title": "" }, { "docid": "6c29473469f392079fa8406419190116", "text": "The five-factor model of personality is a hierarchical organization of personality traits in terms of five basic dimensions: Extraversion, Agreeableness, Conscientiousness, Neuroticism, and Openness to Experience. Research using both natural language adjectives and theoretically based personality questionnaires supports the comprehensiveness of the model and its applicability across observers and cultures. This article summarizes the history of the model and its supporting evidence; discusses conceptions of the nature of the factors; and outlines an agenda for theorizing about the origins and operation of the factors. We argue that the model should prove useful both for individual assessment and for the elucidation of a number of topics of interest to personality psychologists.", "title": "" }, { "docid": "6b0bb5e87efacf0008918380f98cd5ae", "text": "This paper discusses Low Power Wide Area Network technologies. The purpose of this work is a presentation of these technologies in a mutual context in order to analyse their coexistence. In this work there are described Low Power Wide Area Network terms and their representatives LoRa, Sigfox and IQRF, of which characteristics, topology and some significant technics are inspected. The technologies are also compared together in a frequency spectrum in order to detect risk bands causing collisions. A potential increased risk of collisions is found around 868.2 MHz. The main contribution of this paper is a summary of characteristics, which have an influence on the resulting coexistence.", "title": "" }, { "docid": "c2baa873bc2850b14b3868cdd164019f", "text": "It is expensive to obtain labeled real-world visual data for use in training of supervised algorithms. Therefore, it is valuable to leverage existing databases of labeled data. However, the data in the source databases is often obtained under conditions that differ from those in the new task. Transfer learning provides techniques for transferring learned knowledge from a source domain to a target domain by finding a mapping between them. In this paper, we discuss a method for projecting both source and target data to a generalized subspace where each target sample can be represented by some combination of source samples. By employing a low-rank constraint during this transfer, the structure of source and target domains are preserved. This approach has three benefits. First, good alignment between the domains is ensured through the use of only relevant data in some subspace of the source domain in reconstructing the data in the target domain. Second, the discriminative power of the source domain is naturally passed on to the target domain. Third, noisy information will be filtered out during knowledge transfer. 
Extensive experiments on synthetic data, and important computer vision problems such as face recognition application and visual domain adaptation for object recognition demonstrate the superiority of the proposed approach over the existing, well-established methods.", "title": "" }, { "docid": "7359729fe4bb369798c05c8c7c258111", "text": "By considering various situations of climatologically phenomena affecting local weather conditions in various parts of the world. These weather conditions have a direct effect on crop yield. Various researches have been done exploring the connections between large-scale climatologically phenomena and crop yield. Artificial neural networks have been demonstrated to be powerful tools for modeling and prediction, to increase their effectiveness. Crop prediction methodology is used to predict the suitable crop by sensing various parameter of soil and also parameter related to atmosphere. Parameters like type of soil, PH, nitrogen, phosphate, potassium, organic carbon, calcium, magnesium, sulphur, manganese, copper, iron, depth, temperature, rainfall, humidity. For that purpose we are used artificial neural network (ANN).", "title": "" }, { "docid": "578696bf921cc5d4e831786c67845346", "text": "Identifying and monitoring multiple disease biomarkers and other clinically important factors affecting the course of a disease, behavior or health status is of great clinical relevance. Yet conventional statistical practice generally falls far short of taking full advantage of the information available in multivariate longitudinal data for tracking the course of the outcome of interest. We demonstrate a method called multi-trajectory modeling that is designed to overcome this limitation. The method is a generalization of group-based trajectory modeling. Group-based trajectory modeling is designed to identify clusters of individuals who are following similar trajectories of a single indicator of interest such as post-operative fever or body mass index. Multi-trajectory modeling identifies latent clusters of individuals following similar trajectories across multiple indicators of an outcome of interest (e.g., the health status of chronic kidney disease patients as measured by their eGFR, hemoglobin, blood CO2 levels). Multi-trajectory modeling is an application of finite mixture modeling. We lay out the underlying likelihood function of the multi-trajectory model and demonstrate its use with two examples.", "title": "" }, { "docid": "4ade01af5fd850722fd690a5d8f938f4", "text": "IT may appear blasphemous to paraphrase the title of the classic article of Vannevar Bush but it may be a mitigating factor that it is done to pay tribute to another legendary scientist, Eugene Garfield. His ideas of citationbased searching, resource discovery and quantitative evaluation of publications serve as the basis for many of the most innovative and powerful online information services these days. Bush 60 years ago contemplated – among many other things – an information workstation, the Memex. A researcher would use it to annotate, organize, link, store, and retrieve microfilmed documents. He is acknowledged today as the forefather of the hypertext system, which in turn, is the backbone of the Internet. He outlined his thoughts in an essay published in the Atlantic Monthly. Maybe because of using a nonscientific outlet the paper was hardly quoted and cited in scholarly and professional journals for 30 years. 
Understandably, the Atlantic Monthly was not covered by the few, specialized abstracting and indexing databases of scientific literature. Such general interest magazines are not source journals in either the Web of Science (WoS), or Scopus databases. However, records for items which cite the ‘As We May Think’ article of Bush (also known as the ‘Memex’ paper) are listed with appropriate bibliographic information. Google Scholar (G-S) lists the records for the Memex paper and many of its citing papers. It is a rather confusing list with many dead links or otherwise dysfunctional links, and a hodge-podge of information related to Bush. It is quite telling that (based on data from the 1945– 2005 edition of WoS) the article of Bush gathered almost 90% of all its 712 citations in WoS between 1975 and 2005, peaking in 1999 with 45 citations in that year alone. Undoubtedly, this proportion is likely to be distorted because far fewer source articles from far fewer journals were processed by the Institute for Scientific Information for 1945–1974 than for 1975–2005. Scopus identifies 267 papers citing the Bush article. The main reason for the discrepancy is that Scopus includes cited references only from 1995 onward, while WoS does so from 1945. Bush’s impatience with the limitations imposed by the traditional classification and indexing tools and practices of the time is palpable. It is worth to quote it as a reminder. Interestingly, he brings up the terms ‘web of trails’ and ‘association of thoughts’ which establishes the link between him and Garfield.", "title": "" }, { "docid": "7435d1591725bbcd86fe93c607d5683c", "text": "This study evaluated the role of breast magnetic resonance (MR) imaging in the selective study breast implant integrity. We retrospectively analysed the signs of breast implant rupture observed at breast MR examinations of 157 implants and determined the sensitivity and specificity of the technique in diagnosing implant rupture by comparing MR data with findings at surgical explantation. The linguine and the salad-oil signs were statistically the most significant signs for diagnosing intracapsular rupture; the presence of siliconomas/seromas outside the capsule and/or in the axillary lymph nodes calls for immediate explantation. In agreement with previous reports, we found a close correlation between imaging signs and findings at explantation. Breast MR imaging can be considered the gold standard in the study of breast implants. Scopo del nostro lavoro è stato quello di valutare il ruolo della risonanza magnetica (RM) mammaria nello studio selettivo dell’integrità degli impianti protesici. è stata eseguita una valutazione retrospettiva dei segni di rottura documentati all’esame RM effettuati su 157 protesi mammarie, al fine di stabilire la sensibilità e specificità nella diagnosi di rottura protesica, confrontando tali dati RM con i reperti riscontrati in sala operatoria dopo la rimozione della protesi stessa. Il linguine sign e il salad-oil sign sono risultati i segni statisticamente più significativi nella diagnosi di rottura protesica intracapsulare; la presenza di siliconomi/sieromi extracapsulari e/o nei linfonodi ascellari impone l’immediato intervento chirurgico di rimozione della protesi rotta. 
The data obtained demonstrate, in agreement with the literature, a correspondence between the imaging signs and the surgical findings, confirming the role of MR imaging as the gold standard in the study of breast implants.", "title": "" }, { "docid": "390ebc9975960ff7a817efc8412bd8da", "text": "OBJECTIVE\nPhysical activity is critical for health, yet only about half of the U.S. adult population meets basic aerobic physical activity recommendations and almost a third are inactive. Mindfulness meditation is gaining attention for its potential to facilitate health-promoting behavior and may address some limitations of existing interventions for physical activity. However, little evidence exists on mindfulness meditation and physical activity. This study assessed whether mindfulness meditation is uniquely associated with physical activity in a nationally representative sample.\n\n\nMETHOD\nCross-sectional data from the adult sample (N = 34,525) of the 2012 National Health Interview Survey were analyzed. Logistic regression models tested whether past-year use of mindfulness meditation was associated with (a) inactivity and (b) meeting aerobic physical activity recommendations, after accounting for sociodemographics, another health-promoting behavior, and 2 other types of meditation. Data were weighted to represent the U.S. civilian, noninstitutionalized adult population.\n\n\nRESULTS\nAccounting for covariates, U.S. adults who practiced mindfulness meditation in the past year were less likely to be inactive and more likely to meet physical activity recommendations. Mindfulness meditation showed stronger associations with these indices of physical activity than the 2 other types of meditation.\n\n\nCONCLUSIONS\nThese results suggest that mindfulness meditation specifically, beyond meditation in general, is associated with physical activity in U.S. adults. Future research should test whether intervening with mindfulness meditation-either as an adjunctive component or on its own-helps to increase or maintain physical activity.", "title": "" } ]
scidocsrr
d69574a38e458ef616ee2661b9e60e93
The Feature Selection Method based on Genetic Algorithm for Efficient of Text Clustering and Text Classification
[ { "docid": "286ccc898eb9bdf2aae7ed5208b1ae18", "text": "It has recently been argued that a Naive Bayesian classifier can be used to filter unsolicited bulk e-mail (“spam”). We conduct a thorough evaluation of this proposal on a corpus that we make publicly available, contributing towards standard benchmarks. At the same time we investigate the effect of attribute-set size, training-corpus size, lemmatization, and stop-lists on the filter’s performance, issues that had not been previously explored. After introducing appropriate cost-sensitive evaluation measures, we reach the conclusion that additional safety nets are needed for the Naive Bayesian anti-spam filter to be viable in practice.", "title": "" } ]
[ { "docid": "be4082d2098c2d137624f5620eb6ca42", "text": "Data-driven approaches to sequence-to-sequence modelling have been successfully applied to short text summarization of news articles. Such models are typically trained on input-summary pairs consisting of only a single or a few sentences, partially due to limited availability of multi-sentence training data. Here, we propose to use scientific articles as a new milestone for text summarization: large-scale training data come almost for free with two types of high-quality summaries at different levels the title and the abstract. We generate two novel multi-sentence summarization datasets from scientific articles and test the suitability of a wide range of existing extractive and abstractive neural network-based summarization approaches. Our analysis demonstrates that scientific papers are suitable for data-driven text summarization. Our results could serve as valuable benchmarks for scaling sequence-to-sequence models to very long sequences.", "title": "" }, { "docid": "9bec22bcbf1ab3071d65dd8b41d3cf51", "text": "Omni-directional mobile platforms have the ability to move instantaneously in any direction from any configuration. As such, it is important to have a mathematical model of the platform, especially if the platform is to be used as an autonomous vehicle. Autonomous behaviour requires that the mobile robot choose the optimum vehicle motion in different situations for object/collision avoidance and task achievement. This paper develops and verifies a mathematical model of a mobile robot platform that implements mecanum wheels to achieve omni-directionality. The mathematical model will be used to achieve optimum autonomous control of the developed mobile robot as an office service robot. Omni-directional mobile platforms have improved performance in congested environments and narrow aisles, such as those found in factory workshops, offices, warehouses, hospitals, etc.", "title": "" }, { "docid": "c0d1a0e0d297a4020c5d6fba46517e8b", "text": "The spread of information available in the World Wide Web, it appears that the pursuit of quality data is effortless and simple but it has been a significant matter of concern. Various extractors, wrappers systems with advanced techniques have been studied that retrieves the desired data from a collection of web pages. In this paper we propose a method for extracting the news content from multiple news web sites considering the occurrence of similar pattern in their representation such as date, place and the content of the news that overcomes the cost and space constraint observed in previous studies which work on single web document at a time. The method is an unsupervised web extraction technique which builds a pattern representing the structure of the pages using the extraction rules learned from the web pages by creating a ternary tree which expands when a series of common tags are found in the web pages. The pattern can then be used to extract news from other new web pages. The analysis and the results on real time web sites validate the effectiveness of our approach.", "title": "" }, { "docid": "9895b44c54c0c26f5a42d1cf67e764ed", "text": "Millimeter wave (mmWave) links will offer high capacity but are poor at penetrating into or diffracting around solid objects. Thus, we consider a hybrid cellular network with traditional sub-6 GHz macrocells coexisting with denser mmWave small cells, where a mobile user can connect to either opportunistically. 
We develop a general analytical model to characterize and derive the uplink and downlink cell association in the view of the signal-to-interference-and-noise-ratio and rate coverage probabilities in such a mixed deployment. We offer extensive validation of these analytical results (which rely on several simplifying assumptions) with simulation results. Using the analytical results, different decoupled uplink and downlink cell association strategies are investigated and their superiority is shown compared with the traditional coupled approach. Finally, small cell biasing in mmWave is studied, and we show that unprecedented biasing values are desirable due to the wide bandwidth.", "title": "" }, { "docid": "c6e0843498747096ebdafd51d4b5cca6", "text": "The use of on-body wearable sensors is widespread in several academic and industrial domains. Of great interest are their applications in ambulatory monitoring and pervasive computing systems; here, some quantitative analysis of human motion and its automatic classification are the main computational tasks to be pursued. In this paper, we discuss how human physical activity can be classified using on-body accelerometers, with a major emphasis devoted to the computational algorithms employed for this purpose. In particular, we motivate our current interest for classifiers based on Hidden Markov Models (HMMs). An example is illustrated and discussed by analysing a dataset of accelerometer time series.", "title": "" }, { "docid": "b0cfa77559c53eeed18c5d499c3208dc", "text": "We provide two distributed confidence ball algorithms for solving linear bandit problems in peer to peer networks with limited communication capabilities. For the first, we assume that all the peers are solving the same linear bandit problem, and prove that our algorithm achieves the optimal asymptotic regret rate of any centralised algorithm that can instantly communicate information between the peers. For the second, we assume that there are clusters of peers solving the same bandit problem within each cluster, and we prove that our algorithm discovers these clusters, while achieving the optimal asymptotic regret rate within each one. Through experiments on several real-world datasets, we demonstrate the performance of proposed algorithms compared to the state-of-the-art.", "title": "" }, { "docid": "b8cd2ce49efd26b08581bea5129dd663", "text": "Automotive radar sensors are applied to measure the target range, azimuth angle and radial velocity simultaneously even in multiple target situations. The single target measured data are necessary for target tracking in advanced driver assistance systems (ADAS) e.g. in highway scenarios. In typical city traffic situations the radar measurement is also important but additionally even the lateral velocity component of each detected target such as a vehicle is of large interest in this case. It is shown in this paper that the lateral velocity of an extended target can be measured even in a mono observation situation. For an automotive radar sensor a high spectral resolution is required in this case which means the time on target should be sufficiently large", "title": "" }, { "docid": "0afd0f70859772054e589a2256efeba4", "text": "Hair is typically modeled and rendered using either explicitly defined hair strand geometry or a volume texture of hair densities. 
Taken each on their own, these two hair representations have difficulties in the case of animal fur as it consists of very dense and thin undercoat hairs in combination with coarse guard hairs. Explicit hair strand geometry is not well-suited for the undercoat hairs, while volume textures are not well-suited for the guard hairs. To efficiently model and render both guard hairs and undercoat hairs, we present a hybrid technique that combines rasterization of explicitly defined guard hairs with ray marching of a prismatic shell volume with dynamic resolution. The latter is the key to practical combination of the two techniques, and it also enables a high degree of detail in the undercoat. We demonstrate that our hybrid technique creates a more detailed and soft fur appearance as compared with renderings that only use explicitly defined hair strands. Finally, our rasterization approach is based on order-independent transparency and renders high-quality fur images in seconds.", "title": "" }, { "docid": "a4a6501af9edda1f7ede81d85a0f370b", "text": "This paper discusses the development of new winding configuration for six-phase permanent-magnet (PM) machines with 18 slots and 8 poles, which eliminates and/or reduces undesirable space harmonics in the stator magnetomotive force. The proposed configuration improves power/torque density and efficiency with a reduction in eddy-current losses in the rotor permanent magnets and copper losses in end windings. To improve drive train availability for applications in electric vehicles (EVs), this paper proposes the design of a six-phase PM machine as two independent three-phase windings. A number of possible phase shifts between two sets of three-phase windings due to their slot-pole combination and winding configuration are investigated, and the optimum phase shift is selected by analyzing the harmonic distributions and their effect on machine performance, including the rotor eddy-current losses. The machine design is optimized for a given set of specifications for EVs, under electrical, thermal and volumetric constraints, and demonstrated by the experimental measurements on a prototype machine.", "title": "" }, { "docid": "4df7857714e8b5149e315666fd4badd2", "text": "Visual place recognition and loop closure is critical for the global accuracy of visual Simultaneous Localization and Mapping (SLAM) systems. We present a place recognition algorithm which operates by matching local query image sequences to a database of image sequences. To match sequences, we calculate a matrix of low-resolution, contrast-enhanced image similarity probability values. The optimal sequence alignment, which can be viewed as a discontinuous path through the matrix, is found using a Hidden Markov Model (HMM) framework reminiscent of Dynamic Time Warping from speech recognition. The state transitions enforce local velocity constraints and the most likely path sequence is recovered efficiently using the Viterbi algorithm. A rank reduction on the similarity probability matrix is used to provide additional robustness in challenging conditions when scoring sequence matches. We evaluate our approach on seven outdoor vision datasets and show improved precision-recall performance against the recently published seqSLAM algorithm.", "title": "" }, { "docid": "6696d9092ff2fd93619d7eee6487f867", "text": "We propose an accelerated stochastic block coordinate descent algorithm for nonconvex optimization under sparsity constraint in the high dimensional regime. 
The core of our algorithm is leveraging both stochastic partial gradient and full partial gradient restricted to each coordinate block to accelerate the convergence. We prove that the algorithm converges to the unknown true parameter at a linear rate, up to the statistical error of the underlying model. Experiments on both synthetic and real datasets back up our theory.", "title": "" }, { "docid": "af5f5776fcbb14b2879acbbc4c89a922", "text": "The emerging edge computing paradigm promises to deliver superior user experience and enable a wide range of Internet of Things (IoT) applications. In this work, we propose a new market-based framework for efficiently allocating resources of heterogeneous capacity-limited edge nodes (EN) to multiple competing services at the network edge. By properly pricing the geographically distributed ENs, the proposed framework generates a market equilibrium (ME) solution that not only maximizes the edge computing resource utilization but also allocates optimal (i.e., utility-maximizing) resource bundles to the services given their budget constraints. When the utility of a service is defined as the maximum revenue that the service can achieve from its resource allotment, the equilibrium can be computed centrally by solving the Eisenberg-Gale (EG) convex program drawn from the economics literature. We further show that the equilibrium allocation is Pareto-optimal and satisfies desired fairness properties including sharing incentive, proportionality, and envy-freeness. Also, two distributed algorithms are introduced, which efficiently converge to an ME. When each service aims to maximize its net profit (i.e., revenue minus cost) instead of the revenue, we derive a novel convex optimization problem and rigorously prove that its solution is exactly an ME. Extensive numerical results are presented to validate the effectiveness of the proposed techniques.", "title": "" }, { "docid": "68278896a61e13705e5ffb113487cceb", "text": "Universal Language Model for Fine-tuning [6] (ULMFiT) is one of the first NLP methods for efficient inductive transfer learning. Unsupervised pretraining results in improvements on many NLP tasks for English. In this paper, we describe a new method that uses subword tokenization to adapt ULMFiT to languages with high inflection. Our approach results in a new state-of-the-art for the Polish language, taking first place in Task 3 of PolEval’18. After further training, our final model outperformed the second best model by 35%. We have open-sourced our pretrained models and code.", "title": "" }, { "docid": "f3c0479308b50a66646a99f55d19b310", "text": "In the course of the More Electric Aircraft program, active three-phase rectifiers in the power range of several kilowatts are frequently required. It is shown that the three-phase Δ-switch rectifier (comprising three Δ-connected bidirectional switches) is well suited for this application. The system is analyzed using space vector calculus and a novel PWM current controller modulation concept is presented, where all three phases are controlled simultaneously; the analysis shows that the proposed concept yields optimal switching sequences. Analytical relationships for calculating the power components' average and rms current ratings are derived to facilitate the rectifier design. A laboratory prototype with an output power of 5 kW is built and measurements taken from this prototype confirm the operation of the proposed current controller.
Finally, initial EMI-measurements of the system are also presented.", "title": "" }, { "docid": "b484d05525e016dfc834754568030a42", "text": "This study examines the academic abilities of children and adolescents who were once diagnosed with an autism spectrum disorder, but who no longer meet diagnostic criteria for this disorder. These individuals have achieved social and language skills within the average range for their ages, receive little or no school support, and are referred to as having achieved \"optimal outcomes.\" Performance of 32 individuals who achieved optimal outcomes, 41 high-functioning individuals with a current autism spectrum disorder diagnosis (high-functioning autism), and 34 typically developing peers was compared on measures of decoding, reading comprehension, mathematical problem solving, and written expression. Groups were matched on age, sex, and nonverbal IQ; however, the high-functioning autism group scored significantly lower than the optimal outcome and typically developing groups on verbal IQ. All three groups performed in the average range on all subtests measured, and no significant differences were found in performance of the optimal outcome and typically developing groups. The high-functioning autism group scored significantly lower on subtests of reading comprehension and mathematical problem solving than the optimal outcome group. These findings suggest that the academic abilities of individuals who achieved optimal outcomes are similar to those of their typically developing peers, even in areas where individuals who have retained their autism spectrum disorder diagnoses exhibit some ongoing difficulty.", "title": "" }, { "docid": "c9b2525d34eb58130d3f8c5d68bb8714", "text": "Cloud gaming is a new way to deliver high-quality gaming experience to gamers anywhere and anytime. In cloud gaming, sophisticated game software runs on powerful servers in data centers, rendered game scenes are streamed to gamers over the Internet in real time, and the gamers use light-weight software executed on heterogeneous devices to interact with the games. Due to the proliferation of high-speed networks and cloud computing, cloud gaming has attracted tremendous attentions in both the academia and industry since late 2000's. In this paper, we survey the latest cloud gaming research from different aspects, spanning over cloud gaming platforms, optimization techniques, and commercial cloud gaming services. The readers will gain the overview of cloud gaming research and get familiar with the recent developments in this area.", "title": "" }, { "docid": "449bc62a2a92b87019b114ad6d592c02", "text": "A phase-locked clock and data recovery circuit incorporates a multiphase LC oscillator and a quarter-rate bang-bang phase detector. The oscillator is based on differential excitation of a closed-loop transmission line at evenly spaced points, providing half-quadrature phases. The phase detector employs eight flip-flops to sample the input every 12.5 ps, detecting data transitions while retiming and demultiplexing the data into four 10-Gb/s outputs. Fabricated in 0.18m CMOS technology, the circuit produces a clock jitter of 0.9 psrms and 9.67 pspp with a PRBS of2 1 while consuming 144 mW from a 2-V supply.", "title": "" }, { "docid": "e7329e4c570303eb1255b4753a063543", "text": "One of the challenges of Industry 4.0 is the creation of vertical networks that connect smart production systems with design teams, suppliers, and the front office. 
To achieve such a vision, information has to be collected from machines and products throughout a smart factory. Smart factories are defined as flexible and fully connected factories that are able to make use of constant streams of data from operations and production systems. In such scenarios, the arguably most popular way for identifying and tracking objects is by adding labels or tags, which have evolved remarkably over the last years: from pure hand-written labels to barcodes, QR codes, and RFID tags. The latest trend in this evolution is smart labels which are not only mere identifiers with some kind of internal storage, but also sophisticated context-aware tags with embedded modules that make use of wireless communications, energy efficient displays, and sensors. Therefore, smart labels go beyond identification and are able to detect and react to the surrounding environment. Moreover, when the industrial Internet of Things paradigm is applied to smart labels attached to objects, they can be identified remotely and discovered by other Industry 4.0 systems, what allows such systems to react in the presence of smart labels, thus triggering specific events or performing a number of actions on them. The amount of possible interactions is endless and creates unprecedented industrial scenarios, where items can talk to each other and with tools, machines, remote computers, or workers. This paper, after reviewing the basics of Industry 4.0 and smart labels, details the latest technologies used by them, their applications, the most relevant academic and commercial implementations, and their internal architecture and design requirements, providing researchers with the necessary foundations for developing the next generation of Industry 4.0 human-centered smart label applications.", "title": "" }, { "docid": "b5333ce046458594490b42de142dacb1", "text": "This paper presents a Maximum Current Point Tracking (MCPT) Controller for SIC MOSFET based high power solid state 2 MHz RF inverter for RF driven H- ion source. This RF Inverter is based on a class-D, half-bridge with series resonance LC topology, operating slightly above the resonance frequency (near to 2 MHz). Since plasma systems have a dynamic behavior which affects the RF antenna impedance, hence the RF antenna voltage and current changes, according to change in plasma parameters. In order to continuously yield maximum current through an antenna, it has to operate at its maximum current point, despite the inevitable changes in the antenna impedance due to changes in plasma properties. An MCPT controller simulated using LT-spice, wherein the antenna current sensed, tracked to maximum point current in a close loop by varying frequency of the voltage controlled oscillator. Thus, impedance matching network redundancy is established for maximum RF power coupling to the antenna.", "title": "" }, { "docid": "107c839a73c12606d4106af7dc04cd96", "text": "This study presents a novel four-fingered robotic hand to attain a soft contact and high stability under disturbances while holding an object. Each finger is constructed using a tendon-driven skeleton, granular materials corresponding to finger pulp, and a deformable rubber skin. This structure provides soft contact with an object, as well as high adaptation to its shape. Even if the object is deformable and fragile, a grasping posture can be formed without deforming the object. 
If the air around the granular materials in the rubber skin is vacuumed and a jamming transition occurs, the grasping posture can be fixed and the object can be grasped firmly and stably. A high grasping stability under disturbances can be attained. Additionally, the fingertips can work as a small jamming gripper to grasp an object smaller than a fingertip. An experimental investigation indicated that the proposed structure provides a high grasping force with a jamming transition and high adaptability to the object's shape.", "title": "" } ]
scidocsrr
4e3bac67202b90957932894c971ff95e
Towards native code offloading based MCC frameworks for multimedia applications: A survey
[ { "docid": "677dea61996aa5d1461998c09ecc334f", "text": "Mobile applications are becoming increasingly ubiquitous and provide ever richer functionality on mobile devices, while such applications drain increasingly more battery power of mobile devices. Offloading some parts of the application running on mobile devices onto remote servers/clouds is a promising approach to extend the battery life of mobile devices. However, as data transmission of offloading causes delay and energy costs for mobile devices, it is necessary to carefully design application partitioning/offloading schemes to weigh the benefits against the transmission delay and costs. Due to bandwidth fluctuations in the wireless environment, static partitionings in previous work are unsuitable for mobile platforms with a fixed bandwidth assumption, while dynamic partitionings result in high overhead of continuous partitioning for mobile devices. Therefore, we propose a novel partitioning scheme taking the bandwidth as a variable to improve static partitioning and avoid high costs of dynamic partitioning. Firstly, we construct application Object Relation Graphs (ORGs) by combining static analysis and dynamic profiling to propose partitioning optimization models. Then based on our novel executiontime and energy optimization partitioning models, we propose the Branch-and-Bound based Application Partitioning (BBAP) algorithm and Min-Cut based Greedy Application Partitioning (MCGAP) algorithm. BBAP is suited to finding the optimal partitioning solutions for small applications, while MCGAP is applicable to quickly obtaining suboptimal solutions for large-scale applications. Experimental results demonstrate that both algorithms can adapt to bandwidth fluctuations well, and significantly reduce application execution time and energy consumption by optimally distributing components between mobile devices and servers. & 2013 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "8869cab615e5182c7c03f074ead081f7", "text": "This article introduces the principal concepts of multimedia cloud computing and presents a novel framework. We address multimedia cloud computing from multimedia-aware cloud (media cloud) and cloud-aware multimedia (cloud media) perspectives. First, we present a multimedia-aware cloud, which addresses how a cloud can perform distributed multimedia processing and storage and provide quality of service (QoS) provisioning for multimedia services. To achieve a high QoS for multimedia services, we propose a media-edge cloud (MEC) architecture, in which storage, central processing unit (CPU), and graphics processing unit (GPU) clusters are presented at the edge to provide distributed parallel processing and QoS adaptation for various types of devices.", "title": "" } ]
[ { "docid": "9973dab94e708f3b87d52c24b8e18672", "text": "We show that two popular discounted reward natural actor-critics, NAC-LSTD and eNAC, follow biased estimates of the natural policy gradient. We derive the first unbiased discounted reward natural actor-critics using batch and iterative approaches to gradient estimation and prove their convergence to globally optimal policies for discrete problems and locally optimal policies for continuous problems. Finally, we argue that the bias makes the existing algorithms more appropriate for the average reward setting.", "title": "" }, { "docid": "4ab8913fff86d8a737ed62c56fe2b39d", "text": "This paper draws on the social and behavioral sciences in an endeavor to specify the nature and microfoundations of the capabilities necessary to sustain superior enterprise performance in an open economy with rapid innovation and globally dispersed sources of invention, innovation, and manufacturing capability. Dynamic capabilities enable business enterprises to create, deploy, and protect the intangible assets that support superior longrun business performance. The microfoundations of dynamic capabilities—the distinct skills, processes, procedures, organizational structures, decision rules, and disciplines—which undergird enterprise-level sensing, seizing, and reconfiguring capacities are difficult to develop and deploy. Enterprises with strong dynamic capabilities are intensely entrepreneurial. They not only adapt to business ecosystems, but also shape them through innovation and through collaboration with other enterprises, entities, and institutions. The framework advanced can help scholars understand the foundations of long-run enterprise success while helping managers delineate relevant strategic considerations and the priorities they must adopt to enhance enterprise performance and escape the zero profit tendency associated with operating in markets open to global competition. Copyright  2007 John Wiley & Sons, Ltd.", "title": "" }, { "docid": "71333997a4f9f38de0b53697d7b7cff1", "text": "Environmental sustainability of a supply chain depends on the purchasing strategy of the supply chain members. Most of the earlier models have focused on cost, quality, lead time, etc. issues but not given enough importance to carbon emission for supplier evaluation. Recently, there is a growing pressure on supply chain members for reducing the carbon emission of their supply chain. This study presents an integrated approach for selecting the appropriate supplier in the supply chain, addressing the carbon emission issue, using fuzzy-AHP and fuzzy multi-objective linear programming. Fuzzy AHP (FAHP) is applied first for analyzing the weights of the multiple factors. The considered factors are cost, quality rejection percentage, late delivery percentage, green house gas emission and demand. These weights of the multiple factors are used in fuzzy multi-objective linear programming for supplier selection and quota allocation. An illustration with a data set from a realistic situation is presented to demonstrate the effectiveness of the proposed model. The proposed approach can handle realistic situation when there is information vagueness related to inputs. 2012 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "b803d626421c7e7eaf52635c58523e8f", "text": "Force-directed algorithms are among the most flexible methods for calculating layouts of simple undirected graphs. 
Also known as spring embedders, such algorithms calculate the layout of a graph using only information contained within the structure of the graph itself, rather than relying on domain-specific knowledge. Graphs drawn with these algorithms tend to be aesthetically pleasing, exhibit symmetries, and tend to produce crossing-free layouts for planar graphs. In this survey we consider several classical algorithms, starting from Tutte’s 1963 barycentric method, and including recent scalable multiscale methods for large and dynamic graphs.", "title": "" }, { "docid": "a50763db7b9c73ab5e29389d779c343d", "text": "Near to real-time emotion recognition is a promising task for human-computer interaction (HCI) and human-robot interaction (HRI). Using knowledge about the user's emotions depends upon the possibility to extract information about users' emotions during HCI or HRI without explicitly asking users about the feelings they are experiencing. To be able to sense the user's emotions without interrupting the HCI, we present a new method applied to the emotional experience of the user for extracting semantic information from the autonomic nervous system (ANS) signals associated with emotions. We use the concepts of 1st person - where the subject consciously (and subjectively) extracts the semantic meaning of a given lived experience, (e.g. `I felt amused') - and 3rd person approach - where the experimenter interprets the semantic meaning of the subject's experience from a set of externally (and objectively) measured variables (e.g. galvanic skin response measures). Based on the 3rd person approach, our technique aims at psychologically interpreting physiological parameters (skin conductance and heart rate), and at producing a continuous extraction of the user's affective state during HCI or HRI. We also combine it with the 1st person approach measure which allows a tailored interpretation of the physiological measure closely related to the user own emotional experience", "title": "" }, { "docid": "99bd908e217eb9f56c40abd35839e9b3", "text": "How does the physical structure of an arithmetic expression affect the computational processes engaged in by reasoners? In handwritten arithmetic expressions containing both multiplications and additions, terms that are multiplied are often placed physically closer together than terms that are added. Three experiments evaluate the role such physical factors play in how reasoners construct solutions to simple compound arithmetic expressions (such as \"2 + 3 × 4\"). Two kinds of influence are found: First, reasoners incorporate the physical size of the expression into numerical responses, tending to give larger responses to more widely spaced problems. Second, reasoners use spatial information as a cue to hierarchical expression structure: More narrowly spaced subproblems within an expression tend to be solved first and tend to be multiplied. Although spatial relationships besides order are entirely formally irrelevant to expression semantics, reasoners systematically use these relationships to support their success with various formal properties.", "title": "" }, { "docid": "9c25a2e343e9e259a9881fd13983c150", "text": "Advances in cognitive, affective, and social neuroscience raise a host of new questions concerning the ways in which neuroscience can and should be used. These advances also challenge our intuitions about the nature of humans as moral and spiritual beings. Neuroethics is the new field that grapples with these issues. 
The present article surveys a number of applications of neuroscience to such diverse arenas as marketing, criminal justice, the military, and worker productivity. The ethical, legal, and societal effects of these applications are discussed. Less practical, but perhaps ultimately more consequential, is the impact of neuroscience on our worldview and our understanding of the human person.", "title": "" }, { "docid": "10fd3a7acae83f698ad04c4d0f011600", "text": "A continuous-rate digital clock and data recovery (CDR) with automatic frequency acquisition is presented. The proposed automatic frequency acquisition scheme implemented using a conventional bang-bang phase detector (BBPD) requires minimum additional hardware, is immune to input data transition density, and is applicable to subrate CDRs. A ring-oscillator-based two-stage fractional-N phase-locked loop (PLL) is used as a digitally controlled oscillator (DCO) to achieve wide frequency range, low noise, and to decouple the tradeoff between jitter transfer (JTRAN) bandwidth and ring oscillator noise suppression in conventional CDRs. The CDR is implemented using a digital D/PLL architecture to decouple JTRAN bandwidth from jitter tolerance (JTOL) corner frequency, eliminate jitter peaking, and remove JTRAN dependence on BBPD gain. Fabricated in a 65 nm CMOS process, the prototype CDR achieves error-free operation (BER < 10^-12) from 4 to 10.5 Gb/s with pseudorandom binary sequence (PRBS) data sequences ranging from PRBS7 to PRBS31. The proposed automatic frequency acquisition scheme always locks the CDR loop within 1000 ppm residual frequency error in worst case. At 10 Gb/s, the CDR consumes 22.5 mW power and achieves a recovered clock long-term jitter of 2.2 ps rms/24.0 ps pp with PRBS31 input data. The measured JTRAN bandwidth and JTOL corner frequencies are 0.2 and 9 MHz, respectively.", "title": "" }, { "docid": "509fa5630ed7e3e7bd914fb474da5071", "text": "Languages with rich type systems are beginning to employ a blend of type inference and type checking, so that the type inference engine is guided by programmer-supplied type annotations. In this paper we show, for the first time, how to combine the virtues of two well-established ideas: unification-based inference, and bidirectional propagation of type annotations. The result is a type system that conservatively extends Hindley-Milner, and yet supports both higher-rank types and impredicativity.", "title": "" }, { "docid": "ed5185ea36f61a9216c6f0183b81d276", "text": "Blockchain technology enables the creation of a decentralized environment where transactions and data are not under the control of any third party organization. Any transaction ever completed is recorded in a public ledger in a verifiable and permanent way. Based on blockchain technology, we propose a global higher education credit platform, named EduCTX. This platform is based on the concept of the European Credit Transfer and Accumulation System (ECTS). It constitutes a globally trusted, decentralized higher education credit and grading system that can offer a globally unified viewpoint for students and higher education institutions (HEIs), as well as for other potential stakeholders such as companies, institutions and organizations. As a proof of concept, we present a prototype implementation of the environment, based on the open-source Ark Blockchain Platform.
Based on a globally distributed peer-to-peer network, EduCTX will process, manage and control ECTX tokens, which represent credits that students gain for completed courses such as ECTS. HEIs are the peers of the blockchain network. The platform is a first step towards a more transparent and technologically advanced form of higher education systems. The EduCTX platform represents the basis of the EduCTX initiative which anticipates that various HEIs would join forces in order to create a globally efficient, simplified and ubiquitous environment in order to avoid language and administrative barriers. Therefore we invite and encourage HEIs to join the EduCTX initiative and the EduCTX blockchain network.", "title": "" }, { "docid": "7c0677ad61691beecd7f89d5c70f2b5b", "text": "Bidirectional dc-dc converters (BDC) have recently received a lot of attention due to the increasing need to systems with the capability of bidirectional energy transfer between two dc buses. Apart from traditional application in dc motor drives, new applications of BDC include energy storage in renewable energy systems, fuel cell energy systems, hybrid electric vehicles (HEV) and uninterruptible power supplies (UPS). The fluctuation nature of most renewable energy resources, like wind and solar, makes them unsuitable for standalone operation as the sole source of power. A common solution to overcome this problem is to use an energy storage device besides the renewable energy resource to compensate for these fluctuations and maintain a smooth and continuous power flow to the load. As the most common and economical energy storage devices in medium-power range are batteries and super-capacitors, a dc-dc converter is always required to allow energy exchange between storage device and the rest of system. Such a converter must have bidirectional power flow capability with flexible control in all operating modes. In HEV applications, BDCs are required to link different dc voltage buses and transfer energy between them. For example, a BDC is used to exchange energy between main batteries (200-300V) and the drive motor with 500V dc link. High efficiency, lightweight, compact size and high reliability are some important requirements for the BDC used in such an application. BDCs also have applications in line-interactive UPS which do not use double conversion technology and thus can achieve higher efficiency. In a line-interactive UPS, the UPS output terminals are connected to the grid and therefore energy can be fed back to the inverter dc bus and charge the batteries via a BDC during normal mode. In backup mode, the battery feeds the inverter dc bus again via BDC but in reverse power flow direction. BDCs can be classified into non-isolated and isolated types. Non-isolated BDCs (NBDC) are simpler than isolated BDCs (IBDC) and can achieve better efficiency. However, galvanic isolation is required in many applications and mandated by different standards. 
The", "title": "" }, { "docid": "1f752034b5307c0118d4156d0b95eab3", "text": "Importance\nTherapy-related myeloid neoplasms are a potentially life-threatening consequence of treatment for autoimmune disease (AID) and an emerging clinical phenomenon.\n\n\nObjective\nTo query the association of cytotoxic, anti-inflammatory, and immunomodulating agents to treat patients with AID with the risk for developing myeloid neoplasm.\n\n\nDesign, Setting, and Participants\nThis retrospective case-control study and medical record review included 40 011 patients with an International Classification of Diseases, Ninth Revision, coded diagnosis of primary AID who were seen at 2 centers from January 1, 2004, to December 31, 2014; of these, 311 patients had a concomitant coded diagnosis of myelodysplastic syndrome (MDS) or acute myeloid leukemia (AML). Eighty-six cases met strict inclusion criteria. A case-control match was performed at a 2:1 ratio.\n\n\nMain Outcomes and Measures\nOdds ratio (OR) assessment for AID-directed therapies.\n\n\nResults\nAmong the 86 patients who met inclusion criteria (49 men [57%]; 37 women [43%]; mean [SD] age, 72.3 [15.6] years), 55 (64.0%) had MDS, 21 (24.4%) had de novo AML, and 10 (11.6%) had AML and a history of MDS. Rheumatoid arthritis (23 [26.7%]), psoriasis (18 [20.9%]), and systemic lupus erythematosus (12 [14.0%]) were the most common autoimmune profiles. Median time from onset of AID to diagnosis of myeloid neoplasm was 8 (interquartile range, 4-15) years. A total of 57 of 86 cases (66.3%) received a cytotoxic or an immunomodulating agent. In the comparison group of 172 controls (98 men [57.0%]; 74 women [43.0%]; mean [SD] age, 72.7 [13.8] years), 105 (61.0%) received either agent (P = .50). Azathioprine sodium use was observed more frequently in cases (odds ratio [OR], 7.05; 95% CI, 2.35- 21.13; P < .001). Notable but insignificant case cohort use among cytotoxic agents was found for exposure to cyclophosphamide (OR, 3.58; 95% CI, 0.91-14.11) followed by mitoxantrone hydrochloride (OR, 2.73; 95% CI, 0.23-33.0). Methotrexate sodium (OR, 0.60; 95% CI, 0.29-1.22), mercaptopurine (OR, 0.62; 95% CI, 0.15-2.53), and mycophenolate mofetil hydrochloride (OR, 0.66; 95% CI, 0.21-2.03) had favorable ORs that were not statistically significant. No significant association between a specific length of time of exposure to an agent and the drug's category was observed.\n\n\nConclusions and Relevance\nIn a large population with primary AID, azathioprine exposure was associated with a 7-fold risk for myeloid neoplasm. The control and case cohorts had similar systemic exposures by agent category. No association was found for anti-tumor necrosis factor agents. Finally, no timeline was found for the association of drug exposure with the incidence in development of myeloid neoplasm.", "title": "" }, { "docid": "c451d86c6986fab1a1c4cd81e87e6952", "text": "Large-scale is a trend in person re-identi- fication (re-id). It is important that real-time search be performed in a large gallery. While previous methods mostly focus on discriminative learning, this paper makes the attempt in integrating deep learning and hashing into one framework to evaluate the efficiency and accuracy for large-scale person re-id. We integrate spatial information for discriminative visual representation by partitioning the pedestrian image into horizontal parts. Specifically, Part-based Deep Hashing (PDH) is proposed, in which batches of triplet samples are employed as the input of the deep hashing architecture. 
Each triplet sample contains two pedestrian images (or parts) with the same identity and one pedestrian image (or part) of the different identity. A triplet loss function is employed with a constraint that the Hamming distance of pedestrian images (or parts) with the same identity is smaller than ones with the different identity. In the experiment, we show that the proposed PDH method yields very competitive re-id accuracy on the large-scale Market-1501 and Market-1501+500K datasets.", "title": "" }, { "docid": "a1018c89d326274e4b71ffc42f4ebba2", "text": "We describe a method for improving the classification of short text strings using a combination of labeled training data plus a secondary corpus of unlabeled but related longer documents. We show that such unlabeled background knowledge can greatly decrease error rates, particularly if the number of examples or the size of the strings in the training set is small. This is particularly useful when labeling text is a labor-intensive job and when there is a large amount of information available about a particular problem on the World Wide Web. Our approach views the task as one of information integration using WHIRL, a tool that combines database functionalities with techniques from the information-retrieval literature.", "title": "" }, { "docid": "b770124e1e5a7b4161b7f00a9bf3916f", "text": "In the biomedical domain large amount of text documents are unstructured information is available in digital text form. Text Mining is the method or technique to find for interesting and useful information from unstructured text. Text Mining is also an important task in medical domain. The technique uses for Information retrieval, Information extraction and natural language processing (NLP). Traditional approaches for information retrieval are based on key based similarity. These approaches are used to overcome these problems; Semantic text mining is to discover the hidden information from unstructured text and making relationships of the terms occurring in them. In the biomedical text, the text should be in the form of text which can be present in the books, articles, literature abstracts, and so forth. Most of information is stored in the text format, so in this paper we will focus on the role of ontology for semantic text mining by using WordNet. Specifically, we have presented a model for extracting concepts from text documents using linguistic ontology in the domain of medical.", "title": "" }, { "docid": "e090bb879e35dbabc5b3c77c98cd6832", "text": "Immunity of analog circuit blocks is becoming a major design risk. This paper presents an automated methodology to simulate the susceptibility of a circuit during the design phase. More specifically, we propose a CAD tool which determines the fail/pass criteria of a signal under direct power injection (DPI). This contribution describes the function of the tool which is validated by a LDO regulator.", "title": "" }, { "docid": "585c589cdab52eaa63186a70ac81742d", "text": "BACKGROUND\nThere has been a rapid increase in the use of technology-based activity trackers to promote behavior change. 
However, little is known about how individuals use these trackers on a day-to-day basis or how tracker use relates to increasing physical activity.\n\n\nOBJECTIVE\nThe aims were to use minute level data collected from a Fitbit tracker throughout a physical activity intervention to examine patterns of Fitbit use and activity and their relationships with success in the intervention based on ActiGraph-measured moderate to vigorous physical activity (MVPA).\n\n\nMETHODS\nParticipants included 42 female breast cancer survivors randomized to the physical activity intervention arm of a 12-week randomized controlled trial. The Fitbit One was worn daily throughout the 12-week intervention. ActiGraph GT3X+ accelerometer was worn for 7 days at baseline (prerandomization) and end of intervention (week 12). Self-reported frequency of looking at activity data on the Fitbit tracker and app or website was collected at week 12.\n\n\nRESULTS\nAdherence to wearing the Fitbit was high and stable, with a mean of 88.13% of valid days over 12 weeks (SD 14.49%). Greater adherence to wearing the Fitbit was associated with greater increases in ActiGraph-measured MVPA (binteraction=0.35, P<.001). Participants averaged 182.6 minutes/week (SD 143.9) of MVPA on the Fitbit, with significant variation in MVPA over the 12 weeks (F=1.91, P=.04). The majority (68%, 27/40) of participants reported looking at their tracker or looking at the Fitbit app or website once a day or more. Changes in Actigraph-measured MVPA were associated with frequency of looking at one's data on the tracker (b=-1.36, P=.07) but not significantly associated with frequency of looking at one's data on the app or website (P=.36).\n\n\nCONCLUSIONS\nThis is one of the first studies to explore the relationship between use of a commercially available activity tracker and success in a physical activity intervention. A deeper understanding of how individuals engage with technology-based trackers may enable us to more effectively use these types of trackers to promote behavior change.\n\n\nTRIAL REGISTRATION\nClinicalTrials.gov NCT02332876; https://clinicaltrials.gov/ct2/show/NCT02332876?term=NCT02332876 &rank=1 (Archived by WebCite at http://www.webcitation.org/6wplEeg8i).", "title": "" }, { "docid": "7e32376722b669d592a4a97fc1d6bf89", "text": "The main challenge in achieving good image morphs is to create a map that aligns corresponding image elements. Our aim is to help automate this often tedious task. We compute the map by optimizing the compatibility of corresponding warped image neighborhoods using an adaptation of structural similarity. The optimization is regularized by a thin-plate spline and may be guided by a few user-drawn points. We parameterize the map over a halfway domain and show that this representation offers many benefits. The map is able to treat the image pair symmetrically, model simple occlusions continuously, span partially overlapping images, and define extrapolated correspondences. Moreover, it enables direct evaluation of the morph in a pixel shader without mesh rasterization. We improve the morphs by optimizing quadratic motion paths and by seamlessly extending content beyond the image boundaries. We parallelize the algorithm on a GPU to achieve a responsive interface and demonstrate challenging morphs obtained with little effort.", "title": "" }, { "docid": "ef584ca8b3e9a7f8335549927df1dc16", "text": "Rapid evolution in technology and the internet brought us to the era of online services. 
E-commerce is nothing but trading goods or services online. Many customers share their good or bad opinions about products or services online nowadays. These opinions become a part of the consumer's decision-making process and have an impact on the provider's business model. Also, understanding and considering reviews helps to gain the trust of customers, which in turn helps to expand the business. Many users give reviews for a single product, and such thousands of reviews can be analyzed effectively using big data. The results can be presented in a convenient visual form for the non-technical user. Thus, the primary goal of this research work is the classification of customer reviews given for a product in the map-reduce framework.", "title": "" }, { "docid": "1de2d4e5b74461c142e054ffd2e62c2d", "text": "Table: Comparisons of CNN, LSTM and SWEM architectures. Columns correspond to the number of compositional parameters, computational complexity and sequential operations, respectively. Consider a text sequence represented as X, composed of a sequence of words. Let {v_1, v_2, ..., v_L} denote the respective word embeddings for each token, where L is the sentence/document length. The compositional function, X → z, aims to combine word embeddings into a fixed-length sentence/document representation z. Typically, LSTM or CNN are employed for this purpose.", "title": "" } ]
scidocsrr
3dfe36e339acee9c61b60e67120c2bf3
HashTran-DNN: A Framework for Enhancing Robustness of Deep Neural Networks against Adversarial Malware Samples
[ { "docid": "de018dc74dd255cf54d9c5597a1f9f73", "text": "Smoothness regularization is a popular method to decrease generalization error. We propose a novel regularization technique that rewards local distributional smoothness (LDS), a KLdistance based measure of the model’s robustness against perturbation. The LDS is defined in terms of the direction to which the model distribution is most sensitive in the input space. We call the training with LDS regularization virtual adversarial training (VAT). VAT resembles the adversarial training (Goodfellow et al., 2015), but distinguishes itself in that it determines the adversarial direction from the model distribution alone, and does not use the label information. The technique is therefore applicable even to semi-supervised learning. When we applied our technique to the classification task of the permutation invariant MNIST dataset, it not only eclipsed all the models that are not dependent on generative models and pre-training, but also performed well even in comparison to the state of the art method (Rasmus et al., 2015) that uses a highly advanced generative model.", "title": "" }, { "docid": "8bf9fa7c100d195b0b59713a9fe28dcd", "text": "With smart phones being indispensable in people's everyday life, Android malware has posed serious threats to their security, making its detection of utmost concern. To protect legitimate users from the evolving Android malware attacks, machine learning-based systems have been successfully deployed and offer unparalleled flexibility in automatic Android malware detection. In these systems, based on different feature representations, various kinds of classifiers are constructed to detect Android malware. Unfortunately, as classifiers become more widely deployed, the incentive for defeating them increases. In this paper, we explore the security of machine learning in Android malware detection on the basis of a learning-based classifier with the input of a set of features extracted from the Android applications (apps). We consider different importances of the features associated with their contributions to the classification problem as well as their manipulation costs, and present a novel feature selection method (named SecCLS) to make the classifier harder to be evaded. To improve the system security while not compromising the detection accuracy, we further propose an ensemble learning approach (named SecENS) by aggregating the individual classifiers that are constructed using our proposed feature selection method SecCLS. Accordingly, we develop a system called SecureDroid which integrates our proposed methods (i.e., SecCLS and SecENS) to enhance security of machine learning-based Android malware detection. Comprehensive experiments on the real sample collections from Comodo Cloud Security Center are conducted to validate the effectiveness of SecureDroid against adversarial Android malware attacks by comparisons with other alternative defense methods. Our proposed secure-learning paradigm can also be readily applied to other malware detection tasks.", "title": "" } ]
[ { "docid": "e9e7cb42ed686ace9e9785fafd3c72f8", "text": "We present a fully automated multimodal medical image matching technique. Our method extends the concepts used in the computer vision SIFT technique for extracting and matching distinctive scale invariant features in 2D scalar images to scalar images of arbitrary dimensionality. This extension involves using hyperspherical coordinates for gradients and multidimensional histograms to create the feature vectors. These features were successfully applied to determine accurate feature point correspondence between pairs of medical images (3D) and dynamic volumetric data (3D+time).", "title": "" }, { "docid": "a1332b94cf217fec5e3a51fe45b9ed4e", "text": "There is large voltage deviation on the dc bus of the three-stage solid-state transformer (SST) when the load suddenly changes. The feed-forward control can effectively reduce the voltage deviation and transition time. However, conventional power feed-forward scheme of SST cannot develop the feed-forward control to the full without extra current sensor. In this letter, an energy feed-forward scheme, which takes the energy changes of inductors into consideration, is proposed for the dual active bridge (DAB) controller. A direct feed-forward scheme, which directly passes the power of DAB converter to the rectifier stage, is proposed for the rectifier controller. They can further improve the dynamic performances of the two dc bus voltages, respectively. The experimental results in a 2-kW SST prototype are provided to verify the proposed feed-forward schemes and show the superior performances.", "title": "" }, { "docid": "38281a48838b32f603287520102c68f9", "text": "PURPOSE\nTo evaluate the corneal endothelial changes in patients with chronic renal failure.\n\n\nMETHODS\nA total of 128 corneas of 128 subjects were studied, and 3 groups were formed. The first, the dialyzed group, composed of 32 corneas of 32 patients; the second, the nondialyzed group, composed of 34 corneas of 34 patients; and the third, the age-matched control group, composed of 64 corneas of 64 healthy subjects were examined by a specular microscope and the endothelial parameters were compared. The dialyzed group (enhanced level of toxins in the blood) was further analyzed to assess the influence of blood urea, serum creatinine, serum calcium, and serum phosphorus including the duration of dialysis on corneal endothelium.\n\n\nRESULTS\nOn comparing the 3 groups using analysis of variance and posthoc tests, a significant difference was found in the central corneal thickness (CCT) and endothelial cell density (CD) between the control (CCT: 506 ± 29 μm, CD: 2760 ± 304 cells/mm) and dialyzed groups (CCT: 549 ± 30 μm, CD: 2337 ± 324 cells/mm) [P < 0.001 (CCT); P < 0.001 (CD)]; control and nondialyzed groups (CCT: 524 ± 27 μm, CD: 2574 ± 260 cells/mm) [P = 0.023 (CCT); P = 0.016 (CD)]; and dialyzed and nondialyzed groups [P = 0.002 (CCT); P = 0.007 (CD)]. 
Using the linear generalized model, a significant correlation was found between the endothelial parameters and blood urea only [P = 0.006 (CCT), 0.002 (coefficient of variation), 0.022 (CD), and 0.026 (percentage of hexagonality)], although the correlation was poorly positive for CCT but poorly negative for the remaining endothelial parameters.\n\n\nCONCLUSIONS\nCorneal endothelial alteration is present in patients with chronic renal failure, more marked in patients undergoing hemodialysis and with raised blood urea level.", "title": "" }, { "docid": "c5e37e68f7a7ce4b547b10a1888cf36f", "text": "SciDB [4, 3] is a new open-source data management system intended primarily for use in application domains that involve very large (petabyte) scale array data; for example, scientific applications such as astronomy, remote sensing and climate modeling, bio-science information management, risk management systems in financial applications, and the analysis of web log data. In this talk we will describe our set of motivating examples and use them to explain the features of SciDB. We then briefly give an overview of the project 'in flight', explaining our novel storage manager, array data model, query language, and extensibility frameworks.", "title": "" }, { "docid": "342e3fd05878ebff3bc2686fb05009f5", "text": "Due to a rapid advancement in the electronic commerce technology, use of credit cards has dramatically increased. As credit card becomes the most popular mode of payment, credit card frauds are becoming increasingly rampant in recent years. In this paper, we model the sequence of operations in credit card transaction processing using a confidence-based neural network. Receiver operating characteristic (ROC) analysis technology is also introduced to ensure the accuracy and effectiveness of fraud detection. A neural network is initially trained with synthetic data. If an incoming credit card transaction is not accepted by the trained neural network model (NNM) with sufficiently low confidence, it is considered to be fraudulent. This paper shows how confidence value, neural network algorithm and ROC can be combined successfully to perform credit card fraud detection.", "title": "" }, { "docid": "285a1c073ec4712ac735ab84cbcd1fac", "text": "During a survey of black yeasts of marine origin, some isolates of Hortaea werneckii were recovered from scuba diving equipment, such as silicone masks and snorkel mouthpieces, which had been kept under poor storage conditions. These yeasts were unambiguously identified by phenotypic and genotypic methods. Phylogenetic analysis of both the D1/D2 regions of 26S rRNA gene and ITS-5.8S rRNA gene sequences showed three distinct genetic types. This species is the agent of tinea nigra which is a rarely diagnosed superficial mycosis in Europe. In fact this mycosis is considered an imported fungal infection being much more prevalent in warm, humid parts of the world such as the Central and South Americas, Africa, and Asia. Although H. werneckii has been found in hypersaline environments in Europe, this is the first instance of the isolation of this halotolerant species from scuba diving equipment made with silicone rubber which is used in close contact with human skin and mucous membranes. 
The occurrence of this fungus in Spain is also an unexpected finding because cases of tinea nigra in this country are practically not seen.", "title": "" }, { "docid": "68e714e5a3e92924c63167781149e628", "text": "This paper presents a millimeter wave wideband differential line to waveguide transition using a short ended slot line. The slot line connected in parallel to the rectangular waveguide can effectively compensate the frequency dependence of the susceptance in the waveguide. Thus it is suitable to achieve a wideband characteristic together with a simpler structure. It is experimentally demonstrated that the proposed transitions have the relative bandwidth of 20.2 % with respect to -10 dB reflection, which is a significant wideband characteristic compared with the conventional transition's bandwidth of 11%.", "title": "" }, { "docid": "3c5e565f4b20f2b7abb7172dcb4cbaad", "text": "Imitation from observation (IfO) is the problem of learning directly from state-only demonstrations without having access to the demonstrator’s actions. The lack of action information both distinguishes IfO from most of the literature in imitation learning, and also sets it apart as a method that may enable agents to learn from a large set of previously inapplicable resources such as internet videos. In this paper, we propose both a general framework for IfO approaches and also a new IfO approach based on generative adversarial networks called generative adversarial imitation from observation (GAIfO). We conduct experiments in two different settings: (1) when demonstrations consist of low-dimensional, manually-defined state features, and (2) when demonstrations consist of high-dimensional, raw visual data. We demonstrate that our approach performs comparably to classical imitation learning approaches (which have access to the demonstrator’s actions) and significantly outperforms existing imitation from observation methods in highdimensional simulation environments.", "title": "" }, { "docid": "e4914b41b7d38ff04b0e5a9b88cf1dc6", "text": "In this paper, we investigate the secure nearest neighbor (SNN) problem, in which a client issues an encrypted query point E(q) to a cloud service provider and asks for an encrypted data point in E(D) (the encrypted database) that is closest to the query point, without allowing the server to learn the plaintexts of the data or the query (and its result). We show that efficient attacks exist for existing SNN methods [21], [15], even though they were claimed to be secure in standard security models (such as indistinguishability under chosen plaintext or ciphertext attacks). We also establish a relationship between the SNN problem and the order-preserving encryption (OPE) problem from the cryptography field [6], [5], and we show that SNN is at least as hard as OPE. Since it is impossible to construct secure OPE schemes in standard security models [6], [5], our results imply that one cannot expect to find the exact (encrypted) nearest neighbor based on only E(q) and E(D). Given this hardness result, we design new SNN methods by asking the server, given only E(q) and E(D), to return a relevant (encrypted) partition E(G) from E(D) (i.e., G ⊆ D), such that that E(G) is guaranteed to contain the answer for the SNN query. 
Our methods provide customizable tradeoff between efficiency and communication cost, and they are as secure as the encryption scheme E used to encrypt the query and the database, where E can be any well-established encryption schemes.", "title": "" }, { "docid": "3f43c2eaa993dc2d84d563fee3ea52a0", "text": "Finding optimal solutions to NP-Hard problems requires exponential time with respect to the size of the problem. Consequently, heuristic methods are usually utilized to obtain approximate solutions to problems of such difficulty. In this paper, a novel swarm-based nature-inspired metaheuristic algorithm for optimization is proposed. Inspired by human collective intelligence, Wisdom of Artificial Crowds (WoAC) algorithm relies on a group of simulated intelligent agents to arrive at independent solutions aggregated to produce a solution which in many cases is superior to individual solutions of all participating agents. We illustrate superior performance of WoAC by comparing it against another bio-inspired approach, the Genetic Algorithm, on one of the classical NP-Hard problems, the Travelling Salesperson Problem. On average a 3-10% improvement in quality of solutions is observed with little computational overhead.", "title": "" }, { "docid": "1cefbe0177c56d92e34c4b5a88a29099", "text": "Typical tasks of future service robots involve grasping and manipulating a large variety of objects differing in size and shape. Generating stable grasps on 3D objects is considered to be a hard problem, since many parameters such as hand kinematics, object geometry, material properties and forces have to be taken into account. This results in a high-dimensional space of possible grasps that cannot be searched exhaustively. We believe that the key to find stable grasps in an efficient manner is to use a special representation of the object geometry that can be easily analyzed. In this paper, we present a novel grasp planning method that evaluates local symmetry properties of objects to generate only candidate grasps that are likely to be of good quality. We achieve this by computing the medial axis which represents a 3D object as a union of balls. We analyze the symmetry information contained in the medial axis and use a set of heuristics to generate geometrically and kinematically reasonable candidate grasps. These candidate grasps are tested for force-closure. We present the algorithm and show experimental results on various object models using an anthropomorphic hand of a humanoid robot in simulation.", "title": "" }, { "docid": "5c48c8a2a20408775f5eaf4f575d5031", "text": "In this paper we present a computational cognitive model of task interruption and resumption, focusing on the effects of the problem state bottleneck. Previous studies have shown that the disruptiveness of interruptions is for an important part determined by three factors: interruption duration, interrupting-task complexity, and moment of interruption. However, an integrated theory of these effects is still missing. Based on previous research into multitasking, we propose a first step towards such a theory in the form of a process model that attributes these effects to problem state requirements of both the interrupted and the interrupting task. Subsequently, we tested two predictions of this model in two experiments. The experiments confirmed that problem state requirements are an important predictor for the disruptiveness of interruptions. 
This suggests that interfaces should be designed to a) interrupt users at low-problem state moments and b) maintain the problem state for the user when interrupted.", "title": "" }, { "docid": "735cc7f7b067175705cb605affd7f06e", "text": "This paper presents a design, simulation, implementation and measurement of a novel microstrip meander patch antenna for the application of sensor networks. The dimension of the microstrip chip antenna is 15 mm times 15 mm times 2 mm. The meander-type radiating patch is constructed on the upper layer of the 2 mm height substrate with 0.0 5 mm height metallic conduct lines. Because of using the very high relative permittivity substrate ( epsivr=90), the proposed antenna achieves 315 MHz band operations.", "title": "" }, { "docid": "38d0f9ecdf338997d5f82c13614bc88f", "text": "Multimedia understanding is a fast emerging interdisciplinary research area. There is tremendous potential for effective use of multimedia content through intelligent analysis. Diverse application areas are increasingly relying on multimedia understanding systems. Advances in multimedia understanding are related directly to advances in signal processing, computer vision, pattern recognition, multimedia databases, and smart sensors. We review the state-of-the-art techniques in multimedia retrieval. In particular, we discuss how multimedia retrieval can be viewed as a pattern recognition problem. We discuss how reliance on powerful pattern recognition and machine learning techniques is increasing in the field of multimedia retrieval. We review the state-of-the-art multimedia understanding systems with particular emphasis on a system for semantic video indexing centered around multijects and multinets. We discuss how semantic retrieval is centered around concepts and context and the various mechanisms for modeling concepts and context.", "title": "" }, { "docid": "5445892bdf8478cfacac9d599dead1f9", "text": "The problem of determining feature correspondences across multiple views is considered. The term \"true multi-image\" matching is introduced to describe techniques that make full and efficient use of the geometric relationships between multiple images and the scene. A true multi-image technique must generalize to any number of images, be of linear algorithmic complexity in the number of images, and use all the images in an equal manner. A new space-sweep approach to true multi-image matching is presented that simultaneously determines 2D feature correspondences and the 3D positions of feature points in the scene. The method is illustrated on a seven-image matching example from the aerial im-", "title": "" }, { "docid": "047b2a48ac3cea12cc6bb894616822f6", "text": "25 26 27 28 29 30 31 32 33 34 35 36 Article history: Received 1 October 2008 Received in revised form 2 July 2009 Accepted 4 August 2009 Available online xxxx", "title": "" }, { "docid": "6fe371a784928b17b3360d12961ae40d", "text": "The combination of filters concept is a simple and flexible method to circumvent various compromises hampering the operation of adaptive linear filters. Recently, applications which require the identification of not only linear, but also nonlinear systems are widely studied. In this paper, we propose a combination of adaptive Volterra filters as the most versatile nonlinear models with memory. Moreover, we develop a novel approach that shows a similar behavior but significantly reduces the computational load by combining Volterra kernels rather than complete Volterra filters. 
Following an outline of the basic principles, the second part of the paper focuses on the application to nonlinear acoustic echo cancellation scenarios. As the ratio of the linear to nonlinear echo signal power is, in general, a priori unknown and time-variant, the performance of nonlinear echo cancellers may be inferior to a linear echo canceller if the nonlinear distortion is very low. Therefore, a modified version of the combination of kernels is developed obtaining a robust behavior regardless of the level of nonlinear distortion. Experiments with noise and speech signals demonstrate the desired behavior and the robustness of both the combination of Volterra filters and the combination of kernels approaches in different application scenarios.", "title": "" }, { "docid": "0e74994211d0e3c1e85ba0c85aba3df5", "text": "Images of faces manipulated to make their shapes closer to the average are perceived as more attractive. The influences of symmetry and averageness are often confounded in studies based on full-face views of faces. Two experiments are reported that compared the effect of manipulating the averageness of female faces in profile and full-face views. Use of a profile view allows a face to be \"morphed\" toward an average shape without creating an image that becomes more symmetrical. Faces morphed toward the average were perceived as more attractive in both views, but the effect was significantly stronger for full-face views. Both full-face and profile views morphed away from the average shape were perceived as less attractive. It is concluded that the effect of averageness is independent of any effect of symmetry on the perceived attractiveness of female faces.", "title": "" } ]
scidocsrr
089a32ca1f138c1934cbdcd560a04a76
RelTextRank: An Open Source Framework for Building Relational Syntactic-Semantic Text Pair Representations
[ { "docid": "50648acbc0ec1d4a8c3c86f2456f4d14", "text": "We present DKPro Similarity, an open source framework for text similarity. Our goal is to provide a comprehensive repository of text similarity measures which are implemented using standardized interfaces. DKPro Similarity comprises a wide variety of measures ranging from ones based on simple n-grams and common subsequences to high-dimensional vector comparisons and structural, stylistic, and phonetic measures. In order to promote the reproducibility of experimental results and to provide reliable, permanent experimental conditions for future studies, DKPro Similarity additionally comes with a set of full-featured experimental setups which can be run out-of-the-box and be used for future systems to built upon.", "title": "" } ]
[ { "docid": "3a9d639e87d6163c18dd52ef5225b1a6", "text": "A variety of approaches have been recently proposed to automatically infer users’ personality from their user generated content in social media. Approaches differ in terms of the machine learning algorithms and the feature sets used, type of utilized footprint, and the social media environment used to collect the data. In this paper, we perform a comparative analysis of state-of-the-art computational personality recognition methods on a varied set of social media ground truth data from Facebook, Twitter and YouTube. We answer three questions: (1) Should personality prediction be treated as a multi-label prediction task (i.e., all personality traits of a given user are predicted at once), or should each trait be identified separately? (2) Which predictive features work well across different on-line environments? and (3) What is the decay in accuracy when porting models trained in one social media environment to another?", "title": "" }, { "docid": "32ae0b0c5b3ca3a7ede687872d631d29", "text": "Background—The benefit of catheter-based reperfusion for acute myocardial infarction (MI) is limited by a 5% to 15% incidence of in-hospital major ischemic events, usually caused by infarct artery reocclusion, and a 20% to 40% need for repeat percutaneous or surgical revascularization. Platelets play a key role in the process of early infarct artery reocclusion, but inhibition of aggregation via the glycoprotein IIb/IIIa receptor has not been prospectively evaluated in the setting of acute MI. Methods and Results —Patients with acute MI of,12 hours’ duration were randomized, on a double-blind basis, to placebo or abciximab if they were deemed candidates for primary PTCA. The primary efficacy end point was death, reinfarction, or any (urgent or elective) target vessel revascularization (TVR) at 6 months by intention-to-treat (ITT) analysis. Other key prespecified end points were early (7 and 30 days) death, reinfarction, or urgent TVR. The baseline clinical and angiographic variables of the 483 (242 placebo and 241 abciximab) patients were balanced. There was no difference in the incidence of the primary 6-month end point (ITT analysis) in the 2 groups (28.1% and 28.2%, P50.97, of the placebo and abciximab patients, respectively). However, abciximab significantly reduced the incidence of death, reinfarction, or urgent TVR at all time points assessed (9.9% versus 3.3%, P50.003, at 7 days; 11.2% versus 5.8%, P50.03, at 30 days; and 17.8% versus 11.6%, P50.05, at 6 months). Analysis by actual treatment with PTCA and study drug demonstrated a considerable effect of abciximab with respect to death or reinfarction: 4.7% versus 1.4%, P50.047, at 7 days; 5.8% versus 3.2%, P50.20, at 30 days; and 12.0% versus 6.9%, P50.07, at 6 months. The need for unplanned, “bail-out” stenting was reduced by 42% in the abciximab group (20.4% versus 11.9%, P50.008). Major bleeding occurred significantly more frequently in the abciximab group (16.6% versus 9.5%, P 0.02), mostly at the arterial access site. There was no intracranial hemorrhage in either group. Conclusions—Aggressive platelet inhibition with abciximab during primary PTCA for acute MI yielded a substantial reduction in the acute (30-day) phase for death, reinfarction, and urgent target vessel revascularization. However, the bleeding rates were excessive, and the 6-month primary end point, which included elective revascularization, was not favorably affected.(Circulation. 
1998;98:734-741.)", "title": "" }, { "docid": "9422f8c85859aca10e7d2a673b0377ba", "text": "Many adolescents are experiencing a reduction in sleep as a consequence of a variety of behavioral factors (e.g., academic workload, social and employment opportunities), even though scientific evidence suggests that the biological need for sleep increases during maturation. Consequently, the ability to effectively interact with peers while learning and processing novel information may be diminished in many sleepdeprived adolescents. Furthermore, sleep deprivation may account for reductions in cognitive efficiency in many children and adolescents with special education needs. In response to recognition of this potential problem by parents, educators, and scientists, some school districts have implemented delayed bus schedules and school start times to allow for increased sleep duration for high school students, in an effort to increase academic performance and decrease behavioral problems. The long-term effects of this change are yet to be determined; however, preliminary studies suggest that the short-term impact on learning and behavior has been beneficial. Thus, many parents, teachers, and scientists are supporting further consideration of this information to formulate policies that may maximize learning and developmental opportunities for children. Although changing school start times may be an effective method to combat sleep deprivation in most adolescents, some adolescents experience sleep deprivation and consequent diminished daytime performance because of common underlying sleep disorders (e.g., asthma or sleep apnea). In such cases, surgical, pharmaceutical, or respiratory therapy, or a combination of the three, interventions are required to restore normal sleep and daytime performance.", "title": "" }, { "docid": "b17015641d4ae89767bedf105802d838", "text": "We propose prefix constraints, a novel method to enforce constraints on target sentences in neural machine translation. It places a sequence of special tokens at the beginning of target sentence (target prefix), while side constraints (Sennrich et al., 2016) places a special token at the end of source sentence (source suffix). Prefix constraints can be predicted from source sentence jointly with target sentence, while side constraints must be provided by the user or predicted by some other methods. In both methods, special tokens are designed to encode arbitrary features on target-side or metatextual information. We show that prefix constraints are more flexible than side constraints and can be used to control the behavior of neural machine translation, in terms of output length, bidirectional decoding, domain adaptation, and unaligned target word generation.", "title": "" }, { "docid": "215d3a65099a39f5489ef05a48dd7344", "text": "In this paper an automated video surveillance system for human posture recognition using active contours and neural networks is presented. Localization of moving objects in the scene and human posture estimation are key features of the proposed architecture. The system architecture consists of five sequential modules that include the moving target detection process, two levels of segmentation process for interested element localization, features extraction of the object shape and a human posture classification system based on the radial basis functions neural network. 
Moving objects are detected by using an adaptive background subtraction method with an automatic background adaptation speed parameter and a new fast gradient vector flow snake algorithm for the elements segmentation is proposed. The developed system has been tested for the classification of three different postures such as standing, bending and squatting considering different kinds of feature. Results are promising and the architecture is also useful for the discrimination of human activities.", "title": "" }, { "docid": "334cc321181669085ef1aa83e69ec475", "text": "The energy required to crush rocks is proportional to the amount of new surface area that is created; hence, a very important percentage of the energy consumed to produce construction aggregates is spent in producing non-commercial fines. Data gathered during visits to quarries, an extensive survey and laboratory experiments are used to explore the role of mineralogy and fracture mode in fines production during the crushing of single aggregates and aggregates within granular packs. Results show that particle-level loading conditions determine the failure mode, resulting particle shape and fines generation. Point loading (both single particles and grains in loose packings) produces clean fractures and a small percentage of fines. In choked operations, high inter-particle coordination controls particle-level loading conditions, causesmicro-fractures on new aggregate faces and generates a large amount of fines. The generation of fines increases when shear is imposed during crushing. Aggregates produced in current crushing operations show the effects of multiple loading conditions and fracture modes. Results support the producers' empirical observations that the desired cubicity of aggregates is obtained at the expense of increased fines generation when standard equipment is used. © 2006 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "4fa1b8c7396e636216d0c1af0d1adf15", "text": "Modern smartphone platforms have millions of apps, many of which request permissions to access private data and resources, like user accounts or location. While these smartphone platforms provide varying degrees of control over these permissions, the sheer number of decisions that users are expected to manage has been shown to be unrealistically high. Prior research has shown that users are often unaware of, if not uncomfortable with, many of their permission settings. Prior work also suggests that it is theoretically possible to predict many of the privacy settings a user would want by asking the user a small number of questions. However, this approach has neither been operationalized nor evaluated with actual users before. We report on a field study (n=72) in which we implemented and evaluated a Personalized Privacy Assistant (PPA) with participants using their own Android devices. The results of our study are encouraging. We find that 78.7% of the recommendations made by the PPA were adopted by users. Following initial recommendations on permission settings, participants were motivated to further review and modify their settings with daily “privacy nudges.” Despite showing substantial engagement with these nudges, participants only changed 5.1% of the settings previously adopted based on the PPA’s recommendations. The PPA and its recommendations were perceived as useful and usable. 
We discuss the implications of our results for mobile permission management and the design of personalized privacy assistant solutions.", "title": "" }, { "docid": "f638a8691d79874f4440aa349e28cbfa", "text": "Semantic segmentation requires a detailed labeling of image pixels by object category. Information derived from local image patches is necessary to describe the detailed shape of individual objects. However, this information is ambiguous and can result in noisy labels. Global inference of image content can instead capture the general semantic concepts present. We advocate that holistic inference of image concepts provides valuable information for detailed pixel labeling. We propose a generic framework to leverage holistic information in the form of a LabelBank for pixellevel segmentation. We show the ability of our framework to improve semantic segmentation performance in a variety of settings. We learn models for extracting a holistic LabelBank from visual cues, attributes, and/or textual descriptions. We demonstrate improvements in semantic segmentation accuracy on standard datasets across a range of state-of-the-art segmentation architectures and holistic inference approaches.", "title": "" }, { "docid": "f698eb36fb75c6eae220cf02e41bdc44", "text": "In this paper, an enhanced hierarchical control structure with multiple current loop damping schemes for voltage unbalance and harmonics compensation (UHC) in ac islanded microgrid is proposed to address unequal power sharing problems. The distributed generation (DG) is properly controlled to autonomously compensate voltage unbalance and harmonics while sharing the compensation effort for the real power, reactive power, and unbalance and harmonic powers. The proposed control system of the microgrid mainly consists of the positive sequence real and reactive power droop controllers, voltage and current controllers, the selective virtual impedance loop, the unbalance and harmonics compensators, the secondary control for voltage amplitude and frequency restoration, and the auxiliary control to achieve a high-voltage quality at the point of common coupling. By using the proposed unbalance and harmonics compensation, the auxiliary control, and the virtual positive/negative-sequence impedance loops at fundamental frequency, and the virtual variable harmonic impedance loop at harmonic frequencies, an accurate power sharing is achieved. Moreover, the low bandwidth communication (LBC) technique is adopted to send the compensation command of the secondary control and auxiliary control from the microgrid control center to the local controllers of DG unit. Finally, the hardware-in-the-loop results using dSPACE 1006 platform are presented to demonstrate the effectiveness of the proposed approach.", "title": "" }, { "docid": "310076f963d9591a083edac1734c38cb", "text": "The ganglion impar is an unpaired sympathetic structure located at the level of the sacrococcygeal joint. Blockade of this structure has been utilised to treat chronic perineal pain. Methods to achieve this block often involve the use of fluoroscopy which is associated with radiation exposure of staff involved in providing these procedures. We report a combined loss of resistance injection technique in association with ultrasound guidance to achieve the block. Ultrasound was used to identify the sacrococcygeal joint and a needle was shown to enter this region. Loss of resistance was then used to demonstrate that the needle tip lies in a presacral space. 
The implication being that any injectate would be located in an adequate position. The potential exception would be a neurodestructive procedure as radiographic control of needle tip in relation to the rectum should be performed and recorded. However when aiming for a diagnostic or local anaesthetic based treatment option we feel that this may become an accepted method.", "title": "" }, { "docid": "107960c3c2e714804133f5918ac03b74", "text": "This paper reports on a data-driven motion planning approach for interaction-aware, socially-compliant robot navigation among human agents. Autonomous mobile robots navigating in workspaces shared with human agents require motion planning techniques providing seamless integration and smooth navigation in such. Smooth integration in mixed scenarios calls for two abilities of the robot: predicting actions of others and acting predictably for them. The former requirement requests trainable models of agent behaviors in order to accurately forecast their actions in the future, taking into account their reaction on the robot's decisions. A human-like navigation style of the robot facilitates other agents-most likely not aware of the underlying planning technique applied-to predict the robot motion vice versa, resulting in smoother joint navigation. The approach presented in this paper is based on a feature-based maximum entropy model and is able to guide a robot in an unstructured, real-world environment. The model is trained to predict joint behavior of heterogeneous groups of agents from onboard data of a mobile platform. We evaluate the benefit of interaction-aware motion planning in a realistic public setting with a total distance traveled of over 4 km. Interestingly the motion models learned from human-human interaction did not hold for robot-human interaction, due to the high attention and interest of pedestrians in testing basic braking functionality of the robot.", "title": "" }, { "docid": "5a4aa3f4ff68fab80d7809ff04a25a3b", "text": "OBJECTIVE\nThe technique of short segment pedicle screw fixation (SSPSF) has been widely used for stabilization in thoracolumbar burst fractures (TLBFs), but some studies reported high rate of kyphosis recurrence or hardware failure. This study was to evaluate the results of SSPSF including fractured level and to find the risk factors concerned with the kyphosis recurrence in TLBFs.\n\n\nMETHODS\nThis study included 42 patients, including 25 males and 17 females, who underwent SSPSF for stabilization of TLBFs between January 2003 and December 2010. For radiologic assessments, Cobb angle (CA), vertebral wedge angle (VWA), vertebral body compression ratio (VBCR), and difference between VWA and Cobb angle (DbVC) were measured. The relationships between kyphosis recurrence and radiologic parameters or demographic features were investigated. Frankel classification and low back outcome score (LBOS) were used for assessment of clinical outcomes.\n\n\nRESULTS\nThe mean follow-up period was 38.6 months. CA, VWA, and VBCR were improved after SSPSF, and these parameters were well maintained at the final follow-up with minimal degree of correction loss. Kyphosis recurrence showed a significant increase in patients with Denis burst type A, load-sharing classification (LSC) score >6 or DbVC >6 (p<0.05). 
There were no patients who worsened to clinical outcome, and there was no significant correlation between kyphosis recurrence and clinical outcome in this series.\n\n\nCONCLUSION\nSSPSF including the fractured vertebra is an effective surgical method for restoration and maintenance of vertebral column stability in TLBFs. However, kyphosis recurrence was significantly associated with Denis burst type A fracture, LSC score >6, or DbVC >6.", "title": "" }, { "docid": "32b2cd6b63c6fc4de5b086772ef9d319", "text": "Link prediction for knowledge graphs is the task of predicting missing relationships between entities. Previous work on link prediction has focused on shallow, fast models which can scale to large knowledge graphs. However, these models learn less expressive features than deep, multi-layer models – which potentially limits performance. In this work we introduce ConvE, a multi-layer convolutional network model for link prediction, and report state-of-the-art results for several established datasets. We also show that the model is highly parameter efficient, yielding the same performance as DistMult and R-GCN with 8x and 17x fewer parameters. Analysis of our model suggests that it is particularly effective at modelling nodes with high indegree – which are common in highlyconnected, complex knowledge graphs such as Freebase and YAGO3. In addition, it has been noted that the WN18 and FB15k datasets suffer from test set leakage, due to inverse relations from the training set being present in the test set – however, the extent of this issue has so far not been quantified. We find this problem to be severe: a simple rule-based model can achieve state-of-the-art results on both WN18 and FB15k. To ensure that models are evaluated on datasets where simply exploiting inverse relations cannot yield competitive results, we investigate and validate several commonly used datasets – deriving robust variants where necessary. We then perform experiments on these robust datasets for our own and several previously proposed models, and find that ConvE achieves state-of-the-art Mean Reciprocal Rank across all datasets.", "title": "" }, { "docid": "e13d935c4950323a589dce7fd5bce067", "text": "Worker reliability is a longstanding issue in crowdsourcing, and the automatic discovery of high quality workers is an important practical problem. Most previous work on this problem mainly focuses on estimating the quality of each individual worker jointly with the true answer of each task. However, in practice, for some tasks, worker quality could be associated with some explicit characteristics of the worker, such as education level, major and age. So the following question arises: how do we automatically discover related worker attributes for a given task, and further utilize the findings to improve data quality? In this paper, we propose a general crowd targeting framework that can automatically discover, for a given task, if any group of workers based on their attributes have higher quality on average; and target such groups, if they exist, for future work on the same task. Our crowd targeting framework is complementary to traditional worker quality estimation approaches. Furthermore, an advantage of our framework is that it is more budget efficient because we are able to target potentially good workers before they actually do the task. Experiments on real datasets show that the accuracy of final prediction can be improved significantly for the same budget (or even less budget in some cases). 
Our framework can be applied to many real word tasks and can be easily integrated in current crowdsourcing platforms.", "title": "" }, { "docid": "b0a206b80b63c509cbad8e60701a3760", "text": "For most businesses there are costs involved when acquiring new customers and having longer relationships with customers is therefore often more profitable. Predicting if an individual is prone to leave the business is then a useful tool to help any company take actions to mitigate this cost. The event when a person ends their relationship with a business is called attrition or churn. Predicting peoples actions is however hard and many different factors can affect their choices. This paper investigates different machine learning methods for predicting attrition in the customer base of a bank. Four different methods are chosen based on the results they have shown in previous research and these are then tested and compared to find which works best for predicting these events. Four different datasets from two different products and with two different applications are created from real world data from a European bank. All methods are trained and tested on each dataset. The results of the tests are then evaluated and compared to find what works best. The methods found in previous research to most reliably achieve good results in predicting churn in banking customers are the Support Vector Machine, Neural Network, Balanced Random Forest, and the Weighted Random Forest. The results show that the Balanced Random Forest achieves the best results with an average AUC of 0.698 and an average F-score of 0.376. The accuracy and precision of the model are concluded to not be enough to make definite decisions but can be used with other factors such as profitability estimations to improve the effectiveness of any actions taken to prevent the negative effects of churn.", "title": "" }, { "docid": "a5e52fc842c9b1780282efc071d87b0e", "text": "The highly influential framework of conceptual spaces provides a geometric way of representing knowledge. Instances are represented by points and concepts are represented by regions in a (potentially) high-dimensional space. Based on our recent formalization, we present a comprehensive implementation of the conceptual spaces framework that is not only capable of representing concepts with inter-domain correlations, but that also offers a variety of operations on these concepts.", "title": "" }, { "docid": "89db58eb8793baf03bb86d382d76326e", "text": "Embedded phishing exercises, which send test phishing emails, are utilized by organizations to reduce the susceptibility of its employees to this type of attack. Research studies seeking to evaluate the effectiveness of these exercises have generally been limited by small sample sizes. These studies have not been able to measure possible factors that might bias results. As a result, companies have had to create their own design and evaluation methods, with no framework to guide their efforts. Lacking such guidelines, it can often be difficult to determine whether these types of exercises are truly effective, and if reported results are statistically reliable. In this paper, we conduct a systematic analysis of data from a large real world embedded phishing exercise that involved 19,180 participants from a single organization, and utilized 115,080 test phishing emails. 
The first part of our study focuses on developing methodologies to correct some sources of bias, enabling sounder evaluations of the efficacy of embedded phishing exercises and training. We then use these methods to perform an analysis of the effectiveness of this embedded phishing exercise, and through our analysis, identify how the design of these exercises might be improved.", "title": "" }, { "docid": "d3d58715498167d3fbf863b9f6423fcd", "text": "In this paper, we focus on online detection and isolation of erroneous values reported by medical wireless sensors. We propose a lightweight approach for online anomaly detection in collected data, able to raise alarms only when patients enter in emergency situation and to discard faulty measurements. The proposed approach is based on Haar wavelet decomposition and Hampel filter for spatial analysis, and on boxplot for temporal analysis. Our objective is to reduce false alarms resulted from unreliable measurements. We apply our proposed approach on real physiological data set. Our experimental results prove the effectiveness of our approach to achieve good detection accuracy with low false alarm rate.", "title": "" }, { "docid": "e9353d465c5dfd8af684d4e09407ea28", "text": "An overview of the main contributions that introduced the use of nonresonating modes for the realization of pseudoelliptic narrowband waveguide filters is presented. The following are also highlighted: early work using asymmetric irises; oversized H-plane cavity; transverse magnetic cavity; TM dual-mode cavity; and multiple cavity filters.", "title": "" }, { "docid": "ca8c13c0a7d637234460f20caaa15df5", "text": "This paper presents a nonlinear control law for an automobile to autonomously track a trajectory, provided in real-time, on rapidly varying, off-road terrain. Existing methods can suffer from a lack of global stability, a lack of tracking accuracy, or a dependence on smooth road surfaces, any one of which could lead to the loss of the vehicle in autonomous off-road driving. This work treats automobile trajectory tracking in a new manner, by considering the orientation of the front wheels - not the vehicle's body - with respect to the desired trajectory, enabling collocated control of the system. A steering control law is designed using the kinematic equations of motion, for which global asymptotic stability is proven. This control law is then augmented to handle the dynamics of pneumatic tires and of the servo-actuated steering wheel. To control vehicle speed, the brake and throttle are actuated by a switching proportional integral (PI) controller. The complete control system consumes a negligible fraction of a computer's resources. It was implemented on a Volkswagen Touareg, \"Stanley\", the Stanford Racing Team's entry in the DARPA Grand Challenge 2005, a 132 mi autonomous off-road race. Experimental results from Stanley demonstrate the ability of the controller to track trajectories between obstacles, over steep and wavy terrain, through deep mud puddles, and along cliff edges, with a typical root mean square (RMS) crosstrack error of under 0.1 m. In the DARPA National Qualification Event 2005, Stanley was the only vehicle out of 40 competitors to not hit an obstacle or miss a gate, and in the DARPA Grand Challenge 2005 Stanley had the fastest course completion time.", "title": "" } ]
scidocsrr
ec43b1b7a7ead9699dd1ffe663e8e08c
Active Learning to Rank using Pairwise Supervision
[ { "docid": "14838947ee3b95c24daba5a293067730", "text": "In this paper we address the issue of learning to rank for document retrieval. In the task, a model is automatically created with some training data and then is utilized for ranking of documents. The goodness of a model is usually evaluated with performance measures such as MAP (Mean Average Precision) and NDCG (Normalized Discounted Cumulative Gain). Ideally a learning algorithm would train a ranking model that could directly optimize the performance measures with respect to the training data. Existing methods, however, are only able to train ranking models by minimizing loss functions loosely related to the performance measures. For example, Ranking SVM and RankBoost train ranking models by minimizing classification errors on instance pairs. To deal with the problem, we propose a novel learning algorithm within the framework of boosting, which can minimize a loss function directly defined on the performance measures. Our algorithm, referred to as AdaRank, repeatedly constructs 'weak rankers' on the basis of reweighted training data and finally linearly combines the weak rankers for making ranking predictions. We prove that the training process of AdaRank is exactly that of enhancing the performance measure used. Experimental results on four benchmark datasets show that AdaRank significantly outperforms the baseline methods of BM25, Ranking SVM, and RankBoost.", "title": "" }, { "docid": "f1a162f64838817d78e97a3c3087fae4", "text": "Most literature on support vector machines (SVMs) concentrates on the dual optimization problem. In this letter, we point out that the primal problem can also be solved efficiently for both linear and nonlinear SVMs and that there is no reason for ignoring this possibility. On the contrary, from the primal point of view, new families of algorithms for large-scale SVM training can be investigated.", "title": "" } ]
[ { "docid": "252b8722acd43c9f61a6b10019715392", "text": "Semantic segmentation is an important step of visual scene understanding for autonomous driving. Recently, Convolutional Neural Network (CNN) based methods have successfully applied in semantic segmentation using narrow-angle or even wide-angle pinhole camera. However, in urban traffic environments, autonomous vehicles need wider field of view to perceive surrounding things and stuff, especially at intersections. This paper describes a CNN-based semantic segmentation solution using fisheye camera which covers a large field of view. To handle the complex scene in the fisheye image, Overlapping Pyramid Pooling (OPP) module is proposed to explore local, global and pyramid local region context information. Based on the OPP module, a network structure called OPP-net is proposed for semantic segmentation. The net is trained and evaluated on a fisheye image dataset for semantic segmentation which is generated from an existing dataset of urban traffic scenes. In addition, zoom augmentation, a novel data augmentation policy specially designed for fisheye image, is proposed to improve the net's generalization performance. Experiments demonstrate the outstanding performance of the OPP-net for urban traffic scenes and the effectiveness of the zoom augmentation.", "title": "" }, { "docid": "b5097e718754c02cddd02a1c147c6398", "text": "Semi-automatic parking system is a driver convenience system automating steering control required during parking operation. This paper proposes novel monocular-vision based target parking-slot recognition by recognizing parking-slot markings when driver designates a seed-point inside the target parking-slot with touch screen. Proposed method compensates the distortion of fisheye lens and constructs a bird’s eye view image using homography. Because adjacent vehicles are projected along the outward direction from camera in the bird’s eye view image, if marking line-segment distinguishing parking-slots from roadway and front-ends of marking linesegments dividing parking-slots are observed, proposed method successfully recognizes the target parking-slot marking. Directional intensity gradient, utilizing the width of marking line-segment and the direction of seed-point with respect to camera position as a prior knowledge, can detect marking linesegments irrespective of noise and illumination variation. Making efficient use of the structure of parking-slot markings in the bird’s eye view image, proposed method simply recognizes the target parking-slot marking. It is validated by experiments that proposed method can successfully recognize target parkingslot under various situations and illumination conditions.", "title": "" }, { "docid": "cc17b3548d2224b15090ead8c398f808", "text": "Malaria is a global health problem that threatens 300–500 million people and kills more than one million people annually. Disease control is hampered by the occurrence of multi-drug-resistant strains of the malaria parasite Plasmodium falciparum. Synthetic antimalarial drugs and malarial vaccines are currently being developed, but their efficacy against malaria awaits rigorous clinical testing. Artemisinin, a sesquiterpene lactone endoperoxide extracted from Artemisia annua L (family Asteraceae; commonly known as sweet wormwood), is highly effective against multi-drug-resistant Plasmodium spp., but is in short supply and unaffordable to most malaria sufferers. 
Although total synthesis of artemisinin is difficult and costly, the semi-synthesis of artemisinin or any derivative from microbially sourced artemisinic acid, its immediate precursor, could be a cost-effective, environmentally friendly, high-quality and reliable source of artemisinin. Here we report the engineering of Saccharomyces cerevisiae to produce high titres (up to 100 mg l-1) of artemisinic acid using an engineered mevalonate pathway, amorphadiene synthase, and a novel cytochrome P450 monooxygenase (CYP71AV1) from A. annua that performs a three-step oxidation of amorpha-4,11-diene to artemisinic acid. The synthesized artemisinic acid is transported out and retained on the outside of the engineered yeast, meaning that a simple and inexpensive purification process can be used to obtain the desired product. Although the engineered yeast is already capable of producing artemisinic acid at a significantly higher specific productivity than A. annua, yield optimization and industrial scale-up will be required to raise artemisinic acid production to a level high enough to reduce artemisinin combination therapies to significantly below their current prices.", "title": "" }, { "docid": "b4978b2fbefc79fba6e69ad8fd55ebf9", "text": "This paper proposes an approach based on Least Squares Support Vector Machines (LS-SVMs) for solving second order partial differential equations (PDEs) with variable coefficients. Contrary to most existing techniques, the proposed method provides a closed form approximate solution. The optimal representation of the solution is obtained in the primal-dual setting. The model is built by incorporating the initial/boundary conditions as constraints of an optimization problem. The developed method is well suited for problems involving singular, variable and constant coefficients as well as problems with irregular geometrical domains. Numerical results for linear and nonlinear PDEs demonstrate the efficiency of the proposed method over existing methods.", "title": "" }, { "docid": "9516cf7ea68b16380669d47d6aee472b", "text": "In this paper, we survey the work that has been done in threshold concepts in computing since they were first discussed in 2005: concepts that have been identified, methodologies used, and issues discussed. Based on this survey, we then identify some promising unexplored areas for future work.", "title": "" }, { "docid": "c9fc05c0587a15a63b325ef6095aa0cb", "text": "Background:Recent epidemiological results suggested an increase of cancer risk after receiving computed tomography (CT) scans in childhood or adolescence. Their interpretation is questioned due to the lack of information about the reasons for examination. Our objective was to estimate the cancer risk related to childhood CT scans, and examine how cancer-predisposing factors (PFs) affect assessment of the radiation-related risk.Methods:The cohort included 67 274 children who had a first scan before the age of 10 years from 2000 to 2010 in 23 French departments. Cumulative X-rays doses were estimated from radiology protocols. Cancer incidence was retrieved through the national registry of childhood cancers; PF from discharge diagnoses.Results:During a mean follow-up of 4 years, 27 cases of tumours of the central nervous system, 25 of leukaemia and 21 of lymphoma were diagnosed; 32% of them among children with PF. Specific patterns of CT exposures were observed according to PFs. Adjustment for PF reduced the excess risk estimates related to cumulative doses from CT scans.
No significant excess risk was observed in relation to CT exposures.Conclusions:This study suggests that the indication for examinations, whether suspected cancer or PF management, should be considered to avoid overestimation of the cancer risks associated with CT scans.", "title": "" }, { "docid": "807564cfc2e90dee21a3efd8dc754ba3", "text": "The present paper reports two studies designed to test the Dualistic Model of Passion with regard to performance attainment in two fields of expertise. Results from both studies supported the Passion Model. Harmonious passion was shown to be a positive source of activity investment in that it directly predicted deliberate practice (Study 1) and positively predicted mastery goals which in turn positively predicted deliberate practice (Study 2). In turn, deliberate practice had a direct positive impact on performance attainment. Obsessive passion was shown to be a mixed source of activity investment. While it directly predicted deliberate practice (Study 1) and directly predicted mastery goals (which predicted deliberate practice), it also predicted performance-avoidance and performance-approach goals, with the former having a tendency to facilitate performance directly, and the latter to directly negatively impact on performance attainment (Study 2). Finally, harmonious passion was also positively related to subjective well-being (SWB) in both studies, while obsessive passion was either unrelated (Study 1) or negatively related to SWB (Study 2). The conceptual and applied implications of the differential influences of harmonious and obsessive passion in performance are discussed.", "title": "" }, { "docid": "ce404452a843d18e4673d0dcf6cf01b1", "text": "We propose a formal mathematical model for sparse representations in neocortex based on a neuron model and associated operations. The design of our model neuron is inspired by recent experimental findings on active dendritic processing and NMDA spikes in pyramidal neurons. We derive a number of scaling laws that characterize the accuracy of such neurons in detecting activation patterns in a neuronal population under adverse conditions. We introduce the union property which shows that synapses for multiple patterns can be randomly mixed together within a segment and still lead to highly accurate recognition. We describe simulation results that provide overall insight into sparse representations as well as two primary results. First we show that pattern recognition by a neuron can be extremely accurate and robust with high dimensional sparse inputs even when using a tiny number of synapses to recognize large patterns. Second, equations representing recognition accuracy of a dendrite predict optimal NMDA spiking thresholds under a generous set of assumptions. The prediction tightly matches NMDA spiking thresholds measured in the literature. Our model neuron matches many of the known properties of pyramidal neurons. As such the theory provides a unified and practical mathematical framework for understanding the benefits and limits of sparse representations in cortical networks.", "title": "" }, { "docid": "44b7ed6c8297b6f269c8b872b0fd6266", "text": "vii", "title": "" }, { "docid": "b8b2d68955d6ed917900d30e4e15f71e", "text": "Due to the explosive growth of wireless devices and wireless traffic, the spectrum scarcity problem is becoming more urgent in numerous Radio Frequency (RF) systems. 
At the same time, many studies have shown that spectrum resources allocated to various existing RF systems are largely underutilized. As a potential solution to this spectrum scarcity problem, spectrum sharing among multiple, potentially dissimilar RF systems has been proposed. However, such spectrum sharing solutions are challenging to develop due to the lack of efficient coordination schemes and potentially different PHY/MAC properties. In this paper, we investigate existing spectrum sharing methods facilitating coexistence of various RF systems. The cognitive radio technique, which has been the subject of various surveys, constitutes a subset of our wider scope. We study more general coexistence scenarios and methods such as coexistence of communication systems with similar priorities, utilizing similar or different protocols or standards, as well as the coexistence of communication and non-communication systems using the same spectral resources. Finally, we explore open research issues on the spectrum sharing methods as well as potential approaches to resolving these issues. © 2016 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "b2c299e13eff8776375c14357019d82e", "text": "This paper is focused on the application of complementary split-ring resonators (CSRRs) to the suppression of the common (even) mode in microstrip differential transmission lines. By periodically and symmetrically etching CSRRs in the ground plane of microstrip differential lines, the common mode can be efficiently suppressed over a wide band whereas the differential signals are not affected. Throughout the paper, we present and discuss the principle for the selective common-mode suppression, the circuit model of the structure (including the models under even- and odd-mode excitation), the strategies for bandwidth enhancement of the rejected common mode, and a methodology for common-mode filter design. On the basis of the dispersion relation for the common mode, it is shown that the maximum achievable rejection bandwidth can be estimated. Finally, theory is validated by designing and measuring a differential line and a balanced bandpass filter with common-mode suppression, where double-slit CSRRs (DS-CSRRs) are used in order to enhance the common-mode rejection bandwidth. Due to the presence of DS-CSRRs, the balanced filter exhibits more than 40 dB of common-mode rejection within a 34% bandwidth around the filter pass band.", "title": "" }, { "docid": "c043e7a5d5120f5a06ef6decc06c184a", "text": "Entities are further categorized into those that are the object of the measurement (‘assayed components’) and those, if any, that are subjected to targeted and controlled experimental interventions (‘perturbations/interventions’). These two core categories are related to the concepts ‘perturbagen’ and ‘target’ in the Bioassay Ontology (BAO2) and capture an important aspect of the design of experiments where multiple conditions are compared with each other in order to test whether a given perturbation (e.g., the presence or absence of a drug), causes a given response (e.g., a change in gene expression). Additional categories include ‘experimental variables’, ‘reporters’, ‘normalizing components’ and generic ‘biological components’ (Supplementary Data). We developed a web-based tool with a graphical user interface that allows computer-assisted manual extraction of the metadata model described above at the level of individual figure panels based on the information provided in figure legends and in the images. 
Files that contain raw or minimally processed data, when available, can furthermore be linked or uploaded and attached to the figure. As proof of principle, we have curated a compendium of over 18,000 experiments published across 23 journals. From the 721 papers processed, 381 papers were related to the field of autophagy, and the rest were annotated during the publication process of accepted manuscripts at four partner molecular biology journals. Both sets of papers were processed identically. Out of the 18,157 experimental panels annotated, 77% included at least one ‘intervention/assayed component’ pair, and this supported the broad applicability of the perturbation-centric SourceData model. We provide a breakdown of entities by categories in Supplementary Figure 1. We note that the presence of a perturbation is not a requirement for the model. As such, the SourceData model is also applicable in cases such as correlative observations. The SourceData model is independent of data type (i.e., image-based or numerical values) and is well suited for cell and molecular biology experiments. 77% of the processed entities were explicitly mentioned in the text of the legend. For the remaining entities, curators added the terms based on the labels directly displayed on the image of the figure. SourceData: a semantic platform for curating and searching figures", "title": "" }, { "docid": "e0f6878845e02e966908311e6818dbe9", "text": "Smart Home is one of emerging application domains of The Internet of things which following the computer and Internet. Although home automation technologies have been commercially available already, they are basically designed for signal-family smart homes with a high cost, and along with the constant growth of digital appliances in smart home, we merge smart home into smart-home-oriented Cloud to release the stress on the smart home system which mostly installs application software on their local computers. In this paper, we present a framework for Cloud-based smart home for enabling home automation, household mobility and interconnection which easy extensible and fit for future demands. Through subscribing services of the Cloud, smart home consumers can easily enjoy smart home services without purchasing computers which owns strong power and huge storage. We focus on the overall Smart Home framework, the features and architecture of the components of Smart Home, the interaction and cooperation between them in detail.", "title": "" }, { "docid": "cccecb08c92f8bcec4a359373a20afcb", "text": "To solve the problem of the false matching and low robustness in detecting copy-move forgeries, a new method was proposed in this study. It involves the following steps: first, establish a Gaussian scale space; second, extract the orientated FAST key points and the ORB features in each scale space; thirdly, revert the coordinates of the orientated FAST key points to the original image and match the ORB features between every two different key points using the hamming distance; finally, remove the false matched key points using the RANSAC algorithm and then detect the resulting copy-move regions. 
The experimental results indicate that the new algorithm is effective for geometric transformation, such as scaling and rotation, and exhibits high robustness even when an image is distorted by Gaussian blur, Gaussian white noise and JPEG recompression; the new algorithm even has great detection on the type of hiding object forgery.", "title": "" }, { "docid": "fb63ab21fa40b125c1a85b9c3ed1dd8d", "text": "The two central topics of information theory are the compression and the transmission of data. Shannon, in his seminal work, formalized both these problems and determined their fundamental limits. Since then the main goal of coding theory has been to find practical schemes that approach these limits. Polar codes, recently invented by Arıkan, are the first “practical” codes that are known to achieve the capacity for a large class of channels. Their code construction is based on a phenomenon called “channel polarization”. The encoding as well as the decoding operation of polar codes can be implemented with O(N log N) complexity, where N is the blocklength of the code. We show that polar codes are suitable not only for channel coding but also achieve optimal performance for several other important problems in information theory. The first problem we consider is lossy source compression. We construct polar codes that asymptotically approach Shannon’s rate-distortion bound for a large class of sources. We achieve this performance by designing polar codes according to the “test channel”, which naturally appears in Shannon’s formulation of the rate-distortion function. The encoding operation combines the successive cancellation algorithm of Arıkan with a crucial new ingredient called “randomized rounding”. As for channel coding, both the encoding as well as the decoding operation can be implemented with O(N log N) complexity. This is the first known “practical” scheme that approaches the optimal rate-distortion trade-off. We also construct polar codes that achieve the optimal performance for the Wyner-Ziv and the Gelfand-Pinsker problems. Both these problems can be tackled using “nested” codes and polar codes are naturally suited for this purpose. We further show that polar codes achieve the capacity of asymmetric channels, multi-terminal scenarios like multiple access channels, and degraded broadcast channels. For each of these problems, our constructions are the first known “practical” schemes that approach the optimal performance. The original polar codes of Arıkan achieve a block error probability decaying exponentially in the square root of the block length. For source coding, the gap between the achieved distortion and the limiting distortion also vanishes exponentially in the square root of the blocklength. We explore other polarlike code constructions with better rates of decay. With this generalization,", "title": "" }, { "docid": "460d6a8a5f78e6fa5c42fb6c219b3254", "text": "Generative Adversarial Networks (GANs) have been successfully applied to the problem of policy imitation in a model-free setup. However, the computation graph of GANs, that include a stochastic policy as the generative model, is no longer differentiable end-to-end, which requires the use of high-variance gradient estimation. In this paper, we introduce the Modelbased Generative Adversarial Imitation Learning (MGAIL) algorithm. We show how to use a forward model to make the computation fully differentiable, which enables training policies using the exact gradient of the discriminator. 
The resulting algorithm trains competent policies using relatively fewer expert samples and interactions with the environment. We test it on both discrete and continuous action domains and report results that surpass the state-of-the-art.", "title": "" }, { "docid": "4753ea589bd7dd76d3fb08ba8dce65ff", "text": "Frequent Patterns are very important in knowledge discovery and data mining process such as mining of association rules, correlations etc. Prefix-tree based approach is one of the contemporary approaches for mining frequent patterns. FP-tree is a compact representation of transaction database that contains frequency information of all relevant Frequent Patterns (FP) in a dataset. Since the introduction of FP-growth algorithm for FP-tree construction, three major algorithms have been proposed, namely AFPIM, CATS tree, and CanTree, that have adopted FP-tree for incremental mining of frequent patterns. All of the three methods perform incremental mining by processing one transaction of the incremental database at a time and updating it to the FP-tree of the initial (original) database. Here in this paper we propose a novel method to take advantage of FP-tree representation of incremental transaction database for incremental mining. We propose “Batch Incremental Tree (BIT)” algorithm to merge two small consecutive duration FP-trees to obtain a FP-tree that is equivalent of FP-tree obtained when the entire database is processed at once from the beginning of the first duration", "title": "" }, { "docid": "6052c0f2adfe4b75f96c21a5ee128bf5", "text": "I present a new Markov chain sampling method appropriate for distributions with isolated modes. Like the recently-developed method of \"simulated tempering\", the \"tempered transition\" method uses a series of distributions that interpolate between the distribution of interest and a distribution for which sampling is easier. The new method has the advantage that it does not require approximate values for the normalizing constants of these distributions, which are needed for simulated tempering, and can be tedious to estimate. Simulated tempering performs a random walk along the series of distributions used. In contrast, the tempered transitions of the new method move systematically from the desired distribution, to the easily-sampled distribution, and back to the desired distribution. This systematic movement avoids the inefficiency of a random walk, an advantage that unfortunately is cancelled by an increase in the number of interpolating distributions required. Because of this, the sampling efficiency of the tempered transition method in simple problems is similar to that of simulated tempering. On more complex distributions, however, simulated tempering and tempered transitions may perform differently. Which is better depends on the ways in which the interpolating distributions are \"deceptive\".", "title": "" }, { "docid": "1acc97afa9facf77289ddf1015b1e110", "text": "This short note presents a new formal language, lambda dependency-based compositional semantics (lambda DCS) for representing logical forms in semantic parsing. By eliminating variables and making existential quantification implicit, lambda DCS logical forms are generally more compact than those in lambda calculus.", "title": "" }, { "docid": "322141533594ed1927f36b850b8d963f", "text": "Microelectrodes are widely used in the physiological recording of cell field potentials.
As microelectrode signals are generally in the μV range, characteristics of the cell-electrode interface are important to the recording accuracy. Although the impedance of the microelectrode-solution interface has been well studied and modeled in the past, no effective model has been experimentally verified to estimate the noise of the cell-electrode interface. Also in existing interface models, spectral information is largely disregarded. In this work, we developed a model for estimating the noise of the cell-electrode interface from interface impedances. This model improves over existing noise models by including the cell membrane capacitor and frequency dependent impedances. With low-noise experiment setups, this model is verified by microelectrode array (MEA) experiments with mouse muscle myoblast cells. Experiments show that the noise estimated from this model has <;10% error, which is much less than estimations from existing models. With this model, noise of the cell-electrode interface can be estimated by simply measuring interface impedances. This model also provides insights for micro- electrode design to achieve good recording signal-to-noise ratio.", "title": "" } ]
scidocsrr
864571bb992259be037a73252faea145
BreakingNews: Article Annotation by Image and Text Processing
[ { "docid": "10365680ff0a5da9b97727bf40432aae", "text": "In this paper, we investigate the contextualization of news documents with geographic and visual information. We propose a matrix factorization approach to analyze the location relevance for each news document. We also propose a method to enrich the document with a set of web images. For location relevance analysis, we first perform toponym extraction and expansion to obtain a toponym list from news documents. We then propose a matrix factorization method to estimate the location-document relevance scores while simultaneously capturing the correlation of locations and documents. For image enrichment, we propose a method to generate multiple queries from each news document for image search and then employ an intelligent fusion approach to collect a set of images from the search results. Based on the location relevance analysis and image enrichment, we introduce a news browsing system named NewsMap which can support users in reading news via browsing a map and retrieving news with location queries. The news documents with the corresponding enriched images are presented to help users quickly get information. Extensive experiments demonstrate the effectiveness of our approaches.", "title": "" } ]
[ { "docid": "14868b01ec5f7f6d4005331e592f756d", "text": "The proposed next-generation air traffic control system depends crucially on a surveillance technology called ADS-B. By 2020, nearly all aircraft flying through U.S. airspace must carry ADS-B transponders to continuously transmit their precise real-time location and velocity to ground-based air traffic control and to other en route aircraft. Surprisingly, the ADS-B protocol has no built-in security mechanisms, which renders ADS-B systems vulnerable to a wide range of malicious attacks. Herein, we address the question “can cryptography secure ADS-B?”— in other words, is there a practical and effective cryptographic solution that can be retrofit to the existing ADS-B system and enhance the security of this critical aviation technology?", "title": "" }, { "docid": "80d920f1f886b81e167d33d5059b8afe", "text": "Agriculture is one of the most important aspects of human civilization. The usages of information and communication technologies (ICT) have significantly contributed in the area in last two decades. Internet of things (IOT) is a technology, where real life physical objects (e.g. sensor nodes) can work collaboratively to create an information based and technology driven system to maximize the benefits (e.g. improved agricultural production) with minimized risks (e.g. environmental impact). Implementation of IOT based solutions, at each phase of the area, could be a game changer for whole agricultural landscape, i.e. from seeding to selling and beyond. This article presents a technical review of IOT based application scenarios for agriculture sector. The article presents a brief introduction to IOT, IOT framework for agricultural applications and discusses various agriculture specific application scenarios, e.g. farming resource optimization, decision support system, environment monitoring and control systems. The article concludes with the future research directions in this area.", "title": "" }, { "docid": "689c2bac45b0933994337bd28ce0515d", "text": "Jealousy is a powerful emotional force in couples' relationships. In just seconds it can turn love into rage and tenderness into acts of control, intimidation, and even suicide or murder. Yet it has been surprisingly neglected in the couples therapy field. In this paper we define jealousy broadly as a hub of contradictory feelings, thoughts, beliefs, actions, and reactions, and consider how it can range from a normative predicament to extreme obsessive manifestations. We ground jealousy in couples' basic relational tasks and utilize the construct of the vulnerability cycle to describe processes of derailment. We offer guidelines on how to contain the couple's escalation, disarm their ineffective strategies and power struggles, identify underlying vulnerabilities and yearnings, and distinguish meanings that belong to the present from those that belong to the past, or to other contexts. The goal is to facilitate relational and personal changes that can yield a better fit between the partners' expectations.", "title": "" }, { "docid": "e07756fb1ae9046c3b8c29b85a00bf0f", "text": "We present a clustering scheme that combines a mode-seeking phase with a cluster merging phase in the corresponding density map. While mode detection is done by a standard graph-based hill-climbing scheme, the novelty of our approach resides in its use of topological persistence to guide the merging of clusters. 
Our algorithm provides additional feedback in the form of a set of points in the plane, called a persistence diagram (PD), which provably reflects the prominences of the modes of the density. In practice, this feedback enables the user to choose relevant parameter values, so that under mild sampling conditions the algorithm will output the correct number of clusters, a notion that can be made formally sound within persistence theory. In addition, the output clusters have the property that their spatial locations are bound to the ones of the basins of attraction of the peaks of the density.\n The algorithm only requires rough estimates of the density at the data points, and knowledge of (approximate) pairwise distances between them. It is therefore applicable in any metric space. Meanwhile, its complexity remains practical: although the size of the input distance matrix may be up to quadratic in the number of data points, a careful implementation only uses a linear amount of memory and takes barely more time to run than to read through the input.", "title": "" }, { "docid": "58d629b3ac6bd731cd45126ce3ed8494", "text": "The Support Vector Machine (SVM) is a common machine learning tool that is widely used because of its high classification accuracy. Implementing SVM for embedded real-time applications is very challenging because of the intensive computations required. This increases the attractiveness of implementing SVM on hardware platforms for reaching high performance computing with low cost and power consumption. This paper provides the first comprehensive survey of current literature (2010-2015) of different hardware implementations of SVM classifier on Field-Programmable Gate Array (FPGA). A classification of existing techniques is presented, along with a critical analysis and discussion. A challenging trade-off between meeting embedded real-time systems constraints and high classification accuracy has been observed. Finally, some key future research directions are suggested.", "title": "" }, { "docid": "101af2d0539fa1470e8acfcf7c728891", "text": "OnlineEnsembleLearning", "title": "" }, { "docid": "38aeacd5d85523b494010debd69f4bac", "text": "We propose to train trading systems by optimizing financial objective functions via reinforcement learning. The performance functions that we consider as value functions are profit or wealth, the Sharpe ratio and our recently proposed ifferential Sharpe ratio for online learning. In Moody & Wu (1997), we presented empirical results in controlled experiments that demonstrated the advantages of reinforcement learning relative to supervised learning. Here we extend our previous work to compare Q-Learning to a reinforcement learning technique based on real-time recurrent learning (RTRL) that maximizes immediate reward. Our simulation results include a spectacular demonstration of the presence of predictability in the monthly Standard and Poors 500 stock index for the 25 year period 1970 through 1994. Our reinforcement trader achieves a simulated out-of-sample profit of over 4000% for this period, compared to the return for a buy and hold strategy of about 1300% (with dividends reinvested). This superior result is achieved with substantially lower isk.", "title": "" }, { "docid": "feb34f36aed8e030f93c0adfbe49ee8b", "text": "Complex queries containing outer joins are, for the most part, executed by commercial DBMS products in an \"as written\" manner. 
Only a very few reorderings of the operations are considered and the benefits of considering comprehensive reordering schemes are not exploited. This is largely due to the fact there are no readily usable results for reordering such operations for relations with duplicates and/or outer join predicates that are other than \"simple.\" Most previous approaches have ignored duplicates and complex predicates; the very few that have considered these aspects have suggested approaches that lead to a possibly exponential number of, and redundant intermediate joins. Since traditional query graph models are inadequate for modeling outer join queries with complex predicates, we present the needed hypergraph abstraction and algorithms for reordering such queries with joins and outer joins. As a result, the query optimizer can explore a significantly larger space of execution plans, and choose one with a low cost. Further, these algorithms are easily incorporated into well known and widely used enumeration methods such as dynamic programming.", "title": "" }, { "docid": "97e358d68b3593efd2e0ae553bbe96a5", "text": "Malware authors evade the signature based detection by packing the original malware using custom packers. In this paper, we present a static heuristics based approach for the detection of packed executables. We present 1) the PE heuristics considered for analysis and taxonomy of heuristics; 2) a method for computing the score using power distance based on weights and risks assigned to the defined heuristics; and 3) classification of packed executable based on the threshold obtained with the training data set, and the results achieved with the test data set. The experimental results show that our approach has a high detection rate of 99.82% with a low false positive rate of 2.22%. We also bring out difficulties in detecting packed DLL, CLR and Debug mode executables via header analysis.", "title": "" }, { "docid": "b47d53485704f4237e57d220640346a7", "text": "Features of consciousness difficult to understand in terms of conventional neuroscience have evoked application of quantum theory, which describes the fundamental behavior of matter and energy. In this paper we propose that aspects of quantum theory (e.g. quantum coherence) and of a newly proposed physical phenomenon of quantum wave function \"self-collapse\" (objective reduction: OR Penrose, 1994) are essential for consciousness, and occur in cytoskeletal microtubules and other structures within each of the brain's neurons. The particular characteristics of microtubules suitable for quantum effects include their crystal-like lattice structure, hollow inner core, organization of cell function and capacity for information processing. We envisage that conformational states of microtubule subunits (tubulins) are coupled to internal quantum events, and cooperatively interact (compute) with other tubulins. We further assume that macroscopic coherent superposition of quantum-coupled tubulin conformational states occurs throughout significant brain volumes and provides the global binding essential to consciousness. We equate the emergence of the microtubule quantum coherence with pre-conscious processing which grows (for up to 500 ms) until the mass energy difference among the separated states of tubulins reaches a threshold related to quantum gravity. According to the arguments for OR put forth in Penrose (1994), superpositioned states each have their own space-time geometries. 
When the degree of coherent mass energy difference leads to sufficient separation of space time geometry, the system must choose and decay (reduce, collapse) to a single universe state. In this way, a transient superposition of slightly differing space-time geometries persists until an abrupt quantum → classical reduction occurs. Unlike the random, \"subjective reduction\" (SR, or R) of standard quantum theory caused by observation or environmental entanglement, the OR we propose in microtubules is a self-collapse and it results in particular patterns of microtubule-tubulin conformational states that regulate neuronal activities including synaptic functions. Possibilities and probabilities for post-reduction tubulin states are influenced by factors including attachments of microtubule-associated proteins (MAPs) acting as \"nodes\" which tune and \"orchestrate\" the quantum oscillations. We thus term the self-tuning OR process in microtubules \"orchestrated objective reduction\" (\"Orch OR\"), and calculate an estimate for the number of tubulins (and neurons) whose coherence for relevant time periods (e.g. 500ms) will elicit Orch OR. In providing a connection among (1) pre-conscious to conscious transition, (2) fundamental space time notions, (3) non-computability, and (4) binding of various (time scale and spatial) reductions into an instantaneous event (\"conscious now\"), we believe Orch OR in brain microtubules is the most specific and plausible model for consciousness yet proposed.", "title": "" }, { "docid": "345e6a4f17eeaca196559ed55df3862e", "text": "Synaptic plasticity, the putative basis of learning and memory formation, manifests in various forms and across different timescales. Here we show that the interaction of Hebbian homosynaptic plasticity with rapid non-Hebbian heterosynaptic plasticity is, when complemented with slower homeostatic changes and consolidation, sufficient for assembly formation and memory recall in a spiking recurrent network model of excitatory and inhibitory neurons. In the model, assemblies were formed during repeated sensory stimulation and characterized by strong recurrent excitatory connections. Even days after formation, and despite ongoing network activity and synaptic plasticity, memories could be recalled through selective delay activity following the brief stimulation of a subset of assembly neurons. Blocking any component of plasticity prevented stable functioning as a memory network. Our modelling results suggest that the diversity of plasticity phenomena in the brain is orchestrated towards achieving common functional goals.", "title": "" }, { "docid": "a30a40f97b688cd59005434bc936e4ef", "text": "The Semantic Web works on the existing Web which presents the meaning of information as well-defined vocabularies understood by the people. Semantic Search, at the same time, works on improving the accuracy of a search by understanding the intent of the search and providing contextually relevant results. The paper describes a semantic approach towards web search through a PHP application. The goal was to parse through a user’s browsing history and return semantically relevant web pages for the search query provided. The browser used for this purpose was Mozilla Firefox. 
The user’s history was stored in a MySQL database, which, in turn, was accessed using PHP. The ontology, created from the browsing history, was then parsed for the entered search query and the corresponding results were returned to the user providing a semantically organized and relevant output.", "title": "" }, { "docid": "51b8fe57500d1d74834d1f9faa315790", "text": "Simulations of smoke are pervasive in the production of visual effects for commercials, movies and games: from cigarette smoke and subtle dust to large-scale clouds of soot and vapor emanating from fires and explosions. In this talk we present a new Eulerian method that targets the simulation of such phenomena on a structured spatially adaptive voxel grid --- thereby achieving an improvement in memory usage and computational performance over regular dense and sparse grids at uniform resolution. Contrary to e.g. Setaluri et al. [2014], we use velocities collocated at voxel corners which allows sharper interpolation for spatially adaptive simulations, is faster for sampling, and promotes ease-of-use in an open procedural environment where technical artists often construct small computational graphs that apply forces, dissipation etc. to the velocities. The collocated method requires special treatment when projecting out the divergent velocity modes to prevent non-physical high frequency oscillations (not addressed by Ferstl et al. [2014]). To this end we explored discretization and filtering methods from computational physics, combining them with a matrix-free adaptive multigrid scheme based on MLAT and FAS [Trottenberg and Schuller 2001]. Finally we contribute a new volumetric quadrature approach to temporally smooth emission which outperforms e.g. Gaussian quadrature at large time steps. We have implemented our method in the cross-platform Autodesk Bifrost procedural environment which facilitates customization by the individual technical artist, and our implementation is in production use at several major studios. We refer the reader to the accompanying video for examples that illustrate our novel workflows for spatially adaptive simulations and the benefits of our approach. We note that several methods for adaptive fluid simulation have been proposed in recent years, e.g. [Ferstl et al. 2014; Setaluri et al. 2014], and we have drawn a lot of inspiration from these. However, to the best of our knowledge we are the first in computer graphics to propose a collocated velocity, spatially adaptive and matrix-free smoke simulation method that explicitly mitigates non-physical divergent modes.", "title": "" }, { "docid": "4a6d48bd0f214a94f2137f424dd401eb", "text": "During the past decade, scientific research has provided new insight into the development from an acute, localised musculoskeletal disorder towards chronic widespread pain/fibromyalgia (FM). Chronic widespread pain/FM is characterised by sensitisation of central pain pathways. An in-depth review of basic and clinical research was performed to design a theoretical framework for manual therapy in these patients. It is explained that manual therapy might be able to influence the process of chronicity in three different ways. (I) In order to prevent chronicity in (sub)acute musculoskeletal disorders, it seems crucial to limit the time course of afferent stimulation of peripheral nociceptors. 
(II) In the case of chronic widespread pain and established sensitisation of central pain pathways, relatively minor injuries/trauma at any locations are likely to sustain the process of central sensitisation and should be treated appropriately with manual therapy accounting for the decreased sensory threshold. Inappropriate pain beliefs should be addressed and exercise interventions should account for the process of central sensitisation. (III) However, manual therapists ignoring the processes involved in the development and maintenance of chronic widespread pain/FM may cause more harm then benefit to the patient by triggering or sustaining central sensitisation.", "title": "" }, { "docid": "dac4ee56923c850874f8c6199456a245", "text": "In this paper, we present a multi-modal dataset for obstacle detection in agriculture. The dataset comprises approximately 2 h of raw sensor data from a tractor-mounted sensor system in a grass mowing scenario in Denmark, October 2016. Sensing modalities include stereo camera, thermal camera, web camera, 360 ∘ camera, LiDAR and radar, while precise localization is available from fused IMU and GNSS. Both static and moving obstacles are present, including humans, mannequin dolls, rocks, barrels, buildings, vehicles and vegetation. All obstacles have ground truth object labels and geographic coordinates.", "title": "" }, { "docid": "4dc20aa2c72a95022ba6cf3b592960a8", "text": "Relation Classification aims to classify the semantic relationship between two marked entities in a given sentence. It plays a vital role in a variety of natural language processing applications. Most existing methods focus on exploiting mono-lingual data, e.g., in English, due to the lack of annotated data in other languages. In this paper, we come up with a feature adaptation approach for cross-lingual relation classification, which employs a generative adversarial network (GAN) to transfer feature representations from one language with rich annotated data to another language with scarce annotated data. Such a feature adaptation approach enables feature imitation via the competition between a relation classification network and a rival discriminator. Experimental results on the ACE 2005 multilingual training corpus, treating English as the source language and Chinese the target, demonstrate the effectiveness of our proposed approach, yielding an improvement of 5.7% over the state-of-the-art.", "title": "" }, { "docid": "b0d9c5716052e9cfe9d61d20e5647c8c", "text": "We propose Efficient Neural Architecture Search (ENAS), a faster and less expensive approach to automated model design than previous methods. In ENAS, a controller learns to discover neural network architectures by searching for an optimal path within a larger model. The controller is trained with policy gradient to select a path that maximizes the expected reward on the validation set. Meanwhile the model corresponding to the selected path is trained to minimize the cross entropy loss. On the Penn Treebank dataset, ENAS can discover a novel architecture thats achieves a test perplexity of 57.8, which is state-of-the-art among automatic model design methods on Penn Treebank. On the CIFAR-10 dataset, ENAS can design novel architectures that achieve a test error of 2.89%, close to the 2.65% achieved by standard NAS (Zoph et al., 2017). 
Most importantly, our experiments show that ENAS is more than 10x faster and 100x less resource-demanding than NAS.", "title": "" }, { "docid": "49f1d3ebaf3bb3e575ac3e40101494d9", "text": "This paper discusses the current status of research on fraud detection undertaken as part of the European Commission-funded ACTS ASPECT (Advanced Security for Personal Communications Technologies) project, by Royal Holloway University of London. Using a recurrent neural network technique, we uniformly distribute prototypes over Toll Tickets sampled from the U.K. network operator, Vodafone. The prototypes, which continue to adapt to cater for seasonal or long term trends, are used to classify incoming Toll Tickets to form statistical behaviour profiles covering both the short and long-term past. These behaviour profiles, maintained as probability distributions, comprise the input to a differential analysis utilising a measure known as the Hellinger distance [5] between them as an alarm criterion. Fine tuning the system to minimise the number of false alarms poses a significant task due to the low fraudulent/non fraudulent activity ratio. We benefit from using unsupervised learning in that no fraudulent examples are required for training. This is very relevant considering the currently secure nature of GSM where fraud scenarios, other than Subscription Fraud, have yet to manifest themselves. It is the aim of ASPECT to be prepared for the would-be fraudster for both GSM and UMTS. Introduction: When a mobile originated phone call is made or various inter-call criteria are met, the cells or switches that a mobile phone is communicating with produce information pertaining to the call attempt. These data records, for billing purposes, are referred to as Toll Tickets. Toll Tickets contain a wealth of information about the call so that charges can be made to the subscriber. By considering well studied fraud indicators these records can also be used to detect fraudulent activity. By this we mean interrogating a series of recent Toll Tickets and comparing a function of the various fields with fixed criteria, known as triggers. A trigger, if activated, raises an alert status which cumulatively would lead to investigation by the network operator. Some example fraud indicators are that of a new subscriber making long back-to-back international calls being indicative of direct call selling or short back-to-back calls to a single land number indicating an attack on a PABX system. Sometimes geographical information deduced from the cell sites visited in a call can indicate cloning. This can be detected through setting a velocity trap. Fixed trigger criteria can be set to catch such extremes of activity, but these absolute usage criteria cannot trap all types of fraud. An alternative approach to the problem is to perform a differential analysis. Here we develop behaviour profiles relating to the mobile phone’s activity and compare its most recent activities with a longer history of its usage. Techniques can then be derived to determine when the mobile phone’s behaviour changes significantly. One of the most common indicators of fraud is a significant change in behaviour. The performance expectations of such a system must be of prime concern when developing any fraud detection strategy. To implement a real time fraud detection tool on the Vodafone network in the U.K., it was estimated that, on average, the system would need to be able to process around 38 Toll Tickets per second. 
This figure varied with peak and off-peak usage and also had seasonal trends. The distribution of the times that calls are made and the duration of each call is highly skewed. Considering all calls that are made in the U.K., including the use of supplementary services, we found the average call duration to be less than eight seconds, hardly time to order a pizza. In this paper we present one of the methods developed under ASPECT that tackles the problem of skewed distributions and seasonal trends using a recurrent neural network technique that is based around unsupervised learning. We envisage this technique would form part of a larger fraud detection suite that also comprises a rule based fraud detection tool and a neural network fraud detection tool that uses supervised learning on a multi-layer perceptron. Each of the systems has its strengths and weaknesses but we anticipate that the hybrid system will combine their strengths.", "title": "" }, { "docid": "8721382dd1674fac3194d015b9c64f94", "text": "fines excipients as “substances, other than the active drug substance of finished dosage form, which have been appropriately evaluated for safety and are included in a drug delivery system to either aid the processing of the drug delivery system during its manufacture; protect; support; enhance stability, bioavailability, or patient acceptability; assist in product identification; or enhance any other attributes of the overall safety and effectiveness of the drug delivery system during storage or use” (1). This definition implies that excipients serve a purpose in a formulation and contrasts with the old terminology, inactive excipients, which hints at the property of inertness. With a literal interpretation of this definition, an excipient can include diverse molecules or moieties such as replication incompetent viruses (adenoviral or retroviral vectors), bacterial protein components, monoclonal antibodies, bacteriophages, fusion proteins, and molecular chimera. For example, using gene-directed enzyme prodrug therapy, research indicated that chimera containing a transcriptional regulatory DNA sequence capable of being selectively activated in mammalian cells was linked to a sequence that encodes a β-lactamase enzyme and delivered to target cells (2). The expressed enzyme in the targeted cells catalyzes the conversion of a subsequently administered prodrug to a toxic agent. A similar purpose is achieved by using an antibody conjugated to an enzyme followed by the administration of a noncytotoxic substance that is converted in vivo by the enzyme to its toxic form (3). In these examples, the chimera or the enzyme-linked antibody would qualify as excipients. Furthermore, many emerging delivery systems use a drug or gene covalently linked to the molecules, polymers, antibody, or chimera responsible for drug targeting, internalization, or transfection. Conventional wisdom dictates that such an entity be classified as the active substance or prodrug for regulatory purposes and be subject to one set of specifications for the entire molecule. The fact remains, however, that only a discrete part of this prodrug is responsible for the therapeutic effect, and a similar effect may be obtained by physically entrapping the drug as opposed to covalent conjugation. 
The situation is further complicated when fusion proteins are used as a combination of drug and delivery system or when the excipients themselves", "title": "" } ]
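The ASPECT fraud-detection abstract in the list above compares a short-term behaviour profile against a long-term one with the Hellinger distance and raises an alarm when the two drift apart. A minimal sketch of that comparison follows; the call-duration bucketing, the toy profiles and the alarm threshold are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def hellinger(p, q):
    """Hellinger distance between two discrete probability distributions (range 0..1)."""
    p = np.asarray(p, dtype=float) / np.sum(p)
    q = np.asarray(q, dtype=float) / np.sum(q)
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

# Toy behaviour profiles over four assumed call-duration buckets.
long_term_profile = [0.70, 0.20, 0.08, 0.02]   # long history of the subscriber's usage
short_term_profile = [0.30, 0.25, 0.25, 0.20]  # most recent usage

ALARM_THRESHOLD = 0.3   # assumed value; in practice tuned against the false-alarm rate
d = hellinger(long_term_profile, short_term_profile)
if d > ALARM_THRESHOLD:
    print(f"alert: behaviour change, Hellinger distance = {d:.3f}")
```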
scidocsrr
d41690639179e7c26ff2ffad24b31ba6
Nonlinear and cooperative control of multiple hovercraft with input constraints
[ { "docid": "44dfc8c3c5c1f414197ad7cd8dedfb2e", "text": "In this paper, we propose a framework for formation stabilization of multiple autonomous vehicles in a distributed fashion. Each vehicle is assumed to have simple dynamics, i.e. a double-integrator, with a directed (or an undirected) information flow over the formation graph of the vehicles. Our goal is to find a distributed control law (with an efficient computational cost) for each vehicle that makes use of limited information regarding the state of other vehicles. Here, the key idea in formation stabilization is the use of natural potential functions obtained from structural constraints of a desired formation in a way that leads to a collision-free, distributed, and bounded state feedback law for each vehicle.", "title": "" }, { "docid": "4c290421dc42c3a5a56c7a4b373063e5", "text": "In this paper, we provide a graph theoretical framework that allows us to formally define formations of multiple vehicles and the issues arising in uniqueness of graph realizations and its connection to stability of formations. The notion of graph rigidity is crucial in identifying the shape variables of a formation and an appropriate potential function associated with the formation. This allows formulation of meaningful optimization or nonlinear control problems for formation stabilization/tacking, in addition to formal representation of split, rejoin, and reconfiguration maneuvers for multi-vehicle formations. We introduce an algebra that consists of performing some basic operations on graphs which allow creation of larger rigidby-construction graphs by combining smaller rigid subgraphs. This is particularly useful in performing and representing rejoin/split maneuvers of multiple formations in a distributed fashion.", "title": "" } ]
[ { "docid": "fb426b89d1a65c597d190582393254eb", "text": "The amount of data of all kinds available electronically has increased dramatically in recent years. The data resides in di erent forms, ranging from unstructured data in le systems to highly structured in relational database systems. Data is accessible through a variety of interfaces including Web browsers, database query languages, application-speci c interfaces, or data exchange formats. Some of this data is raw data, e.g., images or sound. Some of it has structure even if the structure is often implicit, and not as rigid or regular as that found in standard database systems. Sometimes the structure exists but has to be extracted from the data. Sometimes also it exists but we prefer to ignore it for certain purposes such as browsing. We call here semi-structured data this data that is (from a particular viewpoint) neither raw data nor strictly typed, i.e., not table-oriented as in a relational model or sorted-graph as in object databases. As will seen later when the notion of semi-structured data is more precisely de ned, the need for semi-structured data arises naturally in the context of data integration, even when the data sources are themselves well-structured. Although data integration is an old topic, the need to integrate a wider variety of dataformats (e.g., SGML or ASN.1 data) and data found on the Web has brought the topic of semi-structured data to the forefront of research. The main purpose of the paper is to isolate the essential aspects of semistructured data. We also survey some proposals of models and query languages for semi-structured data. In particular, we consider recent works at Stanford U. and U. Penn on semi-structured data. In both cases, the motivation is found in the integration of heterogeneous data. The \\lightweight\" data models they use (based on labelled graphs) are very similar. As we shall see, the topic of semi-structured data has no precise boundary. Furthermore, a theory of semi-structured data is still missing. We will try to highlight some important issues in this context. The paper is organized as follows. In Section 2, we discuss the particularities of semi-structured data. In Section 3, we consider the issue of the data structure and in Section 4, the issue of the query language.", "title": "" }, { "docid": "1d3eb22e6f244fbe05d0cc0f7ee37b84", "text": "Robots that use learned perceptual models in the real world must be able to safely handle cases where they are forced to make decisions in scenarios that are unlike any of their training examples. However, state-of-the-art deep learning methods are known to produce erratic or unsafe predictions when faced with novel inputs. Furthermore, recent ensemble, bootstrap and dropout methods for quantifying neural network uncertainty may not efficiently provide accurate uncertainty estimates when queried with inputs that are very different from their training data. Rather than unconditionally trusting the predictions of a neural network for unpredictable real-world data, we use an autoencoder to recognize when a query is novel, and revert to a safe prior behavior. With this capability, we can deploy an autonomous deep learning system in arbitrary environments, without concern for whether it has received the appropriate training. 
We demonstrate our method with a vision-guided robot that can leverage its deep neural network to navigate 50% faster than a safe baseline policy in familiar types of environments, while reverting to the prior behavior in novel environments so that it can safely collect additional training data and continually improve. A video illustrating our approach is available at: http://groups.csail.mit.edu/rrg/videos/safe visual navigation.", "title": "" }, { "docid": "af8fdea69016ec8e61e935c84f1c72be", "text": "Many developing countries are suffering from air pollution recently. Governments have built a few air quality monitoring stations in cities to inform people the concentration of air pollutants. Unfortunately, urban air quality is highly skewed in a city, depending on multiple complex factors, such as the meteorology, traffic volume, and land uses. Building more monitoring stations is very costly in terms of money, land uses, and human resources. As a result, people do not really know the fine-grained air quality of a location without a monitoring station. In this paper, we introduce a cloud-based knowledge discovery system that infers the real-time and fine-grained air quality information throughout a city based on the (historical and realtime) air quality data reported by existing monitor stations and a variety of data sources observed in the city, such as meteorology, traffic flow, human mobility, structure of road networks, and point of interests (POIs). The system also provides a mobile client, with which a user can monitor the air quality of multiple locations in a city (e.g. the current location, home and work places), and a web service that allows other applications to call the air quality of any location. The system has been evaluated based on the real data from 9 cities in China, including Beijing, Shanghai, Guanzhou, and Shenzhen, etc. The system is running on Microsoft Azure and the mobile client is publicly available in Window Phone App Store, entitled Urban Air. Our system gives a cost-efficient example for enabling a knowledge discovery prototype involving big data on the cloud.", "title": "" }, { "docid": "1c6e10b9b797b70bb76793f4d36bfad5", "text": "The diet is an essential factor affecting the risk for development and progression of modern day chronic diseases, particularly those with pathophysiological roots in inflammation and oxidative stress-induced damage. The potential impact of certain foods and their bioactive compounds to reverse or prevent destructive dysregulated processes leading to disease has attracted intense research attention. The mango (Mangifera indica Linn.) is a tropical fruit with distinctive nutritional and phytochemical composition. Notably, the mango contains several essential water- and lipid-soluble micronutrients along with the distinguishing phytochemicals gallotannins and mangiferin. In vitro and in vivo studies reveal various mechanisms through which mangos or their associated compounds reduce risk or reverse metabolic- and inflammation-associated diseases. Health benefits of isolated individual mango compounds and extracts from mango by-products are well described in the literature with less attention devoted to the whole fruit. 
Here, we review and summarize the available literature assessing the health promoting potential of mango flesh, the edible portion contributing to dietary fruit intake, focusing specifically on modern day health issues of obesity and the risk factors and diseases it precipitates, including diabetes and cardiovascular disease. Additionally, this review explores new insights on the benefits of mango for brain, skin and intestinal health. Overall, the foundation of research supporting the potential role of mangos in reducing risk for inflammation- and metabolically-based chronic diseases is growing.", "title": "" }, { "docid": "dca8895967ae9b86979f428d77e84ae5", "text": "This study examined how the frequency of positive and negative emotions is related to life satisfaction across nations. Participants were 8,557 people from 46 countries who reported on their life satisfaction and frequency of positive and negative emotions. Multilevel analyses showed that across nations, the experience of positive emotions was more strongly related to life satisfaction than the absence of negative emotions. Yet, the cultural dimensions of individualism and survival/self-expression moderated these relationships. Negative emotional experiences were more negatively related to life satisfaction in individualistic than in collectivistic nations, and positive emotional experiences had a larger positive relationship with life satisfaction in nations that stress self-expression than in nations that value survival. These findings show how emotional aspects of the good life vary with national culture and how this depends on the values that characterize one's society. Although to some degree, positive and negative emotions might be universally viewed as desirable and undesirable, respectively, there appear to be clear cultural differences in how relevant such emotional experiences are to quality of life.", "title": "" }, { "docid": "55d584440f6925f12dd3a28917b10c85", "text": "Bitcoin and other similar digital currencies on blockchains are not ideal means for payment, because their prices tend to go up in the long term (thus people are incentivized to hoard those currencies), and to fluctuate widely in the short term (thus people would want to avoid risks of losing values). The reason why those blockchain currencies based on proof of work are unstable may be found in their designs that the supplies of currencies do not respond to their positive and negative demand shocks, as the authors have formulated in our past work. Continuing from our past work, this paper proposes minimal changes to the design of blockchain currencies so that their market prices are automatically stabilized, absorbing both positive and negative demand shocks of the currencies by autonomously controlling their supplies. Those changes are: 1) limiting re-adjustment of proof-of-work targets, 2) making mining rewards variable according to the observed over-threshold changes of block intervals, and 3) enforcing negative interests to remove old coins in circulation. We have made basic design checks of these measures through simple simulations. 
In addition to stabilization of prices, the proposed measures may have effects of making those currencies preferred means for payment by disincentivizing hoarding, and improving sustainability of the currency systems by making rewards to miners perpetual.", "title": "" }, { "docid": "1ce1110186fe91b70889d9897de1c186", "text": "Sentiment analysis is the fundamental component in text-driven monitoring or forecasting systems, where the general sentiment towards real-world entities (e.g., people, products, organizations) are analyzed based on the sentiment signals embedded in a myriad of web text available today. Building such systems involves several practically important problems, from data cleansing (e.g., boilerplate removal, web-spam detection), and sentiment analysis at individual mention-level (e.g., phrase, sentence-, document-level) to the aggregation of sentiment for each entity-level (e.g., person, company) analysis. Most previous research in sentiment analysis however, has focused only on individual mention-level analysis, and there has been relatively less work that copes with other practically important problems for enabling a large-scale sentiment monitoring system. In this paper, we propose Empath, a new framework for evaluating entity-level sentiment analysis. Empath leverages objective measurements of entities in various domains such as people, companies, countries, movies, and sports, to facilitate entity-level sentiment analysis and tracking. We demonstrate the utility of Empath for the evaluation of a large-scale sentiment system by applying it to various lexicons using Lydia, our own large scale text-analytics tool, over a corpus consisting of more than a terabyte of newspaper data. We expect that Empath will encourage research that encompasses end-to-end pipelines to enable a large-scale text-driven monitoring and forecasting systems.", "title": "" }, { "docid": "45447ab4e0a8bd84fcf683ac482f5497", "text": "Most of the current learning analytic techniques have as starting point the data recorded by Learning Management Systems (LMS) about the interactions of the students with the platform and among themselves. But there is a tendency on students to rely less on the functionality offered by the LMS and use more applications that are freely available on the net. This situation is magnified in studies in which students need to interact with a set of tools that are easily installed on their personal computers. This paper shows an approach using Virtual Machines by which a set of events occurring outside of the LMS are recorded and sent to a central server in a scalable and unobtrusive manner.", "title": "" }, { "docid": "42f3032626b2a002a855476a718a2b1b", "text": "Learning controllers for bipedal robots is a challenging problem, often requiring expert knowledge and extensive tuning of parameters that vary in different situations. Recently, deep reinforcement learning has shown promise at automatically learning controllers for complex systems in simulation. This has been followed by a push towards learning controllers that can be transferred between simulation and hardware, primarily with the use of domain randomization. However, domain randomization can make the problem of finding stable controllers even more challenging, especially for underactuated bipedal robots. In this work, we explore whether policies learned in simulation can be transferred to hardware with the use of high-fidelity simulators and structured controllers. 
We learn a neural network policy which is a part of a more structured controller. While the neural network is learned in simulation, the rest of the controller stays fixed, and can be tuned by the expert as needed. We show that using this approach can greatly speed up the rate of learning in simulation, as well as enable transfer of policies between simulation and hardware. We present our results on an ATRIAS robot and explore the effect of action spaces and cost functions on the rate of transfer between simulation and hardware. Our results show that structured policies can indeed be learned in simulation and implemented on hardware successfully. This has several advantages, as the structure preserves the intuitive nature of the policy, and the neural network improves the performance of the hand-designed policy. In this way, we propose a way of using neural networks to improve expert designed controllers, while maintaining ease of understanding.", "title": "" }, { "docid": "449c57f0679400c970acbf32d76d6c3c", "text": "The objective of the study was to empirically examine the impact of credit risk on profitability of commercial banks in Ethiopia. For the purpose secondary data collected from 8 sample commercial banks for a 12 year period (2003-2004) were collected from annual reports of respective banks and National Bank of Ethiopia. The data were analyzed using a descriptive statics and panel data regression model and the result showed that credit risk measures: non-performing loan, loan loss provisions and capital adequacy have a significant impact on the profitability of commercial banks in Ethiopia. The study suggested a need for enhancing credit risk management to maintain the prevailing profitability of commercial banks in Ethiopia.", "title": "" }, { "docid": "ca7380c0b194aa5308f3329205b6e211", "text": "Endopolyploidy was observed in the protocorms of diploid Phalaenopsis aphrodite subsp. formosana with ploidy doubling achieved by in vitro regeneration of excised protocorms, or protocorm-like bodies (PLBs). Thirty-four per cent of the PLBs regenerated from the first cycle of sectioned protocorms were found to be polyploids with ploidy doubled once or twice as determined by flow-cytometry. The frequency of ploidy doubling increased as the sectioning cycles increased and was highest in diploid followed by the triploid and tetraploid. Regeneration of the endopolyploid cells in the tissue of the protocorms or PLBs is proposed as the source of the development of ploidy doubled plantlets. The frequency of ploidy doubling was similar in seven other Phalaenopsis species, although the rate of increase within cycles was genotype specific. In two species, a comparison of five parameters between 5-month-old diploid and tetraploid potted plants showed only the stomata density differed significantly. The flowers of the tetraploid plant were larger and heavier than those of the diploids. This ploidy doubling method is a simple and effective means to produce large number of polyploid Phalaenopsis species plants as well as their hybrids. The method will be beneficial to orchid breeding programs especially for the interspecific hybridization between varieties having different chromosome sizes and ploidy levels.", "title": "" }, { "docid": "566a2b2ff835d10e0660fb89fd6ae618", "text": "We argue that an understanding of the faculty of language requires substantial interdisciplinary cooperation. 
We suggest how current developments in linguistics can be profitably wedded to work in evolutionary biology, anthropology, psychology, and neuroscience. We submit that a distinction should be made between the faculty of language in the broad sense (FLB) and in the narrow sense (FLN). FLB includes a sensory-motor system, a conceptual-intentional system, and the computational mechanisms for recursion, providing the capacity to generate an infinite range of expressions from a finite set of elements. We hypothesize that FLN only includes recursion and is the only uniquely human component of the faculty of language. We further argue that FLN may have evolved for reasons other than language, hence comparative studies might look for evidence of such computations outside of the domain of communication (for example, number, navigation, and social relations).", "title": "" }, { "docid": "cff32690c2421b2ad94dea33f5e4479d", "text": "Heavy ion single-event effect (SEE) measurements on Xilinx Zynq-7000 are reported. Heavy ion susceptibility to Single-Event latchup (SEL), single event upsets (SEUs) of BRAM, configuration bits of FPGA and on chip memory (OCM) of the processor were investigated.", "title": "" }, { "docid": "47b8daaaa43535ec29461f0d1b86566d", "text": "This article aims to improve nurses' knowledge of wound debridement through a review of different techniques and the related physiology of wound healing. Debridement has long been an established component of effective wound management. However, recent clinical developments have widened the choice of methods available. This article provides an overview of the physiology of wounds, wound bed preparation, methods of debridement and the important considerations for the practitioner in implementing effective, informed and patient-centred wound care.", "title": "" }, { "docid": "10c7b7a19197c8562ebee4ae66c1f5e8", "text": "Generative Adversarial Networks (GANs) have recently achieved impressive results for many real-world applications, and many GAN variants have emerged with improvements in sample quality and training stability. However, visualization and understanding of GANs is largely missing. How does a GAN represent our visual world internally? What causes the artifacts in GAN results? How do architectural choices affect GAN learning? Answering such questions could enable us to develop new insights and better models. In this work, we present an analytic framework to visualize and understand GANs at the unit-, object-, and scene-level. We first identify a group of interpretable units that are closely related to object concepts with a segmentation-based network dissection method. Then, we quantify the causal effect of interpretable units by measuring the ability of interventions to control objects in the output. Finally, we examine the contextual relationship between these units and their surrounding by inserting the discovered object concepts into new images. We show several practical applications enabled by our framework, from comparing internal representations across different layers, models, and datasets, to improving GANs by locating and removing artifact-causing units, to interactively manipulating objects in the scene. We provide open source interpretation tools to help peer researchers and practitioners better understand their GAN models∗.", "title": "" }, { "docid": "c027cb4288d5eb024a72b37efd663ba4", "text": "We previously isolated a novel tyrosine kinase receptor, Flt-1, now known as VEGF-receptor (VEGFR)-1. 
The VEGF-VEGFR system plays a pivotal role in not only physiological but also pathological angiogenesis. We examined the role of Flt-1 in carcinogenesis using Flt-1-signal-deficient (Flt-1 TK-/-) mice, and found that this receptor stimulates tumor growth and metastasis most likely via macrophages, making it an important potential target in the treatment of cancer. In addition to the full-length receptor, the Flt-1 gene produces a soluble protein, sFlt-1, an endogenous VEGF-inhibitor. sFlt-1 is expressed in trophoblasts of the placenta between fetal and maternal blood vessels, suggesting it to be a barrier against extreme VEGF-signaling. Abnormally high expression of sFlt-1 occurs in most preeclampsia patients, whose main symptoms are hypertension and proteinurea. In cancer patients, strong suppression of VEGF-VEGFR by drugs induces similar side effects including hypertension. These results indicate a close relationship between abnormal VEGF-block and hypertension/proteinurea. sFlt-1 is an attractive target for the control of preeclampsia.", "title": "" }, { "docid": "839f8f079c4134641f6bf4051200dd8d", "text": "Although Industrie 4.0 is currently a top priority for many companies, research centers, and universities, a generally accepted definition of the term does not exist. As a result, discussing the topic on an academic level is difficult, and so is implementing Industrie 4.0 scenarios. Based on a literature review, the paper provides a definition of Industrie 4.0 and identifies six design principles for its implementation: interoperability, virtualization, decentralization, real-time capability, service orientation, and modularity. Taking into account these principles, academics may be enabled to further investigate on the topic, while practitioners may find assistance in implementing appropriate scenarios.", "title": "" }, { "docid": "f37d32a668751198ed8acde8ab3bdc12", "text": "INTRODUCTION\nAlthough the critical feature of attention-deficit/hyperactivity disorder (ADHD) is a persistent pattern of inattention and/or hyperactivity/impulsivity behavior, the disorder is clinically heterogeneous, and concomitant difficulties are common. Children with ADHD are at increased risk for experiencing lifelong impairments in multiple domains of daily functioning. In the present study we aimed to build a brief ADHD impairment-related tool -ADHD concomitant difficulties scale (ADHD-CDS)- to assess the presence of some of the most important comorbidities that usually appear associated with ADHD such as emotional/motivational management, fine motor coordination, problem-solving/management of time, disruptive behavior, sleep habits, academic achievement and quality of life. The two main objectives of the study were (i) to discriminate those profiles with several and important ADHD functional difficulties and (ii) to create a brief clinical tool that fosters a comprehensive evaluation process and can be easily used by clinicians.\n\n\nMETHODS\nThe total sample included 399 parents of children with ADHD aged 6-18 years (M = 11.65; SD = 3.1; 280 males) and 297 parents of children without a diagnosis of ADHD (M = 10.91; SD = 3.2; 149 male). The scale construction followed an item improved sequential process.\n\n\nRESULTS\nFactor analysis showed a 13-item single factor model with good fit indices. Higher scores on inattention predicted higher scores on ADHD-CDS for both the clinical sample (β = 0.50; p < 0.001) and the whole sample (β = 0.85; p < 0.001). 
The ROC curve for the ADHD-CDS (against the ADHD diagnostic status) gave an area under the curve (AUC) of.979 (95%, CI = [0.969, 0.990]).\n\n\nDISCUSSION\nThe ADHD-CDS has shown preliminary adequate psychometric properties, with high convergent validity and good sensitivity for different ADHD profiles, which makes it a potentially appropriate and brief instrument that may be easily used by clinicians, researchers, and health professionals in dealing with ADHD.", "title": "" }, { "docid": "1b656c70d5ccd8fffc78242a07f650fd", "text": "Semantic image parsing, which refers to the process of decomposing images into semantic regions and constructing the structure representation of the input, has recently aroused widespread interest in the field of computer vision. The recent application of deep representation learning has driven this field into a new stage of development. In this paper, we summarize three aspects of the progress of research on semantic image parsing, i.e., category-level semantic segmentation, instance-level semantic segmentation, and beyond segmentation. Specifically, we first review the general frameworks for each task and introduce the relevant variants. The advantages and limitations of each method are also discussed. Moreover, we present a comprehensive comparison of different benchmark datasets and evaluation metrics. Finally, we explore the future trends and challenges of semantic image parsing.", "title": "" }, { "docid": "659c1e333f77bb6453288645a7c4f1d9", "text": "Analysis Of Machine Learning Classifier Performance In Adding Custom Gestures To The Leap Motion Eric Yun The use of supervised machine learning to extend the capabilities and overall viability of motion sensing input devices has been an increasingly popular avenue of research since the release of the Leap Motion in 2013. The device's optical sensors are capable of recognizing and tracking key features of a user's hands and fingers, which can be obtained and manipulated through a robust API. This makes statistical classification ideal for tackling the otherwise laborious and error prone nature of adding new programmer-defined gestures to the set of recognized gestures. Although a handful of studies have explored the effectiveness of machine learning with the Leap Motion, none to our knowledge have run a comparative performance analysis of classification algorithms or made use of more than several of them in their experiments. The aim of this study is to improve the reliability of detecting newly added gestures by identifying the classifiers that produce the best results. To this end, a formal analysis of the most popular classifiers used in the field of machine learning was performed to determine those most appropriate to the requirements of the Leap Motion. A recording and", "title": "" } ]
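The safe visual navigation abstract in the list above recognises when an input is unlike anything seen in training by thresholding an autoencoder's reconstruction error, and reverts to a safe prior behaviour in that case. The sketch below shows one way such a check could look; the synthetic data, the tiny scikit-learn autoencoder and the 99th-percentile threshold are assumptions chosen only to make the idea concrete, not details from the paper.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, size=(500, 16))   # stand-in for familiar observations
novel = rng.uniform(-4.0, 4.0, size=(20, 16))  # stand-in for out-of-distribution inputs

# A small autoencoder: the network is trained to reproduce its own input
# through a narrow hidden layer.
ae = MLPRegressor(hidden_layer_sizes=(4,), max_iter=2000, random_state=0)
ae.fit(train, train)

def reconstruction_error(x):
    return np.mean((ae.predict(x) - x) ** 2, axis=1)

# Assumed heuristic: flag anything worse than the 99th percentile of training error.
threshold = np.quantile(reconstruction_error(train), 0.99)

def choose_policy(obs):
    if reconstruction_error(obs[None, :])[0] > threshold:
        return "safe_prior_policy"   # input looks novel -> fall back
    return "learned_policy"          # input looks familiar -> trust the network

print(choose_policy(train[0]), choose_policy(novel[0]))
```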
scidocsrr
a46507fbc4f3c8315a605e2c951a575d
Super-Convergence: Very Fast Training of Residual Networks Using Large Learning Rates
[ { "docid": "6af09f57f2fcced0117dca9051917a0d", "text": "We present a novel per-dimension learning rate method for gradient descent called ADADELTA. The method dynamically adapts over time using only first order information and has minimal computational overhead beyond vanilla stochastic gradient descent. The method requires no manual tuning of a learning rate and appears robust to noisy gradient information, different model architecture choices, various data modalities and selection of hyperparameters. We show promising results compared to other methods on the MNIST digit classification task using a single machine and on a large scale voice dataset in a distributed cluster environment.", "title": "" }, { "docid": "65dfecb5e0f4f658a19cd87fb94ff0ae", "text": "Although deep learning has produced dazzling successes for applications of image, speech, and video processing in the past few years, most trainings are with suboptimal hyper-parameters, requiring unnecessarily long training times. Setting the hyper-parameters remains a black art that requires years of experience to acquire. This report proposes several efficient ways to set the hyper-parameters that significantly reduce training time and improves performance. Specifically, this report shows how to examine the training validation/test loss function for subtle clues of underfitting and overfitting and suggests guidelines for moving toward the optimal balance point. Then it discusses how to increase/decrease the learning rate/momentum to speed up training. Our experiments show that it is crucial to balance every manner of regularization for each dataset and architecture. Weight decay is used as a sample regularizer to show how its optimal value is tightly coupled with the learning rates and momentums.", "title": "" }, { "docid": "938395ce421e0fede708e3b4ab7185b5", "text": "This paper provides a review and commentary on the past, present, and future of numerical optimization algorithms in the context of machine learning applications. Through case studies on text classification and the training of deep neural networks, we discuss how optimization problems arise in machine learning and what makes them challenging. A major theme of our study is that large-scale machine learning represents a distinctive setting in which the stochastic gradient (SG) method has traditionally played a central role while conventional gradient-based nonlinear optimization techniques typically falter. Based on this viewpoint, we present a comprehensive theory of a straightforward, yet versatile SG algorithm, discuss its practical behavior, and highlight opportunities for designing algorithms with improved performance. This leads to a discussion about the next generation of optimization methods for large-scale machine learning, including an investigation of two main streams of research on techniques that diminish noise in the stochastic directions and methods that make use of second-order derivative approximations.", "title": "" }, { "docid": "3abf10f8539840b1830f14d83a7d3ab0", "text": "We consider two questions at the heart of machine learning; how can we predict if a minimum will generalize to the test set, and why does stochastic gradient descent find minima that generalize well? Our work responds to Zhang et al. (2016), who showed deep neural networks can easily memorize randomly labeled training data, despite generalizing well on real labels of the same inputs. We show that the same phenomenon occurs in small linear models. 
These observations are explained by the Bayesian evidence, which penalizes sharp minima but is invariant to model parameterization. We also demonstrate that, when one holds the learning rate fixed, there is an optimum batch size which maximizes the test set accuracy. We propose that the noise introduced by small mini-batches drives the parameters towards minima whose evidence is large. Interpreting stochastic gradient descent as a stochastic differential equation, we identify the “noise scale” g = ε(N/B − 1) ≈ εN/B, where ε is the learning rate, N the training set size and B the batch size. Consequently the optimum batch size is proportional to both the learning rate and the size of the training set, Bopt ∝ εN. We verify these predictions empirically.", "title": "" } ]
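The first passage in the list above describes ADADELTA, which adapts a per-dimension step size from running averages of squared gradients and squared updates so that no global learning rate has to be tuned. A minimal sketch of that update rule on a toy quadratic follows; the test function is an assumption for illustration, and ρ = 0.95, ε = 1e-6 are the commonly quoted default values.

```python
import numpy as np

def adadelta_step(grad, state, rho=0.95, eps=1e-6):
    """One ADADELTA update: accumulate E[g^2], scale by RMS[dx]/RMS[g], accumulate E[dx^2]."""
    state["Eg2"] = rho * state["Eg2"] + (1 - rho) * grad ** 2
    delta = -np.sqrt(state["Edx2"] + eps) / np.sqrt(state["Eg2"] + eps) * grad
    state["Edx2"] = rho * state["Edx2"] + (1 - rho) * delta ** 2
    return delta

# Minimise f(x) = 0.5 * ||x||^2, whose gradient is simply x.
x = np.array([5.0, -3.0])
state = {"Eg2": np.zeros_like(x), "Edx2": np.zeros_like(x)}
for _ in range(5000):
    x += adadelta_step(x, state)   # note: no learning rate anywhere
print(x)   # x has moved toward the minimiser at the origin
```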
[ { "docid": "9d5d667c6d621bd90a688c993065f5df", "text": "Creative individuals increasingly rely on online crowdfunding platforms to crowdsource funding for new ventures. For novice crowdfunding project creators, however, there are few resources to turn to for assistance in the planning of crowdfunding projects. We are building a tool for novice project creators to get feedback on their project designs. One component of this tool is a comparison to existing projects. As such, we have applied a variety of machine learning classifiers to learn the concept of a successful online crowdfunding project at the time of project launch. Currently our classifier can predict with roughly 68% accuracy, whether a project will be successful or not. The classification results will eventually power a prediction segment of the proposed feedback tool. Future work involves turning the results of the machine learning algorithms into human-readable content and integrating this content into the feedback tool.", "title": "" }, { "docid": "c2fe863aba72df9df8405329c36046b6", "text": "Feature learning for 3D shapes is challenging due to the lack of natural paramterization for 3D surface models. We adopt the multi-view depth image representation and propose Multi-View Deep Extreme Learning Machine (MVD-ELM) to achieve fast and quality projective feature learning for 3D shapes. In contrast to existing multiview learning approaches, our method ensures the feature maps learned for different views are mutually dependent via shared weights and in each layer, their unprojections together form a valid 3D reconstruction of the input 3D shape through using normalized convolution kernels. These lead to a more accurate 3D feature learning as shown by the encouraging results in several applications. Moreover, the 3D reconstruction property enables clear visualization of the learned features, which further demonstrates the meaningfulness of our feature learning.", "title": "" }, { "docid": "9bacc1ef43fd8c05dde814a18f59e467", "text": "The processes that affect removal and retention of nitrogen during wastewater treatment in constructed wetlands (CWs) are manifold and include NH(3) volatilization, nitrification, denitrification, nitrogen fixation, plant and microbial uptake, mineralization (ammonification), nitrate reduction to ammonium (nitrate-ammonification), anaerobic ammonia oxidation (ANAMMOX), fragmentation, sorption, desorption, burial, and leaching. However, only few processes ultimately remove total nitrogen from the wastewater while most processes just convert nitrogen to its various forms. Removal of total nitrogen in studied types of constructed wetlands varied between 40 and 55% with removed load ranging between 250 and 630 g N m(-2) yr(-1) depending on CWs type and inflow loading. However, the processes responsible for the removal differ in magnitude among systems. Single-stage constructed wetlands cannot achieve high removal of total nitrogen due to their inability to provide both aerobic and anaerobic conditions at the same time. Vertical flow constructed wetlands remove successfully ammonia-N but very limited denitrification takes place in these systems. On the other hand, horizontal-flow constructed wetlands provide good conditions for denitrification but the ability of these system to nitrify ammonia is very limited. Therefore, various types of constructed wetlands may be combined with each other in order to exploit the specific advantages of the individual systems. 
The soil phosphorus cycle is fundamentally different from the N cycle. There are no valency changes during biotic assimilation of inorganic P or during decomposition of organic P by microorganisms. Phosphorus transformations during wastewater treatment in CWs include adsorption, desorption, precipitation, dissolution, plant and microbial uptake, fragmentation, leaching, mineralization, sedimentation (peat accretion) and burial. The major phosphorus removal processes are sorption, precipitation, plant uptake (with subsequent harvest) and peat/soil accretion. However, the first three processes are saturable and soil accretion occurs only in FWS CWs. Removal of phosphorus in all types of constructed wetlands is low unless special substrates with high sorption capacity are used. Removal of total phosphorus varied between 40 and 60% in all types of constructed wetlands with removed load ranging between 45 and 75 g N m(-2) yr(-1) depending on CWs type and inflow loading. Removal of both nitrogen and phosphorus via harvesting of aboveground biomass of emergent vegetation is low but it could be substantial for lightly loaded systems (cca 100-200 g N m(-2) yr(-1) and 10-20 g P m(-2) yr(-1)). Systems with free-floating plants may achieve higher removal of nitrogen via harvesting due to multiple harvesting schedule.", "title": "" }, { "docid": "8b7b16f9825d0922d1cd91fc6b4c3fb2", "text": "There is a recent outbreak in the amounts of spatial data generated by different sources, e.g., smart phones, space telescopes, and medical devices, which urged researchers to exploit the existing distributed systems to process such amounts of spatial data. However, as these systems are not designed for spatial data, they cannot fully utilize its spatial properties to achieve high performance. In this paper, we describe SpatialHadoop, a full-fledged MapReduce framework which extends Hadoop to support spatial data efficiently. SpatialHadoop consists of four main layers, namely, language, indexing, query processing, and visualization. The language layer provides a high level language with standard spatial data types and operations to make the system accessible to non-technical users. The indexing layer supports standard spatial indexes, such as grid, R-tree and R+-tree, inside Hadoop file system in order to speed up spatial operations. The query processing layer encapsulates the spatial operations supported by SpatialHadoop such as range query, k nearest neighbor, spatial join and computational geometry operations. Finally, the visualization layer allows users to produce images that describe very large datasets to make it easier to explore and understand big spatial data. SpatialHadoop is already used as a main component in several real systems such as MNTG, TAREEG, TAGHREED, and SHAHED.", "title": "" }, { "docid": "d633f883c3dd61c22796a5774a56375c", "text": "Neural networks are the topic of this paper. Neural networks are very powerful as nonlinear signal processors, but obtained results are often far from satisfactory. The purpose of this article is to evaluate the reasons for these frustrations and show how to make these neural networks successful. The following are the main challenges of neural network applications: (1) Which neural network architectures should be used? (2) How large should a neural network be? (3) Which learning algorithms are most suitable? The multilayer perceptron (MLP) architecture is unfortunately the preferred neural network topology of most researchers. 
It is the oldest neural network architecture, and it is compatible with all training software. However, the MLP topology is less powerful than other topologies such as bridged multilayer perceptron (BMLP), where connections across layers are allowed. The error-back propagation (EBP) algorithm is the most popular learning algorithm, but it is very slow and seldom gives adequate results. The EBP training process requires 100-1,000 times more iterations than the more advanced algorithms such as Levenberg-Marquardt (LM) or neuron by neuron (NBN) algorithms. What is most important is that the EBP algorithm is not only slow but often it is not able to find solutions for close-to-optimum neural networks. The paper describes and compares several learning algorithms.", "title": "" }, { "docid": "97cc6d9ed4c1aba0dc09635350a401ee", "text": "The Public Key Infrastructure (PKI) in use today on the Internet to secure communications has several drawbacks arising from its centralised and non-transparent design. In the past there have been instances of certificate authorities publishing rogue certificates for targeted attacks, and this has been difficult to immediately detect as certificate authorities are not transparent about the certificates they issue. Furthermore, the centralised selection of trusted certificate authorities by operating system and browser vendors means that it is not practical to untrust certificate authorities that have issued rogue certificates, as this would disrupt the TLS process for many other hosts.\n SCPKI is an alternative PKI system based on a decentralised and transparent design using a web-of-trust model and a smart contract on the Ethereum blockchain, to make it easily possible for rogue certificates to be detected when they are published. The web-of-trust model is designed such that an entity or authority in the system can verify (or vouch for) fine-grained attributes of another entity's identity (such as company name or domain name), as an alternative to the centralised certificate authority identity verification model.", "title": "" }, { "docid": "8c0e3083fb80fa03c5a0b4fbce92a0a3", "text": "In this paper, we present a study of the community structure of ego-networks—the graphs representing the connections among the neighbors of a node—for several online social networks. Toward this goal, we design a new technique to efficiently build and cluster all the ego-networks of a graph in parallel (note that even just building the ego-nets efficiently is challenging on large networks). Our experimental findings are quite compelling: at a microscopic level it is easy to detect high quality communities. Leveraging on this fact we, then, develop new features for friend suggestion based on co-occurrences of two nodes in different ego-nets’ communities. Our new features can be computed efficiently on very large scale graphs by just analyzing the neighborhood of each node. Furthermore, we prove formally on a stylized model, and by experimental analysis that this new similarity measure outperforms the classic local features employed for friend suggestions.", "title": "" }, { "docid": "70fa92a5211b1cb59323be294ea048e9", "text": "We present a technique for providing feedback on syntax errors that uses Recurrent neural networks (RNNs) to model syntactically valid token sequences. Syntax errors constitute one of the largest classes of errors (34%) in our dataset of student submissions obtained from a MOOC course on edX. 
For a given programming assignment, we first learn an RNN to model all valid token sequences using the set of syntactically correct submissions. Then, for a student submission with syntax errors, we query the learnt RNN model with the prefix token sequence to predict token sequences that can fix the error by either replacing or inserting the predicted token sequence at the error location. We evaluate our technique on over 14, 000 student submissions with syntax errors.", "title": "" }, { "docid": "a1cd5424dea527e365f038fce60fd821", "text": "Producing literature reviews of complex evidence for policymaking questions is a challenging methodological area. There are several established and emerging approaches to such reviews, but unanswered questions remain, especially around how to begin to make sense of large data sets drawn from heterogeneous sources. Drawing on Kuhn's notion of scientific paradigms, we developed a new method-meta-narrative review-for sorting and interpreting the 1024 sources identified in our exploratory searches. We took as our initial unit of analysis the unfolding 'storyline' of a research tradition over time. We mapped these storylines by using both electronic and manual tracking to trace the influence of seminal theoretical and empirical work on subsequent research within a tradition. We then drew variously on the different storylines to build up a rich picture of our field of study. We identified 13 key meta-narratives from literatures as disparate as rural sociology, clinical epidemiology, marketing and organisational studies. Researchers in different traditions had conceptualised, explained and investigated diffusion of innovations differently and had used different criteria for judging the quality of empirical work. Moreover, they told very different over-arching stories of the progress of their research. Within each tradition, accounts of research depicted human characters emplotted in a story of (in the early stages) pioneering endeavour and (later) systematic puzzle-solving, variously embellished with scientific dramas, surprises and 'twists in the plot'. By first separating out, and then drawing together, these different meta-narratives, we produced a synthesis that embraced the many complexities and ambiguities of 'diffusion of innovations' in an organisational setting. We were able to make sense of seemingly contradictory data by systematically exposing and exploring tensions between research paradigms as set out in their over-arching storylines. In some traditions, scientific revolutions were identifiable in which breakaway researchers had abandoned the prevailing paradigm and introduced a new set of concepts, theories and empirical methods. We concluded that meta-narrative review adds value to the synthesis of heterogeneous bodies of literature, in which different groups of scientists have conceptualised and investigated the 'same' problem in different ways and produced seemingly contradictory findings. Its contribution to the mixed economy of methods for the systematic review of complex evidence should be explored further.", "title": "" }, { "docid": "7c7bec32e3949f3a6c0e1109cacd80f5", "text": "Attackers can render distributed denial-of-service attacks more difficult to defend against by bouncing their flooding traffic off of reflectors; that is, by spoofing requests from the victim to a large set of Internet servers that will in turn send their combined replies to the victim. 
The resulting dilution of locality in the flooding stream complicates the victim's abilities both to isolate the attack traffic in order to block it, and to use traceback techniques for locating the source of streams of packets with spoofed source addresses, such as ITRACE [Be00a], probabilistic packet marking [SWKA00], [SP01], and SPIE [S+01]. We discuss a number of possible defenses against reflector attacks, finding that most prove impractical, and then assess the degree to which different forms of reflector traffic will have characteristic signatures that the victim can use to identify and filter out the attack traffic. Our analysis indicates that three types of reflectors pose particularly significant threats: DNS and Gnutella servers, and TCP-based servers (particularly Web servers) running on TCP implementations that suffer from predictable initial sequence numbers. We argue in conclusion in support of \"reverse ITRACE\" [Ba00] and for the utility of packet traceback techniques that work even for low volume flows, such as SPIE.", "title": "" }, { "docid": "967f1e68847111ecf96d964422bea913", "text": "Text preprocessing is an essential stage in text categorization (TC) particularly and text mining generally. Morphological tools can be used in text preprocessing to reduce multiple forms of the word to one form. There has been a debate among researchers about the benefits of using morphological tools in TC. Studies in the English language illustrated that performing stemming during the preprocessing stage degrades the performance slightly. However, they have a great impact on reducing the memory requirement and storage resources needed. The effect of the preprocessing tools on Arabic text categorization is an area of research. This work provides an evaluation study of several morphological tools for Arabic Text Categorization. The study includes using the raw text, the stemmed text, and the root text. The stemmed and root text are obtained using two different preprocessing tools. The results illustrated that using light stemmer combined with a good performing feature selection method enhances the performance of Arabic Text Categorization especially for small threshold values.", "title": "" }, { "docid": "eccbc87e4b5ce2fe28308fd9f2a7baf3", "text": "3", "title": "" }, { "docid": "df4477952bc78f9ddca6a637b0d9b990", "text": "Food preference learning is an important component of wellness applications and restaurant recommender systems as it provides personalized information for effective food targeting and suggestions. However, existing systems require some form of food journaling to create a historical record of an individual's meal selections. In addition, current interfaces for food or restaurant preference elicitation rely extensively on text-based descriptions and rating methods, which can impose high cognitive load, thereby hampering wide adoption.\n In this paper, we propose PlateClick, a novel system that bootstraps food preference using a simple, visual quiz-based user interface. We leverage a pairwise comparison approach with only visual content. Using over 10,028 recipes collected from Yummly, we design a deep convolutional neural network (CNN) to learn the similarity distance metric between food images. Our model is shown to outperform state-of-the-art CNN by 4 times in terms of mean Average Precision. We explore a novel online learning framework that is suitable for learning users' preferences across a large scale dataset based on a small number of interactions (≤ 15). 
Our online learning approach balances exploitation-exploration and takes advantage of food similarities using preference-propagation in locally connected graphs.\n We evaluated our system in a field study of 227 anonymous users. The results demonstrate that our method outperforms other baselines by a significant margin, and the learning process can be completed in less than one minute. In summary, PlateClick provides a light-weight, immersive user experience for efficient food preference elicitation.", "title": "" }, { "docid": "0081abb45db5d3e893ee1086d1680041", "text": "Technologies are amplifying each other in a fusion of technologies across the physical, digital and biological worlds. We are witnessing profound shifts across all industries, marked by the emergence of new business models, the disruption of incumbents and the reshaping of production, consumption, transportation and delivery systems. On the social front, a paradigm shift is underway in how we work and communicate, as well as how we express, inform, and entertain ourselves. Decision makers are too often caught in traditional linear (non-disruptive) thinking or too absorbed by immediate concerns to think strategically about the forces of disruption and innovation shaping our future.", "title": "" }, { "docid": "c4b0d93105e434d4d407575157a005a4", "text": "Online Judge systems are widely used by undergraduates to study programming. Users often find it difficult to locate the problems they prefer among the massive problem set. This paper proposes a specialized recommendation model for online judge systems in order to present alternative problems that users may potentially be interested in. In this model, a three-level collaborative filtering recommendation method is referred to and redesigned to cater for the specific interaction mode of Online Judge. This method is described in detail in this paper and implemented in our demo system, which demonstrates its availability.", "title": "" }, { "docid": "33cab0ec47af5e40d64e34f8ffc7dd6f", "text": "This inaugural article has a twofold purpose: (i) to present a simpler and more general justification of the fundamental scaling laws of quasibrittle fracture, bridging the asymptotic behaviors of plasticity, linear elastic fracture mechanics, and Weibull statistical theory of brittle failure, and (ii) to give a broad but succinct overview of various applications and ramifications covering many fields, many kinds of quasibrittle materials, and many scales (from 10(-8) to 10(6) m). The justification rests on developing a method to combine dimensional analysis of cohesive fracture with second-order accurate asymptotic matching. This method exploits the recently established general asymptotic properties of the cohesive crack model and nonlocal Weibull statistical model. The key idea is to select the dimensionless variables in such a way that, in each asymptotic case, all of them vanish except one. The minimal nature of the hypotheses made explains the surprisingly broad applicability of the scaling laws.", "title": "" }, { "docid": "8bcf4a423458feb81e44f362ba3177a9", "text": "In recent years, power electronic energy storage systems using super capacitor banks have been widely studied and developed for electric vehicles. In this paper, a full-bridge/center-tapped push-pull circuit with active clamp-based soft switching bidirectional DC-DC converter and its control method are presented and discussed. 
From the results of basic experimental demonstration, the proposed system is able to perform adequate charging and discharging operation between the low-voltage high-current super capacitor side and the high-voltage low-current side with the drive train and main battery. In addition, RCDi snubber losses that appear in the basic circuit topology are drastically reduced by ZCS/ZVS operation with the assistance of the newly added active clamp circuit, as well as ZVS operation with a lossless snubber capacitor in the high-voltage primary side.", "title": "" }, { "docid": "af3faaf203d771bd7fae3363b8ec8060", "text": "Recent advances in biometrics, information forensics, and security have improved the accuracy of biometric systems, mainly those based on facial information. However, an ever-growing challenge is the vulnerability of such systems to impostor attacks, in which users without access privileges try to authenticate themselves as valid users. In this work, we present a solution to video-based face spoofing to biometric systems. This type of attack is characterized by presenting a video of a real user to the biometric system. To the best of our knowledge, this is the first attempt to deal with video-based face spoofing based on the analysis of global information that is invariant to video content. Our approach takes advantage of noise signatures generated by the recaptured video to distinguish between fake and valid access. To capture the noise and obtain a compact representation, we use the Fourier spectrum followed by the computation of the visual rhythm and extraction of the gray-level co-occurrence matrices, used as feature descriptors. Results show the effectiveness of the proposed approach to distinguish between valid and fake users for video-based spoofing with near-perfect classification results.", "title": "" }, { "docid": "1f95e7fcd4717429259aa4b9581cf308", "text": "This project is mainly focused on developing a system that helps animal researchers & wildlife photographers overcome the many challenges they face in their daily work. When they engage in such situations, they need to wait patiently for long hours, maybe several days, in whatever location and under severe weather conditions until capturing what they are interested in. Also there is a big demand for rare wildlife photographs. The proposed method automates the task using a microcontroller-controlled camera, image processing and machine learning techniques. First, with the aid of the microcontroller and four passive IR sensors, the system will automatically detect the presence of an animal and rotate the camera toward that direction. Then the motion detection algorithm will get the animal into the middle of the frame so it can be captured by a high-end autofocus web cam. The captured images are then sent to the PC and compared with a photograph database to check whether the animal is exactly the same as the photographer's choice. If the captured animal is exactly the one that needs to be captured, the system will automatically capture more. Though there are several technologies available, none of these are capable of recognizing what they capture. There is no detection of animal presence in different angles. Most of the available equipment uses a set of PIR sensors, and whatever disturbs the IR field will automatically be captured and stored. Night time images are black and white and have less detail and clarity due to infrared flash quality. If the infrared flash is designed for best image quality, range will be sacrificed. 
The photographer might be interested in a specific animal, but there is no facility to recognize automatically whether the captured animal is the photographer's choice or not.", "title": "" }, { "docid": "af827d6ec2d93f1bce7b2c4938fac378", "text": "During the last two decades, the credit card system has been widely used as a mechanism to drive the global economy to grow dramatically. A credit card provider has issued millions of credit cards to its customers. However, issuing credit cards to the wrong customers can be a crucial factor of a financial crisis, e.g., the ones that happened in 1997 and 2008. This paper presents a systematic analysis and a comprehensive review of data mining techniques and their applications in the credit card process, which we divide into 4 main activities. We have studied research works which were published between 2007 and the first quarter of 2015 inclusive. Our work focuses on data mining techniques applied specifically in the credit card process, and this makes our review different from others' which emphasize much wider areas. As a result, this survey can be useful for any credit card provider to select an appropriate solution for their problem and, also, for researchers to have a comprehensive view of the literature in this area.", "title": "" } ]
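One of the negative passages in the record above (docid 8c0e3083..., "community structure of ego-networks") describes building and clustering the ego-net of every node and then counting how often two nodes co-occur in the same ego-net community as a friend-suggestion feature. The sketch below illustrates that idea in plain Python; it is not the paper's parallel implementation, and networkx label propagation stands in for whatever clustering step the authors actually used.

```python
# Minimal sketch of per-node ego-net community extraction; an assumption-laden
# stand-in for the paper's parallel pipeline, not a reimplementation of it.
import networkx as nx
from collections import Counter
from itertools import combinations

def ego_net_communities(G):
    """For each node, cluster its ego-net (neighbors only, center removed)."""
    all_communities = {}
    for u in G.nodes():
        # neighbors of u and the edges among them, without u itself
        ego = nx.ego_graph(G, u, radius=1, center=False)
        # label propagation is a placeholder for the authors' clustering step
        comms = nx.algorithms.community.label_propagation_communities(ego)
        all_communities[u] = [frozenset(c) for c in comms]
    return all_communities

def cooccurrence_feature(all_communities):
    """Count how often two nodes land in the same ego-net community:
    a candidate friend-suggestion feature as described in the abstract."""
    counts = Counter()
    for comms in all_communities.values():
        for community in comms:
            for a, b in combinations(sorted(community), 2):
                counts[(a, b)] += 1
    return counts

if __name__ == "__main__":
    G = nx.karate_club_graph()  # small stand-in social graph
    feats = cooccurrence_feature(ego_net_communities(G))
    print(feats.most_common(5))
```

Because each ego-net only depends on a node's immediate neighborhood, the loop over nodes is embarrassingly parallel, which is consistent with the passage's claim that the features can be computed by analyzing each node's neighborhood independently.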
scidocsrr
5f5b949a4f90253e6585c69ecc2325e1
Four Principles of Memory Improvement : A Guide to Improving Learning Efficiency
[ { "docid": "660d47a9ffc013f444954f3f210de05e", "text": "Taking tests enhances learning. But what happens when one cannot answer a test question-does an unsuccessful retrieval attempt impede future learning or enhance it? The authors examined this question using materials that ensured that retrieval attempts would be unsuccessful. In Experiments 1 and 2, participants were asked fictional general-knowledge questions (e.g., \"What peace treaty ended the Calumet War?\"). In Experiments 3-6, participants were shown a cue word (e.g., whale) and were asked to guess a weak associate (e.g., mammal); the rare trials on which participants guessed the correct response were excluded from the analyses. In the test condition, participants attempted to answer the question before being shown the answer; in the read-only condition, the question and answer were presented together. Unsuccessful retrieval attempts enhanced learning with both types of materials. These results demonstrate that retrieval attempts enhance future learning; they also suggest that taking challenging tests-instead of avoiding errors-may be one key to effective learning.", "title": "" }, { "docid": "4d7cd44f2bbe9896049a7868165bd415", "text": "Testing previously studied information enhances long-term memory, particularly when the information is successfully retrieved from memory. The authors examined the effect of unsuccessful retrieval attempts on learning. Participants in 5 experiments read an essay about vision. In the test condition, they were asked about embedded concepts before reading the passage; in the extended study condition, they were given a longer time to read the passage. To distinguish the effects of testing from attention direction, the authors emphasized the tested concepts in both conditions, using italics or bolded keywords or, in Experiment 5, by presenting the questions but not asking participants to answer them before reading the passage. Posttest performance was better in the test condition than in the extended study condition in all experiments--a pretesting effect--even though only items that were not successfully retrieved on the pretest were analyzed. The testing effect appears to be attributable, in part, to the role unsuccessful tests play in enhancing future learning.", "title": "" }, { "docid": "3faeedfe2473dc837ab0db9eb4aefc4b", "text": "The spacing effect—that is, the benefit of spacing learning events apart rather than massing them together—has been demonstrated in hundreds of experiments, but is not well known to educators or learners. I investigated the spacing effect in the realistic context of flashcard use. Learners often divide flashcards into relatively small stacks, but compared to a large stack, small stacks decrease the spacing between study trials. In three experiments, participants used a web-based study programme to learn GRE-type word pairs. Studying one large stack of flashcards (i.e. spacing) was more effective than studying four smaller stacks of flashcards separately (i.e. massing). Spacing was also more effective than cramming—that is, massing study on the last day before the test. Across experiments, spacing was more effective than massing for 90% of the participants, yet after the first study session, 72% of the participants believed that massing had been more effective than spacing. Copyright # 2009 John Wiley & Sons, Ltd.", "title": "" } ]
[ { "docid": "42d5712d781140edbc6a35703d786e15", "text": "This paper considers control and estimation problems where the sensor signals and the actuator signals are transmitted to various subsystems over a network. In contrast to traditional control and estimation problems, here the observation and control packets may be lost or delayed. The unreliability of the underlying communication network is modeled stochastically by assigning probabilities to the successful transmission of packets. This requires a novel theory which generalizes classical control/estimation paradigms. The paper offers the foundations of such a novel theory. The central contribution is to characterize the impact of the network reliability on the performance of the feedback loop. Specifically, it is shown that for network protocols where successful transmissions of packets is acknowledged at the receiver (e.g., TCP-like protocols), there exists a critical threshold of network reliability (i.e., critical probabilities for the successful delivery of packets), below which the optimal controller fails to stabilize the system. Further, for these protocols, the separation principle holds and the optimal LQG controller is a linear function of the estimated state. In stark contrast, it is shown that when there is no acknowledgement of successful delivery of control packets (e.g., UDP-like protocols), the LQG optimal controller is in general nonlinear. Consequently, the separation principle does not hold in this circumstance", "title": "" }, { "docid": "244745da710e8c401173fe39359c7c49", "text": "BACKGROUND\nIntegrating information from the different senses markedly enhances the detection and identification of external stimuli. Compared with unimodal inputs, semantically and/or spatially congruent multisensory cues speed discrimination and improve reaction times. Discordant inputs have the opposite effect, reducing performance and slowing responses. These behavioural features of crossmodal processing appear to have parallels in the response properties of multisensory cells in the superior colliculi and cerebral cortex of non-human mammals. Although spatially concordant multisensory inputs can produce a dramatic, often multiplicative, increase in cellular activity, spatially disparate cues tend to induce a profound response depression.\n\n\nRESULTS\nUsing functional magnetic resonance imaging (fMRI), we investigated whether similar indices of crossmodal integration are detectable in human cerebral cortex, and for the synthesis of complex inputs relating to stimulus identity. Ten human subjects were exposed to varying epochs of semantically congruent and incongruent audio-visual speech and to each modality in isolation. Brain activations to matched and mismatched audio-visual inputs were contrasted with the combined response to both unimodal conditions. This strategy identified an area of heteromodal cortex in the left superior temporal sulcus that exhibited significant supra-additive response enhancement to matched audio-visual inputs and a corresponding sub-additive response to mismatched inputs.\n\n\nCONCLUSIONS\nThe data provide fMRI evidence of crossmodal binding by convergence in the human heteromodal cortex. 
They further suggest that response enhancement and depression may be a general property of multisensory integration operating at different levels of the neuroaxis and irrespective of the purpose for which sensory inputs are combined.", "title": "" }, { "docid": "bd3374fefa94fbb11d344d651c0f55bc", "text": "Extensive study has been conducted in the detection of license plate for the applications in intelligent transportation system (ITS). However, these results are all based on images acquired at a resolution of 640 times 480. In this paper, a new method is proposed to extract license plate from the surveillance video which is shot at lower resolution (320 times 240) as well as degraded by video compression. Morphological operations of bottom-hat and morphology gradient are utilized to detect the LP candidates, and effective schemes are applied to select the correct one. The average rates of correct extraction and false alarms are 96.62% and 1.77%, respectively, based on the experiments using more than four hours of video. The experimental results demonstrate the effectiveness and robustness of the proposed method", "title": "" }, { "docid": "e776c87ec35d67c6acbdf79d8a5cac0a", "text": "Continuous deployment speeds up the process of existing agile methods, such as Scrum, and Extreme Programming (XP) through the automatic deployment of software changes to end-users upon passing of automated tests. Continuous deployment has become an emerging software engineering process amongst numerous software companies, such as Facebook, Github, Netflix, and Rally Software. A systematic analysis of software practices used in continuous deployment can facilitate a better understanding of continuous deployment as a software engineering process. Such analysis can also help software practitioners in having a shared vocabulary of practices and in choosing the software practices that they can use to implement continuous deployment. The goal of this paper is to aid software practitioners in implementing continuous deployment through a systematic analysis of software practices that are used by software companies. We studied the continuous deployment practices of 19 software companies by performing a qualitative analysis of Internet artifacts and by conducting follow-up inquiries. In total, we found 11 software practices that are used by 19 software companies. We also found that in terms of use, eight of the 11 software practices are common across 14 software companies. We observe that continuous deployment necessitates the consistent use of sound software engineering practices such as automated testing, automated deployment, and code review.", "title": "" }, { "docid": "512d29a398f51041466884f4decec84a", "text": "Deep reinforcement learning algorithms can learn complex behavioral skills, but real-world application of these methods requires a large amount of experience to be collected by the agent. In practical settings, such as robotics, this involves repeatedly attempting a task, resetting the environment between each attempt. However, not all tasks are easily or automatically reversible. In practice, this learning process requires extensive human intervention. In this work, we propose an autonomous method for safe and efficient reinforcement learning that simultaneously learns a forward and reset policy, with the reset policy resetting the environment for a subsequent attempt. 
By learning a value function for the reset policy, we can automatically determine when the forward policy is about to enter a non-reversible state, providing for uncertainty-aware safety aborts. Our experiments illustrate that proper use of the reset policy can greatly reduce the number of manual resets required to learn a task, can reduce the number of unsafe actions that lead to non-reversible states, and can automatically induce a curriculum.", "title": "" }, { "docid": "113b8cfda23cf7e8b3d7b4821d549bf7", "text": "A load dependent zero-current detector is proposed in this paper for speeding up the transient response when the load current changes from heavy to light loads. The fast transient control signal determines how long the reversed inductor current flows according to sudden load variations. At the beginning of a load variation from heavy to light loads, the sensed voltage is compared with a higher voltage to discharge the overshoot output voltage for achieving fast transient response. Besides, for an adaptive reversed current period, the fast transient mechanism is turned off since the output voltage is rapidly regulated back to the acceptable level. Simulation results demonstrate that the ZCD circuit permits the reverse current to flow back into the n-type power MOSFET at the beginning of load variations. The settling time is decreased to about 35 μs when the load current suddenly changes from 500 mA to 10 mA.", "title": "" }, { "docid": "dc5bb80426556e3dd9090a705d3e17b4", "text": "OBJECTIVES\nThe aim of this study was to locate the scientific literature dealing with addiction to the Internet, video games, and cell phones and to characterize the pattern of publications in these areas.\n\n\nMETHODS\nOne hundred seventy-nine valid articles were retrieved from PubMed and PsycINFO between 1996 and 2005 related to pathological Internet, cell phone, or video game use.\n\n\nRESULTS\nThe years with the highest numbers of articles published were 2004 (n = 42) and 2005 (n = 40). The most productive countries, in terms of number of articles published, were the United States (n = 52), China (n = 23), the United Kingdom (n = 17), Taiwan (n = 13), and South Korea (n = 9). The most commonly used language was English (65.4%), followed by Chinese (12.8%) and Spanish (4.5%). Articles were published in 96 different journals, of which 22 published 2 or more articles. The journal that published the most articles was Cyberpsychology & Behavior (n = 41). Addiction to the Internet was the most intensely studied (85.3%), followed by addiction to video games (13.6%) and cell phones (2.1%).\n\n\nCONCLUSIONS\nThe number of publications in this area is growing, but it is difficult to conduct precise searches due to a lack of clear terminology. To facilitate retrieval, bibliographic databases should include descriptor terms referring specifically to Internet, video games, and cell phone addiction as well as to more general addictions involving communications and information technologies and other behavioral addictions.", "title": "" }, { "docid": "b240041ea6a885151fd39d863b9217dc", "text": "Engaging in a test over previously studied information can serve as a potent learning event, a phenomenon referred to as the testing effect. Despite a surge of research in the past decade, existing theories have not yet provided a cohesive account of testing phenomena. The present study uses meta-analysis to examine the effects of testing versus restudy on retention. 
Key results indicate support for the role of effortful processing as a contributor to the testing effect, with initial recall tests yielding larger testing benefits than recognition tests. Limited support was found for existing theoretical accounts attributing the testing effect to enhanced semantic elaboration, indicating that consideration of alternative mechanisms is warranted in explaining testing effects. Future theoretical accounts of the testing effect may benefit from consideration of episodic and contextually derived contributions to retention resulting from memory retrieval. Additionally, the bifurcation model of the testing effect is considered as a viable framework from which to characterize the patterns of results present across the literature.", "title": "" }, { "docid": "43ef67c897e7f998b1eb7d3524d514f4", "text": "This brief proposes a delta-sigma modulator that operates at extremely low voltage without using a clock boosting technique. To maintain the advantages of a discrete-time integrator in oversampled data converters, a mixed differential difference amplifier (DDA) integrator is developed that removes the input sampling switch in a switched-capacitor integrator. Conventionally, many low-voltage delta-sigma modulators have used high-voltage generating circuits to boost the clock voltage levels. A mixed DDA integrator with both a switched-resistor and a switched-capacitor technique is developed to implement a discrete-time integrator without clock boosted switches. The proposed mixed DDA integrator is demonstrated by a third-order delta-sigma modulator with a feedforward topology. The fabricated modulator shows a 68-dB signal-to-noise-plus-distortion ratio for a 20-kHz signal bandwidth with an oversampling ratio of 80. The chip consumes 140 μW of power at a true 0.4-V power supply, which is the lowest voltage without a clock boosting technique among the state-of-the-art modulators in this signal band.", "title": "" }, { "docid": "106fefb169c7e95999fb411b4e07954e", "text": "Additional contents in web pages, such as navigation panels, advertisements, copyrights and disclaimer notices, are typically not related to the main subject and may hamper the performance of Web data mining. They are traditionally taken as noises and need to be removed properly. To achieve this, two intuitive and crucial kinds of information—the textual information and the visual information of web pages—is considered in this paper. Accordingly, Text Density and Visual Importance are defined for the Document Object Model (DOM) nodes of a web page. Furthermore, a content extraction method with these measured values is proposed. It is a fast, accurate and general method for extracting content from diverse web pages. And with the employment of DOM nodes, the original structure of the web page can be preserved. Evaluated with the CleanEval benchmark and with randomly selected pages from well-known Web sites, where various web domains and styles are tested, the effect of the method is demonstrated. The average F1-scores with our method were 8.7 % higher than the best scores among several alternative methods.", "title": "" }, { "docid": "e797fbf7b53214df32d5694527ce5ba3", "text": "One key task of fine-grained sentiment analysis of product reviews is to extract product aspects or features that users have expressed opinions on. This paper focuses on supervised aspect extraction using deep learning. 
Unlike other highly sophisticated supervised deep learning models, this paper proposes a novel and yet simple CNN model 1 employing two types of pre-trained embeddings for aspect extraction: general-purpose embeddings and domain-specific embeddings. Without using any additional supervision, this model achieves surprisingly good results, outperforming state-of-the-art sophisticated existing methods. To our knowledge, this paper is the first to report such double embeddings based CNN model for aspect extraction and achieve very good results.", "title": "" }, { "docid": "2f17160c9f01aa779b1745a57e34e1aa", "text": "OBJECTIVE\nTo report an ataxic variant of Alzheimer disease expressing a novel molecular phenotype.\n\n\nDESIGN\nDescription of a novel phenotype associated with a presenilin 1 mutation.\n\n\nSETTING\nThe subject was an outpatient who was diagnosed at the local referral center.\n\n\nPATIENT\nA 28-year-old man presented with psychiatric symptoms and cerebellar signs, followed by cognitive dysfunction. Severe beta-amyloid (Abeta) deposition was accompanied by neurofibrillary tangles and cell loss in the cerebral cortex and by Purkinje cell dendrite loss in the cerebellum. A presenilin 1 gene (PSEN1) S170F mutation was detected.\n\n\nMAIN OUTCOME MEASURES\nWe analyzed the processing of Abeta precursor protein in vitro as well as the Abeta species in brain tissue.\n\n\nRESULTS\nThe PSEN1 S170F mutation induced a 3-fold increase of both secreted Abeta(42) and Abeta(40) species and a 60% increase of secreted Abeta precursor protein in transfected cells. Soluble and insoluble fractions isolated from brain tissue showed a prevalence of N-terminally truncated Abeta species ending at both residues 40 and 42.\n\n\nCONCLUSION\nThese findings define a new Alzheimer disease molecular phenotype and support the concept that the phenotypic variability associated with PSEN1 mutations may be dictated by the Abeta aggregates' composition.", "title": "" }, { "docid": "0b5f0cd5b8d49d57324a0199b4925490", "text": "Deep brain stimulation (DBS) has an increasing role in the treatment of idiopathic Parkinson's disease. Although, the subthalamic nucleus (STN) is the commonly chosen target, a number of groups have reported that the most effective contact lies dorsal/dorsomedial to the STN (region of the pallidofugal fibres and the rostral zona incerta) or at the junction between the dorsal border of the STN and the latter. We analysed our outcome data from Parkinson's disease patients treated with DBS between April 2002 and June 2004. During this period we moved our target from the STN to the region dorsomedial/medial to it and subsequently targeted the caudal part of the zona incerta nucleus (cZI). We present a comparison of the motor outcomes between these three groups of patients with optimal contacts within the STN (group 1), dorsomedial/medial to the STN (group 2) and in the cZI nucleus (group 3). Thirty-five patients with Parkinson's disease underwent MRI directed implantation of 64 DBS leads into the STN (17), dorsomedial/medial to STN (20) and cZI (27). The primary outcome measure was the contralateral Unified Parkinson's Disease Rating Scale (UPDRS) motor score (off medication/off stimulation versus off medication/on stimulation) measured at follow-up (median time 6 months). The secondary outcome measures were the UPDRS III subscores of tremor, bradykinesia and rigidity. Dyskinesia score, L-dopa medication reduction and stimulation parameters were also recorded. 
The mean adjusted contralateral UPDRS III score with cZI stimulation was 3.1 (76% reduction) compared to 4.9 (61% reduction) in group 2 and 5.7 (55% reduction) in the STN (P-value for trend <0.001). There was a 93% improvement in tremor with cZI stimulation versus 86% in group 2 versus 61% in group 1 (P-value = 0.01). Adjusted 'off-on' rigidity scores were 1.0 for the cZI group (76% reduction), 2.0 for group 2 (52% reduction) and 2.1 for group 1 (50% reduction) (P-value for trend = 0.002). Bradykinesia was more markedly improved in the cZI group (65%) compared to group 2 (56%) or STN group (59%) (P-value for trend = 0.17). There were no statistically significant differences in the dyskinesia scores, L-dopa medication reduction and stimulation parameters between the three groups. Stimulation related complications were seen in some group 2 patients. High frequency stimulation of the cZI results in greater improvement in contralateral motor scores in Parkinson's disease patients than stimulation of the STN. We discuss the implications of this finding and the potential role played by the ZI in Parkinson's disease.", "title": "" }, { "docid": "06502355f6db37b73806e9e57476e749", "text": "BACKGROUND\nBecause the trend of pharmacotherapy is toward controlling diet rather than administration of drugs, in our study we examined the probable relationship between Creatine (Cr) or Whey (Wh) consumption and anesthesia (analgesia effect of ketamine). Creatine and Wh are among the most favorable supplements in the market. Whey is a protein, which is extracted from milk and is a rich source of amino acids. Creatine is an amino acid derivative that can change to ATP in the body. Both of these supplements result in Nitric Oxide (NO) retention, which is believed to be effective in N-Methyl-D-aspartate (NMDA) receptor analgesia.\n\n\nOBJECTIVES\nThe main question of this study was whether Wh and Cr are effective on analgesic and anesthetic characteristics of ketamine and whether this is related to NO retention or amino acids' features.\n\n\nMATERIALS AND METHODS\nWe divided 30 male Wistar rats to three (n = 10) groups; including Cr, Wh and sham (water only) groups. Each group was administered (by gavage) the supplements for an intermediate dosage during 25 days. After this period, they became anesthetized using a Ketamine-Xylazine (KX) and their time to anesthesia and analgesia, and total sleep time were recorded.\n\n\nRESULTS\nData were analyzed twice using the SPSS 18 software with Analysis of Variance (ANOVA) and post hoc test; first time we expunged the rats that didn't become anesthetized and the second time we included all of the samples. There was a significant P-value (P < 0.05) for total anesthesia time in the second analysis. Bonferroni multiple comparison indicated that the difference was between Cr and Sham groups (P < 0.021).\n\n\nCONCLUSIONS\nThe data only indicated that there might be a significant relationship between Cr consumption and total sleep time. Further studies, with rats of different gender and different dosage of supplement and anesthetics are suggested.", "title": "" }, { "docid": "5bf2c4a187b35ad5c4e69aef5eb9ffea", "text": "In the last decade, the research of the usability of mobile phones has been a newly evolving area with few established methodologies and realistic practices that ensure capturing usability in evaluation. 
Thus, there exists growing demand to explore appropriate evaluation methodologies that evaluate the usability of mobile phones quickly as well as comprehensively. This study aims to develop a task-based usability checklist based on heuristic evaluations from the viewpoint of mobile phone user interface (UI) practitioners. A hierarchical structure of UI design elements and usability principles related to mobile phones was developed and then utilized to develop the checklist. To demonstrate the practical effectiveness of the proposed checklist, comparative experiments were conducted on the usability checklist and usability testing. The majority of usability problems found by usability testing, and additional problems, were discovered by the proposed checklist. It is expected that the usability checklist proposed in this study could be used quickly and efficiently by usability practitioners to evaluate the mobile phone UI in the middle of the mobile phone development process.", "title": "" }, { "docid": "35ae4e59fd277d57c2746dfccf9b26b0", "text": "In the field of saliency detection, many graph-based algorithms heavily depend on the accuracy of the pre-processed superpixel segmentation, which leads to significant sacrifice of detail information from the input image. In this paper, we propose a novel bottom-up saliency detection approach that takes advantage of both region-based features and image details. To provide more accurate saliency estimations, we first optimize the image boundary selection by the proposed erroneous boundary removal. By taking the image details and region-based estimations into account, we then propose the regularized random walks ranking to formulate pixel-wise saliency maps from the superpixel-based background and foreground saliency estimations. Experiment results on two public datasets indicate the significantly improved accuracy and robustness of the proposed algorithm in comparison with 12 state-of-the-art saliency detection approaches.", "title": "" }, { "docid": "cd3d9bb066729fc7107c0fef89f664fe", "text": "The extended contact hypothesis proposes that knowledge that an in-group member has a close relationship with an out-group member can lead to more positive intergroup attitudes. Proposed mechanisms are the in-group or out-group member serving as positive exemplars and the inclusion of the out-group member's group membership in the self. In Studies 1 and 2, respondents knowing an in-group member with an out-group friend had less negative attitudes toward that out-group, even controlling for dispositional variables and direct out-group friendships. Study 3, with constructed intergroup-conflict situations (on the Robbers Cave model), found reduced negative out-group attitudes after participants learned of cross-group friendships. Study 4, a minimal group experiment, showed less negative out-group attitudes for participants observing an apparent in-group-out-group friendship.", "title": "" }, { "docid": "f04682957e97b8ccb4f40bf07dde2310", "text": "This paper introduces a dataset gathered entirely in urban scenarios with a car equipped with one stereo camera and five laser scanners, among other sensors. One distinctive feature of the present dataset is the existence of high-resolution stereo images grabbed at a high rate (20 fps) during a 36.8 km trajectory, which allows the benchmarking of a variety of computer vision techniques. We describe the employed sensors and highlight some applications which could be benchmarked with the presented work. 
Both plain text and binary files are provided, as well as open source tools for working with the binary versions. The dataset is available for download at http://www.mrpt.org/MalagaUrbanDataset.", "title": "" }, { "docid": "644d2fcc7f2514252c2b9da01bb1ef42", "text": "We now describe an interesting application of SVD to text documents. Suppose we represent documents as a bag of words, so Xij is the number of times word j occurs in document i, for j = 1 : W and i = 1 : D, where W is the number of words and D is the number of documents. To find a document that contains a given word, we can use standard search procedures, but this can get confused by synonymy (different words with the same meaning) and polysemy (same word with different meanings). An alternative approach is to assume that X was generated by some low dimensional latent representation X̂, where K is the number of latent dimensions. If we compare documents in the latent space, we should get improved retrieval performance, because words of similar meaning get mapped to similar low dimensional locations. We can compute a low dimensional representation of X by computing the SVD, and then taking the top K singular values/vectors.", "title": "" }, { "docid": "e289d20455fd856ce4cf72589b3e206b", "text": "Transfer learning is a vital technique that generalizes models trained for one setting or task to other settings or tasks. For example, in speech recognition, an acoustic model trained for one language can be used to recognize speech in another language, with little or no re-training data. Transfer learning is closely related to multi-task learning (cross-lingual vs. multilingual), and is traditionally studied in the name of 'model adaptation'. Recent advances in deep learning show that transfer learning becomes much easier and more effective with high-level abstract features learned by deep models, and the 'transfer' can be conducted not only between data distributions and data types, but also between model structures (e.g., shallow nets and deep nets) or even model types (e.g., Bayesian models and neural models). This review paper summarizes some recent prominent research towards this direction, particularly for speech and language processing. We also report some results from our group and highlight the potential of this very interesting research field.", "title": "" } ]
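The SVD passage just above stops at "taking the top K singular values/vectors". A small numpy sketch of that truncated-SVD (latent semantic analysis) step follows; the toy term-count matrix and the choice K = 2 are made up purely for illustration and are not taken from the passage.

```python
# Small numpy sketch of the truncated-SVD step described in the passage above:
# X[i, j] = number of times word j occurs in document i.
import numpy as np

X = np.array([[2, 1, 0, 0],
              [1, 2, 0, 1],
              [0, 0, 3, 1],
              [0, 1, 2, 2]], dtype=float)   # D = 4 documents, W = 4 words (toy data)
K = 2                                        # number of latent dimensions (illustrative)

U, s, Vt = np.linalg.svd(X, full_matrices=False)
X_hat = U[:, :K] @ np.diag(s[:K]) @ Vt[:K, :]   # rank-K approximation of X
docs_latent = U[:, :K] * s[:K]                  # documents mapped into the K-dim latent space

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Comparing documents in the latent space rather than on raw counts is what
# the passage says mitigates synonymy and polysemy.
print(cosine(docs_latent[0], docs_latent[1]))
```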
scidocsrr
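One of the negative passages in the record above (docid e797fbf7..., aspect extraction) describes a "double embeddings" CNN: each token is represented by the concatenation of a general-purpose embedding and a domain-specific embedding before being fed to the convolutional layers. The sketch below shows only that input construction; the vocabulary, dimensions, and random vectors are placeholders rather than the paper's actual resources, and the CNN itself is omitted.

```python
# Sketch of the "double embeddings" input construction described in the
# aspect-extraction passage; all names and sizes here are illustrative
# assumptions, not the paper's actual embeddings.
import numpy as np

rng = np.random.default_rng(0)
vocab = {"the": 0, "battery": 1, "life": 2, "is": 3, "great": 4}
D_GEN, D_DOM = 8, 4                                 # placeholder embedding sizes
E_general = rng.normal(size=(len(vocab), D_GEN))    # in practice: general-purpose vectors
E_domain = rng.normal(size=(len(vocab), D_DOM))     # in practice: domain-specific vectors

def embed(tokens):
    idx = [vocab[t] for t in tokens]
    # concatenate the two lookups per token -> shape (T, D_GEN + D_DOM)
    return np.concatenate([E_general[idx], E_domain[idx]], axis=1)

x = embed(["the", "battery", "life", "is", "great"])
# A 1-D convolution over this (T, 12) matrix would then score each token for
# aspect tags; the convolutional layers are deliberately left out of the sketch.
print(x.shape)  # (5, 12)
```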
294a04f9ad01b5739b6a2baa07f59c3a
Research Directions on Semantic Web and Education
[ { "docid": "49f68a9534a602074066948a13164ad4", "text": "Recent developments in Web technologies and using AI techniques to support efforts in making the Web more intelligent and provide higher-level services to its users have opened the door to building the Semantic Web. That fact has a number of important implications for Web-based education, since Web-based education has become a very important branch of educational technology. Classroom independence and platform independence of Web-based education, availability of authoring tools for developing Web-based courseware, cheap and efficient storage and distribution of course materials, hyperlinks to suggested readings, digital libraries, and other sources of references relevant for the course are but a few of a number of clear advantages of Web-based education. However, there are several challenges in improving Web-based education, such as providing for more adaptivity and intelligence. Developments in the Semantic Web, while contributing to the solution to these problems, also raise new issues that must be considered if we are to progress. This paper surveys the basics of the Semantic Web and discusses its importance in future Web-based educational applications. Instead of trying to rebuild some aspects of a human brain, we are going to build a brain of and for humankind. D. Fensel and M.A. Musen (Fensel & Musen, 2001)", "title": "" } ]
[ { "docid": "eb2d29417686cc86a45c33694688801f", "text": "We present a method to incorporate global orientation information from the sun into a visual odometry pipeline using only the existing image stream, where the sun is typically not visible. We leverage recent advances in Bayesian Convolutional Neural Networks to train and implement a sun detection model that infers a three-dimensional sun direction vector from a single RGB image. Crucially, our method also computes a principled uncertainty associated with each prediction, using a Monte Carlo dropout scheme. We incorporate this uncertainty into a sliding window stereo visual odometry pipeline where accurate uncertainty estimates are critical for optimal data fusion. Our Bayesian sun detection model achieves a median error of approximately 12 degrees on the KITTI odometry benchmark training set, and yields improvements of up to 42% in translational ARMSE and 32% in rotational ARMSE compared to standard VO. An open source implementation of our Bayesian CNN sun estimator (Sun-BCNN) using Caffe is available at https://github.com/utiasSTARS/sun-bcnn-vo.", "title": "" }, { "docid": "552d253f8cce654dd5ea289ab9520a4c", "text": "Copyright and Moral Rights for the articles on this site are retained by the individual authors and/or other copyright owners. For more information on Open Research Online's data policy on reuse of materials please consult the policies page. This paper is a systematic review of the literature on organizational learning and knowledge with relevance to public service organizations. Organizational learning and knowledge are important to public sector organizations, which share complex external challenges with private organizations, but have different drivers and goals for knowledge. The evidence shows that the concepts of organizational learning and knowledge are under-researched in relation to the public sector and, importantly, this raises wider questions about the extent to which context is taken into consideration in terms of learning and knowledge more generally across all sectors. A dynamic model of organizational learning within and across organizational boundaries is developed that depends on four sets of factors: features of the source organization; features of the recipient organization; the characteristics of the relationship between organizations; and the environmental context. The review concludes, first, that defining 'organization' is an important element of understanding organizational learning and knowledge. Second, public organizations constitute an important, distinctive context for the study of organizational learning and knowledge. Third, there continues to be an over-reliance on the private sector as the principal source of theoretical understanding and empirical research and this is conceptually limiting for the understanding of organizational learning and knowledge. Fourth, differences as well as similarities between organizational sectors require conceptualization and research that acknowledge sector-specific aims, values and structures. Finally, it is concluded that frameworks for explaining processes of organizational learning at different levels need to be sufficiently dynamic and complex to accommodate public organizations.", "title": "" }, { "docid": "6b4a4e5271f5a33d3f30053fc6c1a4ff", "text": "Based on environmental, legal, social, and economic factors, reverse logistics and closed-loop supply chain issues have attracted attention among both academia and practitioners. 
This attention is evident by the vast number of publications in scientific journals which have been published in recent years. Hence, a comprehensive literature review of recent and state-of-the-art papers is vital to draw a framework of the past, and to shed light on future directions. The aim of this paper is to review recently published papers in reverse logistic and closed-loop supply chain in scientific journals. A total of 382 papers published between January 2007 and March 2013 are selected and reviewed. The papers are then analyzed and categorized to construct a useful foundation of past research. Finally, gaps in the literature are identified to clarify and to suggest future research opportunities. 2014 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "c8977fe68b265b735ad4261f5fe1ec25", "text": "We present ACQUINE - Aesthetic Quality Inference Engine, a publicly accessible system which allows users to upload their photographs and have them rated automatically for aesthetic quality. The system integrates a support vector machine based classifier which extracts visual features on the fly and performs real-time classification and prediction. As the first publicly available tool for automatically determining the aesthetic value of an image, this work is a significant first step in recognizing human emotional reaction to visual stimulus. In this paper, we discuss fundamentals behind this system, and some of the challenges faced while creating it. We report statistics generated from over 140,000 images uploaded by Web users. The system is demonstrated at http://acquine.alipr.com.", "title": "" }, { "docid": "dc9abfd745d4267a5fcd66ce1d977acb", "text": "Advances in information technology and its widespread growth in several areas of business, engineering, medical, and scientific studies are resulting in information/data explosion. Knowledge discovery and decision-making from such rapidly growing voluminous data are a challenging task in terms of data organization and processing, which is an emerging trend known as big data computing, a new paradigm that combines large-scale compute, new data-intensive techniques, and mathematical models to build data analytics. Big data computing demands a huge storage and computing for data curation and processing that could be delivered from on-premise or clouds infrastructures. This paper discusses the evolution of big data computing, differences between traditional data warehousing and big data, taxonomy of big data computing and underpinning technologies, integrated platform of big data and clouds known as big data clouds, layered architecture and components of big data cloud, and finally open-technical challenges and future directions. Copyright © 2015 John Wiley & Sons, Ltd.", "title": "" }, { "docid": "537d47c4bb23d9b60b164d747cb54cd9", "text": "Comprehending computer programs is one of the core software engineering activities. Software comprehension is required when a programmer maintains, reuses, migrates, reengineers, or enhances software systems. Due to this, a large amount of research has been carried out, in an attempt to guide and support software engineers in this process. Several cognitive models of program comprehension have been suggested, which attempt to explain how a software engineer goes about the process of understanding code. 
However, research has suggested that there is no one ‘all encompassing’ cognitive model that can explain the behavior of ‘all’ programmers, and that it is more likely that programmers, depending on the particular problem, will swap between models (Letovsky, 1986). This paper identifies the key components of program comprehension models, and attempts to evaluate currently accepted models in this framework. It also highlights the commonalities, conflicts, and gaps between models, and presents possibilities for future research, based on its findings.", "title": "" }, { "docid": "50cddaad75b7598bd9ce50163324e4cf", "text": "In this paper, we propose a multi-object tracking and reconstruction approach through measurement-level fusion of LiDAR and camera. The proposed method, regardless of object class, estimates 3D motion and structure for all rigid obstacles. Using an intermediate surface representation, measurements from both sensors are processed within a joint framework. We combine optical flow, surface reconstruction, and point-to-surface terms in a tightly-coupled non-linear energy function, which is minimized using Iterative Reweighted Least Squares (IRLS). We demonstrate the performance of our model on different datasets (KITTI with Velodyne HDL-64E and our collected data with 4-layer ScaLa Ibeo), and show an improvement in velocity error and crispness over state-of-the-art trackers.", "title": "" }, { "docid": "ecfa2ca992685dd0eda652f8aa021fb4", "text": "We investigate the parallelization of reinforcement learning algorithms using MapReduce, a popular parallel computing framework. We present parallel versions of several dynamic programming algorithms, including policy evaluation, policy iteration, and off-policy updates. Furthermore, we design parallel reinforcement learning algorithms to deal with large scale problems using linear function approximation, including model-based projection, least squares policy iteration, temporal difference learning and recent gradient temporal difference learning algorithms. We give time and space complexity analysis of the proposed algorithms. This study demonstrates how parallelization opens new avenues for solving large scale reinforcement learning problems.", "title": "" }, { "docid": "70dc7fe40f55e2b71b79d71d1119a36c", "text": "In undergoing this life, many people always try to do and get the best. New knowledge, experience, lesson, and everything that can improve the life will be done. However, many people sometimes feel confused to get those things. Feeling the limited of experience and sources to be better is one of the lacks to own. However, there is a very simple thing that can be done. This is what your teacher always manoeuvres you to do this one. Yeah, reading is the answer. Reading a book as this digital image processing principles and applications and other references can enrich your life quality. How can it be?", "title": "" }, { "docid": "2ffb20d66a0d5cb64442c2707b3155c6", "text": "A botnet is a network of compromised hosts that is under the control of a single, malicious entity, often called the botmaster. We present a system that aims to detect bot-infected machines, independent of any prior information about the command and control channels or propagation vectors, and without requiring multiple infections for correlation. Our system relies on detection models that target the characteristic fact that every bot receives commands from the botmaster to which it responds in a specific way. 
These detection models are generated automatically from network traffic traces recorded from actual bot instances. We have implemented the proposed approach and demonstrate that it can extract effective detection models for a variety of different bot families. These models are precise in describing the activity of bots and raise very few false positives.", "title": "" }, { "docid": "53afafd2fc1087989a975675ff4098d8", "text": "The sixth generation of IEEE 802.11 wireless local area networks is under development in Task Group 802.11ax. One main physical layer (PHY) novel feature in the IEEE 802.11ax amendment is the specification of orthogonal frequency division multiplexing (OFDM) uplink multi-user multiple-input multiple-output (UL MU-MIMO) techniques. A challenging issue in implementing UL MU-MIMO in the OFDM PHY is the mitigation of the relative carrier frequency offset (CFO), which can cause intercarrier interference and rotation of the constellation of received symbols and, consequently, degrade the system performance dramatically if it is not properly mitigated. In this paper, we show that a frequency domain CFO estimation and correction scheme implemented at both the transmitter (Tx) and the receiver (Rx), coupled with a pre-compensation approach at the Tx, can decrease the negative effects of the relative CFO.", "title": "" }, { "docid": "31c08c533cd4d971ec0899762829350e", "text": "Design of the 0.6–50 GHz ultra-wideband (UWB) double-ridged horn antenna (DRHA) is presented in this paper. This work focuses on several upgrades to the model that improve its performance: by adding an absorber and perforations in the coaxial-to-waveguide launcher and a Luneburg dielectric lens at the aperture of the horn, the radiation pattern at the upper end of the band and the voltage standing wave ratio (VSWR) are improved. The radiation pattern and VSWR of the new design are compared with those of the antenna before the modifications. The improved DRHA has a VSWR of less than 1.5 over the band from 1 GHz, and the main lobe remains along the antenna axis at the high frequencies of the band.", "title": "" }, { "docid": "a50ec2ab9d5d313253c6656049d608b3", "text": "A cluster algorithm for graphs called the Markov Cluster algorithm (MCL algorithm) is introduced. The algorithm provides basically an interface to an algebraic process defined on stochastic matrices, called the MCL process. The graphs may be both weighted (with nonnegative weight) and directed. Let G be such a graph. The MCL algorithm simulates flow in G by first identifying G in a canonical way with a Markov graph G1. Flow is then alternatingly expanded and contracted, leading to a row of Markov Graphs G(i). Flow expansion corresponds with taking the k-th power of a stochastic matrix, where k ∈ ℕ. Flow contraction corresponds with a parametrized operator Γr, r ≥ 0, which maps the set of (column) stochastic matrices onto itself. The image Γr M is obtained by raising each entry in M to the r-th power and rescaling each column to have sum 1 again. The heuristic underlying this approach is the expectation that flow between dense regions which are sparsely connected will evaporate. The invariant limits of the process are easily derived and in practice the process converges very fast to such a limit, the structure of which has a generic interpretation as an overlapping clustering of the graph G. Overlap is limited to cases where the input graph has a symmetric structure inducing it. The contraction and expansion parameters of the MCL process influence the granularity of the output. 
The algorithm is space and time efficient and lends itself to drastic scaling. This report describes the MCL algorithm and process, convergence towards equilibrium states, interpretation of the states as clusterings, and implementation and scalability. The algorithm is introduced by first considering several related proposals towards graph clustering, of both combinatorial and probabilistic nature. 2000 Mathematics Subject Classification: 05B20, 15A48, 15A51, 62H30, 68R10, 68T10, 90C35.", "title": "" }, { "docid": "84e8986eff7cb95808de8df9ac286e37", "text": "The purpose of this thesis is to describe one-shot-learning gesture recognition systems developed on the ChaLearn Gesture Dataset [3]. We use RGB and depth images and combine appearance (Histograms of Oriented Gradients) and motion descriptors (Histogram of Optical Flow) for parallel temporal segmentation and recognition. The Quadratic-Chi distance family is used to measure differences between histograms to capture cross-bin relationships. We also propose a new algorithm for trimming videos — to remove all the unimportant frames from videos. Our two methods both outperform other published methods and help narrow down the gap between human performance and algorithms on this task. The code has been made publicly available in the MLOSS repository.", "title": "" }, { "docid": "e8df1006565902d1b2f5189a02944bca", "text": "A research and development collaboration has been started with the goal of producing a prototype hadron calorimeter section for the purpose of proving the Particle Flow Algorithm concept for the International Linear Collider. Given the unique requirements of a Particle Flow Algorithm calorimeter, custom readout electronics must be developed to service these detectors. This paper introduces the DCal or Digital Calorimetry Chip, a custom integrated circuit developed in a 0.25um CMOS process specifically for this International Linear Collider project. The DCal is capable of handling 64 channels, producing a 1-bit Digital-to-Analog conversion of the input (i.e. hit/no hit). It maintains a 24-bit timestamp and is capable of operating either in an externally triggered mode or in a self-triggered mode. Moreover, it is capable of operating either with or without a pipeline delay. Finally, in order to permit the testing of different calorimeter technologies, its analog front end is capable of servicing Particle Flow Algorithm calorimeters made from either Resistive Plate Chambers or Gaseous Electron Multipliers.", "title": "" }, { "docid": "a4d253d6194a9a010660aedb564be39a", "text": "This work on GGS-NN is motivated by the program verification application, where we need to analyze dynamic data structures created in the heap. On a very high level, in this application a machine learning model analyzes the heap states (a graph with memory nodes and pointers as edges) during the execution of a program and comes up with logical formulas that describe the heap. These logical formulas are then fed into a theorem prover to prove the correctness of the program. Problem-specific node annotations are used to initialize .", "title": "" }, { "docid": "44941e8f5b703bcacb51b6357cba7633", "text": "Convolutional neural networks provide visual features that perform remarkably well in many computer vision applications. However, training these networks requires significant amounts of supervision. This paper introduces a generic framework to train deep networks, end-to-end, with no supervision. 
We propose to fix a set of target representations, called Noise As Targets (NAT), and to constrain the deep features to align to them. This domain agnostic approach avoids the standard unsupervised learning issues of trivial solutions and collapsing of features. Thanks to a stochastic batch reassignment strategy and a separable square loss function, it scales to millions of images. The proposed approach produces representations that perform on par with state-of-the-art unsupervised methods on ImageNet and PASCAL VOC.", "title": "" }, { "docid": "d8259846c9da256fb5f68537517fe55a", "text": "Several versions of the Daum-Huang (DH) filter have been introduced recently to address the task of discrete-time nonlinear filtering. The filters propagate a particle set over time to track the system state, but, in contrast to conventional particle filters, there is no proposal density or importance sampling involved. Particles are smoothly migrated using a particle flow derived from a log-homotopy relating the prior and the posterior. Impressive performance has been demonstrated for a wide range of systems, but the implemented algorithms rely on an extended/unscented Kalman filter (EKF/UKF) that is executed in parallel. We illustrate through simulation that the performance of the exact flow DH filter can be compromised when the UKF and EKF fail. By introducing simple but important modifications to the exact flow DH filter implementation, the performance can be improved dramatically.", "title": "" }, { "docid": "b93a476642276ddc0ff956e0434a9c36", "text": "In this paper, we present a cartoon face generation method that stands on a component-based facial feature extraction approach. Given a frontal face image as an input, our proposed system has the following stages. First, face features are extracted using an extended Active Shape Model. Outlines of the components are locally modified using edge detection, template matching and Hermit interpolation. This modification enhances the diversity of output and accuracy of the component matching required for cartoon generation. Second, to bring cartoon-specific features such as shadows, highlights and, especially, stylish drawing, an array of various face photographs and corresponding hand-drawn cartoon faces are collected. These cartoon templates are automatically decomposed into cartoon components using our proposed method for parameterizing cartoon samples, which is fast and simple. Then, using shape matching methods, the appropriate cartoon component is selected and deformed to fit the input face. Finally, a cartoon face is rendered in a vector format using the rendering rules of the selected template. Experimental results demonstrate effectiveness of our approach in generating life-like cartoon faces.", "title": "" }, { "docid": "f11ff738aaf7a528302e6ec5ed99c43c", "text": "Vehicles equipped with GPS localizers are an important sensory device for examining people’s movements and activities. Taxis equipped with GPS localizers serve the transportation needs of a large number of people driven by diverse needs; their traces can tell us where passengers were picked up and dropped off, which route was taken, and what steps the driver took to find a new passenger. In this article, we provide an exhaustive survey of the work on mining these traces. We first provide a formalization of the data sets, along with an overview of different mechanisms for preprocessing the data. 
We then classify the existing work into three main categories: social dynamics, traffic dynamics and operational dynamics. Social dynamics refers to the study of the collective behaviour of a city’s population, based on their observed movements; Traffic dynamics studies the resulting flow of the movement through the road network; Operational dynamics refers to the study and analysis of taxi driver’s modus operandi. We discuss the different problems currently being researched, the various approaches proposed, and suggest new avenues of research. Finally, we present a historical overview of the research work in this field and discuss which areas hold most promise for future research.", "title": "" } ]
scidocsrr
afb66dec6f13e397a3395d347ce77ad8
Learning Sparse Neural Networks through L0 Regularization
[ { "docid": "51e89cbff8016cedb6b097687b3d2f91", "text": "In this note we present a generative model of natural images c on isting of a deep hierarchy of layers of latent random variables, each of whic h follows a new type of distribution that we call rectified Gaussian . These rectified Gaussian units allow spike-and-slab type sparsity, while retaining the diff erentiability necessary for efficient stochastic gradient variational inference. To le arn the parameters of the new model, we approximate the posterior of the latent variab les with a variational auto-encoder. Rather than making the usual mean-field assum ption however, the encoder parameterizes a new type of structured variational approximation that retains the prior dependencies of the generative model. Using this structured posterior approximation, we are able to perform joint training of deep models with many layers of latent random variables, without having to re so t to stacking or other layerwise training procedures. 1 A structured variational auto-encoder model We propose a directed generative model consisting of a hiera rchy of layers of latent features z , z, . . . , z, where the features in each layer are generated independent ly co ditional on the features in the layer above, i.e. z i ∼ p(z j i |z ). For the conditional distribution p() we propose what we call therectified Gaussian distribution RG(μji , σ j i ). We can define this distribution by describing how we can sample from it: Definition (Rectified Gaussian distribution ). If ǫ ∼ N(0, 1), andz i = maximum(μ j i + σ j i ǫ, 0) thenz j i ∼ RG(μ j i , σ j i ). The rectified Gaussian distribution is thus a mixture of a poi nt mass at zero, and a truncated Gaussian distribution with support on the positive real line. Bo th the mass at zero and the shape of the truncated Gaussian component are determined by the same par amete s. Because of this property, the random drawz i is differentiable in(μ j i , σ j i ) for fixed ǫ, a property we will exploit later to perform efficient stochastic gradient variational inference. For the top layer of latent features z, we defineμ to be a learnable parameter vector. The standard deviationsσ i of the top layer are fixed at 1. After that, the parameters of ea ch l yer are recursively set to be μ = bμ +W j μ · z , σ = exp ( b j σ +W j σ · z j−1 ) , wherebμ andb j σ are (column) parameter vectors, W j μ andW j σ are parameter matrices, and · efines the matrix-vector dot product. The exponential function exp() is applied elementwise. After generating the last layer of latent features z, we generate the observed data x from an appropriate conditional distributionp(x|z). For example, for binary data we use independent Bernoulli distributions, where the probabilities are given by applyi ng the logistic sigmoid to another linear transformation of the latent features z. For continuous data we could use independent Gaussian", "title": "" } ]
[ { "docid": "6c53d0939d81e9bbe9a2262733d22c56", "text": "This paper investigates the effect of the inlet configuration on cooling for an air-cooled axial-flux permanent-magnet (AFPM) machine. Temperature rises in the stator were measured and compared with results predicted using computational fluid dynamic (CFD) methods linked to a detailed machine loss characterization. It is found that an improved inlet design can significantly reduce the stator temperature rises. Comparison between the validated CFD model results and the values obtained from heat transfer correlations addresses the suitability of those correlations proposed specifically for AFPM machines.", "title": "" }, { "docid": "a6df2f269603c26d72431e52e242384a", "text": "To achieve denser 3D ear model from less controlled 2D image, we explore a 3D Ear Morphable Model (3DEMM) for 3D ear reconstruction using a single 2D ear image. Considering the unique structure of ear, we propose a novel dense corresponding method. The proposed method can overcome the shortcoming of optical flow based method and achieve pixel level dense correspondences based on physiological features of ear without choosing a reference ear. Novel 3D ear shape can be recovered from a single ear image based on the proposed 3D ear morphable model. Extensive experimental results have shown that our proposed method can obtain denser 3D ear model with lower cost and higher efficiency than existing methods.", "title": "" }, { "docid": "a7317f06cf34e501cb169bdf805e7e34", "text": "It's natural to promote your best and brightest, especially when you think they may leave for greener pastures if you don't continually offer them new challenges and rewards. But promoting smart, ambitious young managers too quickly often robs them of the chance to develop the emotional competencies that come with time and experience--competencies like the ability to negotiate with peers, regulate emotions in times of crisis, and win support for change. Indeed, at some point in a manager's career--usually at the vice president level--raw talent and ambition become less important than the ability to influence and persuade, and that's the point at which the emotionally immature manager will lose his effectiveness. This article argues that delaying a promotion can sometimes be the best thing a senior executive can do for a junior manager. The inexperienced manager who is given time to develop his emotional competencies may be better prepared for the interpersonal demands of top-level leadership. The authors recommend that senior executives employ these strategies to help boost their protégés' people skills: sharpen the 360-degree feedback process, give managers cross-functional assignments to improve their negotiation skills, make the development of emotional competencies mandatory, make emotional competencies a performance measure, and encourage managers to develop informal learning partnerships with peers and mentors. Delaying a promotion can be difficult given the steadfast ambitions of many junior executives and the hectic pace of organizational life. It may mean going against the norm of promoting people almost exclusively on smarts and business results. It may also mean contending with the disappointment of an esteemed subordinate. 
But taking the time to build people's emotional competencies isn't an extravagance; it's critical to developing effective leaders.", "title": "" }, { "docid": "a21a9edde53c479bda2bd9bef3db5f65", "text": "Two-dimensional (2-D) face recognition (FR) is of interest in many verification (1:1 matching) and identification (1:N matching) applications because of its nonintrusive nature and because digital cameras are becoming ubiquitous. However, the performance of 2-D FR systems can be degraded by natural factors such as expressions, illuminations, pose, and aging. Several FR algorithms have been proposed to deal with the resulting appearance variability. However, most of these methods employ features derived in the image or the space domain whereas there are benefits to working in the spatial frequency domain (i.e., the 2-D Fourier transforms of the images). These benefits include shift-invariance, graceful degradation, and closed-form solutions. We discuss the use of spatial frequency domain methods (also known as correlation filters or correlation pattern recognition) for FR and illustrate the advantages. However, correlation filters can be computationally demanding due to the need for computing 2-D Fourier transforms and may not match well for large-scale FR problems such as in the Face Recognition Grand Challenge (FRGC) phase-II experiments that require the computation of millions of similarity metrics. We will discuss a new method [called the class-dependence feature analysis (CFA)] that reduces the computational complexity of correlation pattern recognition and show the results of applying CFA to the FRGC phase-II data", "title": "" }, { "docid": "c59e72c374b3134e347674dccb86b0a4", "text": "Lane detection and tracking and departure warning systems are important components of Intelligent Transportation Systems. They have particularly attracted great interest from industry and academia. Many architectures and commercial systems have been proposed in the literature. In this paper, we discuss the design of such systems regarding the following stages: pre-processing, detection, and tracking. For each stage, a short description of its working principle as well as their advantages and shortcomings are introduced. Our paper may possibly help in designing new systems that overcome and improve the shortcomings of current architectures.", "title": "" }, { "docid": "b715ca28f59e8a16dad408f4d29aa9c6", "text": "Networks are a fundamental tool for understanding and modeling complex systems in physics, biology, neuroscience, engineering, and social science. Many networks are known to exhibit rich, lower-order connectivity patterns that can be captured at the level of individual nodes and edges. However, higher-order organization of complex networks—at the level of small network subgraphs—remains largely unknown. Here, we develop a generalized framework for clustering networks on the basis of higher-order connectivity patterns. This framework provides mathematical guarantees on the optimality of obtained clusters and scales to networks with billions of edges. The framework reveals higher-order organization in a number of networks, including information propagation units in neuronal networks and hub structure in transportation networks. 
Results show that networks exhibit rich higher-order organizational structures that are exposed by clustering based on higher-order connectivity patterns.", "title": "" }, { "docid": "f0f432edbfd66ae86621c9888d04249d", "text": "Facial retouching is widely used in media and entertainment industry. Professional software usually require a minimum level of user expertise to achieve the desirable results. In this paper, we present an algorithm to detect facial wrinkles/imperfection. We believe that any such algorithm would be amenable to facial retouching applications. The detection of wrinkles/imperfections can allow these skin features to be processed differently than the surrounding skin without much user interaction. For detection, Gabor filter responses along with texture orientation field are used as image features. A bimodal Gaussian mixture model (GMM) represents distributions of Gabor features of normal skin versus skin imperfections. Then, a Markov random field model is used to incorporate the spatial relationships among neighboring pixels for their GMM distributions and texture orientations. An expectation-maximization algorithm then classifies skin versus skin wrinkles/imperfections. Once detected automatically, wrinkles/imperfections are removed completely instead of being blended or blurred. We propose an exemplar-based constrained texture synthesis algorithm to inpaint irregularly shaped gaps left by the removal of detected wrinkles/imperfections. We present results conducted on images downloaded from the Internet to show the efficacy of our algorithms.", "title": "" }, { "docid": "6f9ea15d361db7a92bdd349379b105cb", "text": "In this paper, we introduce and investigate a sparse additive model for subspace clustering problems. Our approach, named SASC (Sparse Additive Subspace Clustering), is essentially a functional extension of the Sparse Subspace Clustering (SSC) of Elhamifar & Vidal [7] to the additive nonparametric setting. To make our model computationally tractable, we express SASC in terms of a finite set of basis functions, and thus the formulated model can be estimated via solving a sequence of grouped Lasso optimization problems. We provide theoretical guarantees on the subspace recovery performance of our model. Empirical results on synthetic and real data demonstrate the effectiveness of SASC for clustering noisy data points into their original subspaces.", "title": "" }, { "docid": "b6e15d3931080de9a8f92d5b6e4c19e0", "text": "A low-profile, electrically small antenna with omnidirectional vertically polarized radiation similar to a short monopole antenna is presented. The antenna features less than lambda/40 dimension in height and lambda/10 or smaller in lateral dimension. The antenna is matched to a 50 Omega coaxial line without the need for external matching. The geometry of the antenna is derived from a quarter-wave transmission line resonator fed at an appropriate location to maximize current through the short-circuited end. To improve radiation from the vertical short-circuited pin, the geometry is further modified through superposition of additional resonators placed in a parallel arrangement. The lateral dimension of the antenna is miniaturized by meandering and turning the microstrip lines into form of a multi-arm spiral. The meandering between the short-circuited end and the feed point also facilitates the impedance matching. 
Through this technique, spurious horizontally polarized radiation is also minimized and a radiation pattern similar to a short dipole is achieved. The antenna is designed, fabricated and measured. Parametric studies are performed to explore further size reduction and performance improvements. Based on the studies, a dual-band antenna with enhanced gain is realized. The measurements verify that the proposed fabricated antennas feature excellent impedance match, omnidirectional radiation in the horizontal plane and low levels of cross-polarization.", "title": "" }, { "docid": "18e82265b8bd07e0265069394c7c2e78", "text": "Traditional university benches are gradually being replaced with the intensification of Open and Distance Learning (ODL) mode. With more courses going online, the main challenge is to bridge the `separation' between distance learners and tutors. In this respect the Discussion Board (DB) is gathering great interest and has become the locus of considerable research. The objective of this paper is to assess the readiness of ODL learners to embrace the discussion board as an e-learning platform using multi-pronged qualitative approaches. It investigates if learning expectations are legitimate and if the Mauritian ODL learner is ready to break away with traditional practices and become beneficiaries of a learning system which is highly mediated by the functionalities of ICT. This case study involving ten students reading for a Master's degree at the Open University of Mauritius revealed that the DB is a valuable learning tool with great potential for peer tutoring, tutor/learner interaction and development of reflective and analytical skills. On the other hand there is a need for both tutors and leaners to familiarize themselves with the functionalities of this tool. This study purports that learners must be encouraged to develop their writing skills in order to make the most of the DB platform. The acronym “TELEPHONE” is proposed in this paper to describe some key facets of DB and online learning.", "title": "" }, { "docid": "2e9d6ad38bd51fbd7af165e4b9262244", "text": "BACKGROUND\nThe assessment of blood lipids is very frequent in clinical research as it is assumed to reflect the lipid composition of peripheral tissues. Even well accepted such relationships have never been clearly established. This is particularly true in ophthalmology where the use of blood lipids has become very common following recent data linking lipid intake to ocular health and disease. In the present study, we wanted to determine in humans whether a lipidomic approach based on red blood cells could reveal associations between circulating and tissue lipid profiles. To check if the analytical sensitivity may be of importance in such analyses, we have used a double approach for lipidomics.\n\n\nMETHODOLOGY AND PRINCIPAL FINDINGS\nRed blood cells, retinas and optic nerves were collected from 9 human donors. The lipidomic analyses on tissues consisted in gas chromatography and liquid chromatography coupled to an electrospray ionization source-mass spectrometer (LC-ESI-MS). Gas chromatography did not reveal any relevant association between circulating and ocular fatty acids except for arachidonic acid whose circulating amounts were positively associated with its levels in the retina and in the optic nerve. In contrast, several significant associations emerged from LC-ESI-MS analyses. 
Particularly, lipid entities in red blood cells were positively or negatively associated with representative pools of retinal docosahexaenoic acid (DHA), retinal very-long chain polyunsaturated fatty acids (VLC-PUFA) or optic nerve plasmalogens.\n\n\nCONCLUSIONS AND SIGNIFICANCE\nLC-ESI-MS is more appropriate than gas chromatography for lipidomics on red blood cells, and further extrapolation to ocular lipids. The several individual lipid species we have identified are good candidates to represent circulating biomarkers of ocular lipids. However, further investigation is needed before considering them as indexes of disease risk and before using them in clinical studies on optic nerve neuropathies or retinal diseases displaying photoreceptors degeneration.", "title": "" }, { "docid": "7d0badaeeb94658690f0809c134d3963", "text": "Vascular tissue engineering is an area of regenerative medicine that attempts to create functional replacement tissue for defective segments of the vascular network. One approach to vascular tissue engineering utilizes seeding of biodegradable tubular scaffolds with stem (and/or progenitor) cells wherein the seeded cells initiate scaffold remodeling and prevent thrombosis through paracrine signaling to endogenous cells. Stem cells have received an abundance of attention in recent literature regarding the mechanism of their paracrine therapeutic effect. However, very little of this mechanistic research has been performed under the aegis of vascular tissue engineering. Therefore, the scope of this review includes the current state of TEVGs generated using the incorporation of stem cells in biodegradable scaffolds and potential cell-free directions for TEVGs based on stem cell secreted products. The current generation of stem cell-seeded vascular scaffolds are based on the premise that cells should be obtained from an autologous source. However, the reduced regenerative capacity of stem cells from certain patient groups limits the therapeutic potential of an autologous approach. This limitation prompts the need to investigate allogeneic stem cells or stem cell secreted products as therapeutic bases for TEVGs. The role of stem cell derived products, particularly extracellular vesicles (EVs), in vascular tissue engineering is exciting due to their potential use as a cell-free therapeutic base. EVs offer many benefits as a therapeutic base for functionalizing vascular scaffolds such as cell specific targeting, physiological delivery of cargo to target cells, reduced immunogenicity, and stability under physiological conditions. However, a number of points must be addressed prior to the effective translation of TEVG technologies that incorporate stem cell derived EVs such as standardizing stem cell culture conditions, EV isolation, scaffold functionalization with EVs, and establishing the therapeutic benefit of this combination treatment.", "title": "" }, { "docid": "4b7a885d463022a1792d99ff0c76be72", "text": "Emerging applications in sensor systems and network-wide IP traffic analysis present many technical challenges. They need distributed monitoring and continuous tracking of events. They have severe resource constraints not only at each site in terms of per-update processing time and archival space for highspeed streams of observations, but also crucially, communication constraints for collaborating on the monitoring task. These elements have been addressed in a series of recent works. 
A fundamental issue that arises is that one cannot make the \"uniqueness\" assumption on observed events which is present in previous works, since widescale monitoring invariably encounters the same events at different points. For example, within the network of an Internet Service Provider packets of the same flow will be observed in different routers; similarly, the same individual will be observed by multiple mobile sensors in monitoring wild animals. Aggregates of interest on such distributed environments must be resilient to duplicate observations. We study such duplicate-resilient aggregates that measure the extent of the duplication―how many unique observations are there, how many observations are unique―as well as standard holistic aggregates such as quantiles and heavy hitters over the unique items. We present accuracy guaranteed, highly communication-efficient algorithms for these aggregates that work within the time and space constraints of high speed streams. We also present results of a detailed experimental study on both real-life and synthetic data.", "title": "" }, { "docid": "73856ff677af144dc6cda69426646ce3", "text": "Humans demonstrate a remarkable ability to generate accurate and appropriate motor behavior under many different and often uncertain environmental conditions. We previously proposed a new modular architecture, the modular selection and identification for control (MOSAIC) model, for motor learning and control based on multiple pairs of forward (predictor) and inverse (controller) models. The architecture simultaneously learns the multiple inverse models necessary for control as well as how to select the set of inverse models appropriate for a given environment. It combines both feedforward and feedback sensorimotor information so that the controllers can be selected both prior to movement and subsequently during movement. This article extends and evaluates the MOSAIC architecture in the following respects. The learning in the architecture was implemented by both the original gradient-descent method and the expectation-maximization (EM) algorithm. Unlike gradient descent, the newly derived EM algorithm is robust to the initial starting conditions and learning parameters. Second, simulations of an object manipulation task prove that the architecture can learn to manipulate multiple objects and switch between them appropriately. Moreover, after learning, the model shows generalization to novel objects whose dynamics lie within the polyhedra of already learned dynamics. Finally, when each of the dynamics is associated with a particular object shape, the model is able to select the appropriate controller before movement execution. When presented with a novel shape-dynamic pairing, inappropriate activation of modules is observed followed by on-line correction.", "title": "" }, { "docid": "192f2c35f3aebd7ce744153ab0345ea4", "text": "Composition and distribution of planktonic protists were examined relative to microbial food web dynamics (growth, grazing, and nitrogen cycling rates) at the Old Woman Creek (OWC) National Estuarine Research Reserve during an episodic storm event in July 2003. More than 150 protistan taxa were identified based on morphology. Species richness and microbial biomass measured via microscopy and flow cytometry increased along a stream–lake (Lake Erie) transect and peaked at the confluence. 
Water column ammonium (NH₄⁺) uptake (0.06 to 1.82 μM N h⁻¹) and regeneration (0.04 to 0.55 μM N h⁻¹) rates, measured using ¹⁵NH₄⁺ isotope dilution, followed the same pattern. Large light/dark NH₄⁺ uptake differences were observed in the hypereutrophic OWC interior, but not at the phosphorus-limited Lake Erie site, reflecting the microbial community structural shift from net autotrophic to net heterotrophic. Despite this shift, microbial grazers (mostly choreotrich ciliates, taxon-specific growth rates up to 2.9 d⁻¹) controlled nanophytoplankton and bacteria at all sites by consuming 76 to 110% and 56 to 97% of their daily production, respectively, in dilution experiments. Overall, distribution patterns and dynamics of microbial communities in OWC resemble those in marine estuaries, where plankton productivity increases along the river–sea gradient and reaches its maximum at the confluence.", "title": "" }, { "docid": "9dd3157c4c94c62e2577ace7f6c41629", "text": "BACKGROUND\nThere is a growing concern over the addictiveness of Social Media use. Additional representative indicators of impaired control are needed in order to distinguish presumed social media addiction from normal use.\n\n\nAIMS\n(1) To examine the existence of time distortion during non-social media use tasks that involve social media cues among those who may be considered at-risk for social media addiction. (2) To examine the usefulness of this distortion for at-risk vs. low/no-risk classification.\n\n\nMETHOD\nWe used a task that prevented Facebook use and invoked Facebook reflections (survey on self-control strategies) and subsequently measured estimated vs. actual task completion time. We captured the level of addiction using the Bergen Facebook Addiction Scale in the survey, and we used a common cutoff criterion to classify people as at-risk vs. low/no-risk of Facebook addiction.\n\n\nRESULTS\nThe at-risk group presented significant upward time estimate bias and the low/no-risk group presented significant downward time estimate bias. The bias was positively correlated with Facebook addiction scores. It was efficacious, especially when combined with self-reported estimates of extent of Facebook use, in classifying people to the two categories.\n\n\nCONCLUSIONS\nOur study points to a novel, easy to obtain, and useful marker of at-risk for social media addiction, which may be considered for inclusion in diagnosis tools and procedures.", "title": "" }, { "docid": "4ac06b70fc02c83cb676f5c479a0fe93", "text": "We propose a framework that captures the denotational probabilities of words and phrases by embedding them in a vector space, and present a method to induce such an embedding from a dataset of denotational probabilities. 
We show that our model successfully predicts denotational probabilities for unseen phrases, and that its predictions are useful for textual entailment datasets such as SICK and SNLI.", "title": "" }, { "docid": "511c90eadbbd4129fdf3ee9e9b2187d3", "text": "BACKGROUND\nPressure ulcers are associated with substantial health burdens but may be preventable.\n\n\nPURPOSE\nTo review the clinical utility of pressure ulcer risk assessment instruments and the comparative effectiveness of preventive interventions in persons at higher risk.\n\n\nDATA SOURCES\nMEDLINE (1946 through November 2012), CINAHL, the Cochrane Library, grant databases, clinical trial registries, and reference lists.\n\n\nSTUDY SELECTION\nRandomized trials and observational studies on effects of using risk assessment on clinical outcomes and randomized trials of preventive interventions on clinical outcomes.\n\n\nDATA EXTRACTION\nMultiple investigators abstracted and checked study details and quality using predefined criteria.\n\n\nDATA SYNTHESIS\nOne good-quality trial found no evidence that use of a pressure ulcer risk assessment instrument, with or without a protocolized intervention strategy based on assessed risk, reduces risk for incident pressure ulcers compared with less standardized risk assessment based on nurses' clinical judgment. In higher-risk populations, 1 good-quality and 4 fair-quality randomized trials found that more advanced static support surfaces were associated with lower risk for pressure ulcers compared with standard mattresses (relative risk range, 0.20 to 0.60). Evidence on the effectiveness of low-air-loss and alternating-air mattresses was limited, with some trials showing no clear differences from advanced static support surfaces. Evidence on the effectiveness of nutritional supplementation, repositioning, and skin care interventions versus usual care was limited and had methodological shortcomings, precluding strong conclusions.\n\n\nLIMITATION\nOnly English-language articles were included, publication bias could not be formally assessed, and most studies had methodological shortcomings.\n\n\nCONCLUSION\nMore advanced static support surfaces are more effective than standard mattresses for preventing ulcers in higher-risk populations. The effectiveness of formal risk assessment instruments and associated intervention protocols compared with less standardized assessment methods and the effectiveness of other preventive interventions compared with usual care have not been clearly established.", "title": "" }, { "docid": "fcaeb514732aa0a56dd8cabf8f1f2fd4", "text": "Several different factors contribute to injury severity in traffic accidents, such as driver characteristics, highway characteristics, vehicle characteristics, accidents characteristics, and atmospheric factors. This paper shows the possibility of using Bayesian Networks (BNs) to classify traffic accidents according to their injury severity. BNs are capable of making predictions without the need for pre assumptions and are used to make graphic representations of complex systems with interrelated components. This paper presents an analysis of 1536 accidents on rural highways in Spain, where 18 variables representing the aforementioned contributing factors were used to build 3 different BNs that classified the severity of accidents into slightly injured and killed or severely injured. 
The variables that best identify the factors that are associated with a killed or seriously injured accident (accident type, driver age, lighting and number of injuries) were identified by inference.", "title": "" }, { "docid": "adc982af47186e4cb48c9b61f3a55a45", "text": "Single trial electroencephalogram classification is indispensable in online brain–computer interfaces (BCIs) A classification method called adaptive Kernel Fisher Support Vector Machine (KF-SVM) is designed and applied to single trial EEG classification in BCIs. The adaptive KF-SVM algorithm combines adaptive idea, SVM and within-class scatter inspired from kernel fisher. Firstly, the within-class scatter matrix of a feature vector is calculated. And to construct a new kernel, this scatter is incorporated into the kernel function of SVM. Ultimately, the recognition result is calculated by the SVM whose kernel has been changed. The proposed algorithm simultaneously maximizes the discrimination between classes and also considers the within-class dissimilarities, which avoids some disadvantages of traditional SVM. In addition, the within-class scatter matrix of adaptive KF-SVM is updated trial by trail, which enhances the online adaptation of BCIs. Based on the EEG data recorded from seven subjects, the new approach achieved higher classification accuracies than the standard SVM, KF-SVM and adaptive linear classifier. The proposed scheme achieves the average performance improvement of 5.8%,5.2% and 3.7% respectively compared to other three schemes.", "title": "" } ]
scidocsrr
1d2a6d4e10f7e8ada8db1fa5f5db1e2f
A Comparative Survey of ANN and Hybrid HMM/ANN Architectures for Robust Speech Recognition
[ { "docid": "9ff93724f532730a7507f8fc9639004e", "text": "It is well known that the \"musical noise\" encountered in most frequency domain speech enhancement algorithms is partially due to the large variance estimates of the spectra. To address this issue, we propose in this paper the use of low-variance spectral estimators based on wavelet thresholding the multitaper spectra for speech enhancement. A short-time spectral amplitude estimator is derived which incorporates the wavelet-thresholded multitaper spectra. Listening tests showed that the use of multitaper spectrum estimation combined with wavelet thresholding suppressed the musical noise and yielded better quality than the subspace and MMSE algorithms.", "title": "" } ]
[ { "docid": "f670b91f8874c2c2db442bc869889dbd", "text": "This paper summarizes lessons learned from the first Amazon Picking Challenge in which 26 international teams designed robotic systems that competed to retrieve items from warehouse shelves. This task is currently performed by human workers, and there is hope that robots can someday help increase efficiency and throughput while lowering cost. We report on a 28-question survey posed to the teams to learn about each team’s background, mechanism design, perception apparatus, planning and control approach. We identify trends in this data, correlate it with each team’s success in the competition, and discuss observations and lessons learned. Note to Practitioners: Abstract—Perception, motion planning, grasping, and robotic system engineering has reached a level of maturity that makes it possible to explore automating simple warehouse tasks in semi-structured environments that involve high-mix, low-volume picking applications. This survey summarizes lessons learned from the first Amazon Picking Challenge, highlighting mechanism design, perception, and motion planning algorithms, as well as software engineering practices that were most successful in solving a simplified order fulfillment task. While the choice of mechanism mostly affects execution speed, the competition demonstrated the systems challenges of robotics and illustrated the importance of combining reactive control with deliberative planning.", "title": "" }, { "docid": "522226d646559018812b7fec8eed26a1", "text": "Diabetes represents one of the most common and debilitating conditions seen among Kaiser Permanente (KP) members. Because care often involves multiple providers and because follow-up requires persistence by patients and clinicians alike, ideal outcomes are often difficult to achieve. Management of diabetes therefore offers an excellent opportunity to practice population management–-a systems approach designed to ensure excellent care. Accordingly, through a broad KP collaboration, the Care Management Institute (CMI) developed a comprehensive approach to adult diabetes care: the Integrated Diabetes Care (IDC) Program. The IDC Program has three elements: an internally published report, Clinical Practice Guidelines for Adult Diabetes Care; a set of tools for applying population management and patient empowerment concepts; and an outcomes measurement component, ie, instruments for evaluating IDC Program impact and gathering feedback. In this article, we describe the IDC Program and the process by which it was developed. Included are specific examples of the tools and how they can be used at the population level and by individual clinicians in caring for patients. (top right) NEIL A. SOLOMON, MD, is the Clinical Strategies Consultant in the Care Management Institute at the Program Offices of Kaiser Permanente. His work focuses on improving quality and efficiency through the development of disease management strategies and other population-based health care innovations. He also works with Permanente physician leaders to review information on practice variation and to disseminate successful practices for internal clinical management improvement.", "title": "" }, { "docid": "e62fd95ccd6c10960acc7358ad0a5071", "text": "The view information of a chest X-ray (CXR), such as frontal or lateral, is valuable in computer aided diagnosis (CAD) of CXRs. For example, it helps for the selection of atlas models for automatic lung segmentation. 
However, very often, the image header does not provide such information. In this paper, we present a new method for classifying a CXR into two categories: frontal view vs. lateral view. The method consists of three major components: image pre-processing, feature extraction, and classification. The features we selected are image profile, body size ratio, pyramid of histograms of orientation gradients, and our newly developed contour-based shape descriptor. The method was tested on a large (more than 8,200 images) CXR dataset hosted by the National Library of Medicine. The very high classification accuracy (over 99% for 10-fold cross validation) demonstrates the effectiveness of the proposed method.", "title": "" }, { "docid": "f9580093dcf61a9d6905265cfb3a0d32", "text": "The rapid adoption of electronic health records (EHR) provides a comprehensive source for exploratory and predictive analytic to support clinical decision-making. In this paper, we investigate how to utilize EHR to tailor treatments to individual patients based on their likelihood to respond to a therapy. We construct a heterogeneous graph which includes two domains (patients and drugs) and encodes three relationships (patient similarity, drug similarity, and patient-drug prior associations). We describe a novel approach for performing a label propagation procedure to spread the label information representing the effectiveness of different drugs for different patients over this heterogeneous graph. The proposed method has been applied on a real-world EHR dataset to help identify personalized treatments for hypercholesterolemia. The experimental results demonstrate the effectiveness of the approach and suggest that the combination of appropriate patient similarity and drug similarity analytics could lead to actionable insights for personalized medicine. Particularly, by leveraging drug similarity in combination with patient similarity, our method could perform well even on new or rarely used drugs for which there are few records of known past performance.", "title": "" }, { "docid": "c6e6099599be3cd2d1d87c05635f4248", "text": "PURPOSE\nThe Food Cravings Questionnaires are among the most often used measures for assessing the frequency and intensity of food craving experiences. However, there is a lack of studies that have examined specific cut-off scores that may indicate pathologically elevated levels of food cravings.\n\n\nMETHODS\nReceiver-Operating-Characteristic analysis was used to determine sensitivity and specificity of scores on the Food Cravings Questionnaire-Trait-reduced (FCQ-T-r) for discriminating between individuals with (n = 43) and without (n = 389) \"food addiction\" as assessed with the Yale Food Addiction Scale 2.0.\n\n\nRESULTS\nA cut-off score of 50 on the FCQ-T-r discriminated between individuals with and without \"food addiction\" with high sensitivity (85%) and specificity (93%).\n\n\nCONCLUSIONS\nFCQ-T-r scores of 50 and higher may indicate clinically relevant levels of trait food craving.\n\n\nLEVEL OF EVIDENCE\nLevel V, descriptive study.", "title": "" }, { "docid": "5fac7f2e0fed381eb713894d52722d22", "text": "Biodiesel is biodegradable, less CO2 and NOx emissions. Continuous use of petroleum sourced fuels is now widely recognized as unsustainable because of depleting supplies and the contribution of these fuels to the accumulation of carbon dioxide in the environment. Renewable, carbon neutral, transport fuels are necessary for environmental and economic sustainability. 
Algae have emerged as one of the most promising sources for biodiesel production. It can be inferred that algae grown in CO2-enriched air can be converted to oily substances. Such an approach can contribute to solve major problems of air pollution resulting from CO2 evolution and future crisis due to a shortage of energy sources. This study was undertaken to know the proper transesterification, amount of biodiesel production (ester) and physical properties of biodiesel. In this study we used common species Oedogonium and Spirogyra to compare the amount of biodiesel production. Algal oil and biodiesel (ester) production was higher in Oedogonium than Spirogyra sp. However, biomass (after oil extraction) was higher in Spirogyra than Oedogonium sp. Sediments (glycerine, water and pigments) was higher in Spirogyra than Oedogonium sp. There was no difference of pH between Spirogyra and Oedogonium sp. These results indicate that biodiesel can be produced from both species and Oedogonium is better source than Spirogyra sp.", "title": "" }, { "docid": "3435041805c5cb2629d70ff909c10637", "text": "Synchronized stochastic gradient descent (SGD) optimizers with data parallelism are widely used in training large-scale deep neural networks. Although using larger mini-batch sizes can improve the system scalability by reducing the communication-to-computation ratio, it may hurt the generalization ability of the models. To this end, we build a highly scalable deep learning training system for dense GPU clusters with three main contributions: (1) We propose a mixed-precision training method that significantly improves the training throughput of a single GPU without losing accuracy. (2) We propose an optimization approach for extremely large minibatch size (up to 64k) that can train CNN models on the ImageNet dataset without losing accuracy. (3) We propose highly optimized all-reduce algorithms that achieve up to 3x and 11x speedup on AlexNet and ResNet-50 respectively than NCCL-based training on a cluster with 1024 Tesla P40 GPUs. On training ResNet-50 with 90 epochs, the state-of-the-art GPU-based system with 1024 Tesla P100 GPUs spent 15 minutes and achieved 74.9% top-1 test accuracy, and another KNL-based system with 2048 Intel KNLs spent 20 minutes and achieved 75.4% accuracy. Our training system can achieve 75.8% top-1 test accuracy in only 6.6 minutes using 2048 Tesla P40 GPUs. When training AlexNet with 95 epochs, our system can achieve 58.7% top-1 test accuracy within 4 minutes, which also outperforms all other existing systems.", "title": "" }, { "docid": "3bad6f7bf3680d33eca19f924fa9084a", "text": "Deep Learning models are vulnerable to adversarial examples, i.e. images obtained via deliberate imperceptible perturbations, such that the model misclassifies them with high confidence. However, class confidence by itself is an incomplete picture of uncertainty. We therefore use principled Bayesian methods to capture model uncertainty in prediction for observing adversarial misclassification. We provide an extensive study with different Bayesian neural networks attacked in both white-box and black-box setups. The behaviour of the networks for noise, attacks and clean test data is compared. We observe that Bayesian neural networks are uncertain in their predictions for adversarial perturbations, a behaviour similar to the one observed for random Gaussian perturbations. 
Thus, we conclude that Bayesian neural networks can be considered for detecting adversarial examples.", "title": "" }, { "docid": "71260dbe5738afd4285f8e1e0e0571ad", "text": "Software tools have been used in software development for a long time now. They are used for, among other things, performance analysis, testing and verification, debugging and building applications. Software tools can be very simple and lightweight, e.g. linkers, or very large and complex, e.g. computer-assisted software engineering (CASE) tools and integrated development environments (IDEs). Some tools support particular phases of the project cycle while others can be used with a speicfic software development model or technology. Some aspects of software development, like risk management, are done throughout the whole project from inception to commissioning. The aim of this paper is to demonstrate the need for an intelligent risk assessment and management tool for both agile or traditional (or their combination) methods in software development. The authors propose a model, whose development is subject of further research, which can be investigated for use in developing intelligent risk management tools", "title": "" }, { "docid": "c8a16019564d99007efd88ca23d44d30", "text": "Cardiac masses are rare entities that can be broadly categorized as either neoplastic or non-neoplastic. Neoplastic masses include benign and malignant tumors. In the heart, metastatic tumors are more common than primary malignant tumors. Whether incidentally found or diagnosed as a result of patients' symptoms, cardiac masses can be identified and further characterized by a range of cardiovascular imaging options. While echocardiography remains the first-line imaging modality, cardiac computed tomography (cardiac CT) has become an increasingly utilized modality for the assessment of cardiac masses, especially when other imaging modalities are non-diagnostic or contraindicated. With high isotropic spatial and temporal resolution, fast acquisition times, and multiplanar image reconstruction capabilities, cardiac CT offers an alternative to cardiovascular magnetic resonance imaging in many patients. Additionally, cardiac masses may be incidentally discovered during cardiac CT for other reasons, requiring imagers to understand the unique features of a diverse range of cardiac masses. Herein, we define the characteristic imaging features of commonly encountered and selected cardiac masses and define the role of cardiac CT among noninvasive imaging options.", "title": "" }, { "docid": "723615bf8a056678c65e4d8adb831bb4", "text": "This paper presents a comparative analysis of different pedestrian dataset characteristics. The main goal of the research is to determine what characteristics are desirable for improved training and validation of pedestrian detectors and classifiers. The work focuses on those aspects of the dataset which affect classification success using the most common boosting methods. Dataset characteristics such as image size, aspect ratio, geometric variance and the relative scale of positive class instances (pedestrians) within the training window form an integral part of classification success. This paper will examine the effects of varying these dataset characteristics with a view to determining the recommended attributes of a high quality and challenging dataset. 
While the primary focus is on characteristics of the positive training dataset, some discussion of desirable attributes for the negative dataset is important and is therefore included. This paper also serves to publish our current pedestrian dataset in various forms for non-commercial use by the scientific community. We believe the published dataset to be one of the largest, most flexible, and representative datasets available for pedestrian/person detection tasks.", "title": "" }, { "docid": "d13145bc68472ed9a06bafd86357c5dd", "text": "Modeling cloth with fiber-level geometry can produce highly realistic details. However, rendering fiber-level cloth models not only has a high memory cost but it also has a high computation cost even for offline rendering applications. In this paper we present a real-time fiber-level cloth rendering method for current GPUs. Our method procedurally generates fiber-level geometric details on-the-fly using yarn-level control points for minimizing the data transfer to the GPU. We also reduce the rasterization operations by collectively representing the fibers near the center of each ply that form the yarn structure. Moreover, we employ a level-of-detail strategy to minimize or completely eliminate the generation of fiber-level geometry that would have little or no impact on the final rendered image. Furthermore, we introduce a simple yarn-level ambient occlusion approximation and self-shadow computation method that allows lighting with self-shadows using relatively low-resolution shadow maps. We demonstrate the effectiveness of our approach by comparing our simplified fiber geometry to procedurally generated references and display knitwear containing more than a hundred million individual fiber curves at real-time frame rates with shadows and ambient occlusion.", "title": "" }, { "docid": "08025e6ed1ee71596bdc087bfd646eac", "text": "A method is presented for computing an orthonormal set of eigenvectors for the discrete Fourier transform (DFT). The technique is based on a detailed analysis of the eigenstructure of a special matrix which commutes with the DFT. It is also shown how fractional powers of the DFT can be efficiently computed, and possible applications to multiplexing and transform coding are suggested. T", "title": "" }, { "docid": "a7b5bfb508b577fe98ececadf0820e3f", "text": "Endoscopic polypectomy is currently one of the most effective interventions for the prevention of colorectal cancer (CRC). Although the most common carcinoma precursor is the tubular adenoma, the detection, diagnosis, and follow-up of serrated precursors are also of clinical importance since the serrated pathway is implicated in about 30% of CRC [1, 2]. Serrated polyps can be categorized into three groups: the frequently encountered hyperplastic polyps (HPs) which are flat and distal; sessile serrated adenomas/polyps (SSA/Ps) which are flat, proximal, and account for about 10% of all serrated polyps; and traditional serrated adenomas (TSAs) which are distal, protruding, and account for a small percentage of serrated polyps [3]. HPs are further divided into microvesicular (MVHP) and goblet cell-rich (GCHP) types and are believed to be the precursors of SSA/Ps and TSAs, respectively. Although by definition SSA/Ps are non-dysplastic, they acquire cytologic dysplasia as they progress to CRC. Conversely, all TSAs harbor cytologic dysplasia. Thus, both SSA/P and TSA are established precursors of CRC. 
Detection and diagnosis of TSAs are straightforward given their unique endoscopic and histologic features. In contrast, HPs and SSA/Ps have similar endoscopic appearances and overlapping histologic features, complicating endoscopic and pathologic differentiation between the two lesions [4, 5]. Furthermore, based on recent data, distinguishing SSA/Ps from HPs may also have some significance for metachronous risk assessment [6], and therefore, differentiating HPs from SSA/Ps is a common concern for endoscopists. A simple, accurate, and reproducible way to endoscopically distinguish HPs from SSA/Ps would aid endoscopists in their efforts to identify and remove all serrated lesions with malignant potential. Furthermore, these methods would be helpful in implementation of new paradigms in which diminutive polyps are optically diagnosed and either not removed or resected and not recovered for subsequent pathologic examination [7]. The American Society for Gastrointestinal Endoscopy (ASGE) Technology Committee’s “Preservation and Incorporation of Valuable endoscopic Innovations” (PIVI) paper provides recommendations for adoption of new technologies or strategies into clinical practice, which can be used to optically diagnose diminutive (≤ 5 mm) polyps [8]. Although developed for adenomas, strategies for optical diagnosis can also be applied by endoscopists for serrated polyps. For example, use of the “diagnose and leave” strategy for serrated polyps could decrease the risk and cost of colonoscopy by obviating the need for polypectomy in polyps that were endoscopically diagnosed as HPs [9]. Alternatively, the “resect and discard” could decrease cost by eliminating the need for pathologic interpretation of serrated polyps that endoscopists were confident were HPs and not SSA/Ps. In this month’s issue of Digestive Diseases and Sciences [10], Aoki et al. in Sapporo, Japan, characterized serrated polyps, conventional adenomas, and CRC using endoscopic, pathologic, and molecular features. In addition to size and location, trained endoscopists used the Paris classification [11] to characterize the shape of the lesions. Magnification chromoendoscopy that has gained popularity in the Far East enables visualization of the “pit pattern” which reflects the colonic pit structure and enables the differentiation of the many types of polyps. The pit patterns are currently categorized by the Kudo classification [12, 13] where Type I indicates normal mucosa, Type II is consistent with HP, and Types III, IV, and V are consistent with dysplastic changes. Disclaimer The contents of this work do not represent the views of the Department of Veterans Affairs or the United States Government.", "title": "" }, { "docid": "e17c5945d67c504725e9027c6aa6d4e7", "text": "A vast amount of digital document material is continuously being produced as part of major digitization efforts around the world. In this context, generic and efficient automatic solutions for document image understanding represent a stringent necessity. We propose a generic framework for document image understanding systems, usable for practically any document types available in digital form. Following the introduced workflow, we shift our attention to each of the following processing stages in turn: quality assurance, image enhancement, color reduction and binarization, skew and orientation detection, page segmentation and logical layout analysis. 
We review the state of the art in each area, identify current deficiencies, point out promising directions and give specific guidelines for future investigation. We address some of the identified issues by means of novel algorithmic solutions putting special focus on generality, computational efficiency and the exploitation of all available sources of information. More specifically, we introduce the following original methods: a fully automatic detection of color reference targets in digitized material, accurate foreground extraction from color historical documents, font enhancement for hot metal typesetted prints, a theoretically optimal solution for the document binarization problem from both computational complexity and threshold selection points of view, a layout-independent skew and orientation detection, a robust and versatile page segmentation method, a semi-automatic front page detection algorithm and a complete framework for article segmentation in periodical publications. The proposed methods are experimentally evaluated on large datasets consisting of real-life heterogeneous document scans. The obtained results show that a document understanding system combining these modules is able to robustly process a wide variety of documents with good overall accuracy.", "title": "" }, { "docid": "37a833b09bdf74b1fed7f20dd4ff699f", "text": "The blood and lymphatic systems are the two major circulatory systems in our body. Although the blood system has been studied extensively, the lymphatic system has received much less scientific and medical attention because of its elusive morphology and mysterious pathophysiology. However, a series of landmark discoveries made in the past decade has begun to change the previous misconception of the lymphatic system to be secondary to the more essential blood vascular system. In this article, we review the current understanding of the development and pathology of the lymphatic system. We hope to convince readers that the lymphatic system is no less essential than the blood circulatory system for human health and well-being.", "title": "" }, { "docid": "6f0ffda347abfd11dc78c0b76ceb11f8", "text": "A previous study of 22 medical patients with DSM-III-R-defined anxiety disorders showed clinically and statistically significant improvements in subjective and objective symptoms of anxiety and panic following an 8-week outpatient physician-referred group stress reduction intervention based on mindfulness meditation. Twenty subjects demonstrated significant reductions in Hamilton and Beck Anxiety and Depression scores postintervention and at 3-month follow-up. In this study, 3-year follow-up data were obtained and analyzed on 18 of the original 22 subjects to probe long-term effects. Repeated measures analysis showed maintenance of the gains obtained in the original study on the Hamilton [F(2,32) = 13.22; p < 0.001] and Beck [F(2,32) = 9.83; p < 0.001] anxiety scales as well as on their respective depression scales, on the Hamilton panic score, the number and severity of panic attacks, and on the Mobility Index-Accompanied and the Fear Survey. A 3-year follow-up comparison of this cohort with a larger group of subjects from the intervention who had met criteria for screening for the original study suggests generalizability of the results obtained with the smaller, more intensively studied cohort. Ongoing compliance with the meditation practice was also demonstrated in the majority of subjects at 3 years.
We conclude that an intensive but time-limited group stress reduction intervention based on mindfulness meditation can have long-term beneficial effects in the treatment of people diagnosed with anxiety disorders.", "title": "" }, { "docid": "97ac42e91c4f9fa9e20e6b6e8d3f8421", "text": "Wrist worn wearable computing devices are ideally suited for presenting notifications through haptic stimuli as they are always in direct contact with the user's skin. While prior work has explored the feasibility of haptic notifications, we highlight a lack of empirical studies on thermal and pressure feedback in the context of wearable devices. This paper introduces prototypes for thermal and pressure (squeeze) feedback on the wrist. It then presents a study characterizing recognition performance with thermal and pressure cues against baseline performance with vibrations.", "title": "" }, { "docid": "88a10ea3bae30f371c3f6276beff9e58", "text": "This research is part of a smart farm system in the framework of precision agriculture. The system was installed and tested over a year. The tractor tracking system employs the Global Positioning System (GPS) and a ZigBee wireless network based on a mesh topology so that the system can communicate over a large area. Router nodes are used for re-transmission of data in the network. Software was developed for acquiring data from the tractor, storing the data and displaying it in real time on a web site.", "title": "" }, { "docid": "14baf30e1bdf7e31082fc2f1be8ea01c", "text": "Different concentrations (3, 30, 300, and 3000 mg/L of culture fluid) of garlic oil (GAR), diallyl sulfide (DAS), diallyl disulfide (DAD), allicin (ALL), and allyl mercaptan (ALM) were incubated for 24 h in diluted ruminal fluid with a 50:50 forage:concentrate diet (17.7% crude protein; 30.7% neutral detergent fiber) to evaluate their effects on rumen microbial fermentation. Garlic oil (30 and 300 mg/L), DAD (30 and 300 mg/L), and ALM (300 mg/L) resulted in lower molar proportion of acetate and higher proportions of propionate and butyrate. In contrast, at 300 mg/L, DAS only increased the proportion of butyrate, and ALL had no effects on volatile fatty acid proportions. In a dual-flow continuous culture of rumen fluid fed the same 50:50 forage:concentrate diet, addition of GAR (312 mg/L), DAD (31.2 and 312 mg/L), and ALM (31.2 and 312 mg/L) resulted in similar changes to those observed in batch culture, with the exception of the lack of effect of DAD on the proportion of propionate. In a third in vitro study, the potential of GAR (300 mg/L), DAD (300 mg/L), and ALM (300 mg/L) to decrease methane production was evaluated. Treatments GAR, DAD, and ALM resulted in a decrease in methane production of 73.6, 68.5, and 19.5%, respectively, compared with the control. These results confirm the ability of GAR, DAD, and ALM to decrease methane production, which may help to improve the efficiency of energy use in the rumen.", "title": "" } ]
scidocsrr
359eb65bdd0ebf6d9cc212b42f53cbba
Virtual Network Function placement for resilient Service Chain provisioning
[ { "docid": "182bb07fb7dbbaf17b6c7a084f1c4fb2", "text": "Network Functions Virtualization (NFV) is an upcoming paradigm where network functionality is virtualized and split up into multiple building blocks that can be chained together to provide the required functionality. This approach increases network flexibility and scalability as these building blocks can be allocated and reallocated at runtime depending on demand. The success of this approach depends on the existence and performance of algorithms that determine where, and how these building blocks are instantiated. In this paper, we present and evaluate a formal model for resource allocation of virtualized network functions within NFV environments, a problem we refer to as Virtual Network Function Placement (VNF-P). We focus on a hybrid scenario where part of the services may be provided by dedicated physical hardware, and where part of the services are provided using virtualized service instances. We evaluate the VNF-P model using a small service provider scenario and two types of service chains, and evaluate its execution speed. We find that the algorithms finish in 16 seconds or less for a small service provider scenario, making it feasible to react quickly to changing demand.", "title": "" }, { "docid": "cbe9729b403a07386a76447c4339c5f3", "text": "Network appliances perform different functions on network flows and constitute an important part of an operator's network. Normally, a set of chained network functions process network flows. Following the trend of virtualization of networks, virtualization of the network functions has also become a topic of interest. We define a model for formalizing the chaining of network functions using a context-free language. We process deployment requests and construct virtual network function graphs that can be mapped to the network. We describe the mapping as a Mixed Integer Quadratically Constrained Program (MIQCP) for finding the placement of the network functions and chaining them together considering the limited network resources and requirements of the functions. We have performed a Pareto set analysis to investigate the possible trade-offs between different optimization objectives.", "title": "" } ]
[ { "docid": "2a09d97b350fa249fc6d4bbf641697e2", "text": "The goal of this study was to investigate the effect of lead and the influence of chelating agents,meso 2, 3-dimercaptosuccinic acid (DMSA) and D-Penicillamine, on the biochemical contents of the brain tissues of Catla catla fingerlings by Fourier Transform Infrared Spectroscopy. FT-IR spectra revealed significant differences in absorbance intensities between control and lead-intoxicated brain tissues, reflecting a change in protein and lipid contents in the brain tissues due to lead toxicity. In addition, the administration of chelating agents, DMSA and D-Penicillamine, improved the protein and lipid contents in the brain tissues compared to lead-intoxicated tissues. Further, DMSA was more effective in reducing the body burden of lead. The protein secondary structure analysis revealed that lead intoxication causes an alteration in protein profile with a decrease in α-helix and an increase in β-sheet structure of Catla catla brain. In conclusion, the study demonstrated that FT-IR spectroscopy could differentiate the normal and lead-intoxicated brain tissues due to intrinsic differences in intensity.", "title": "" }, { "docid": "0612db6f5e30d37122d37b26e2a2bb0a", "text": "This paper presents a novel approach to procedural generation of urban maps for First Person Shooter (FPS) games. A multi-agent evolutionary system is employed to place streets, buildings and other items inside the Unity3D game engine, resulting in playable video game levels. A computational agent is trained using machine learning techniques to capture the intent of the game designer as part of the multi-agent system, and to enable a semi-automated aesthetic selection for the underlying genetic algorithm.", "title": "" }, { "docid": "7844d2e53deba7bcfef03f06a6bced59", "text": "In power line communications (PLCs), the multipath-induced dispersion and the impulsive noise are the two fundamental impediments in the way of high-integrity communications. The conventional orthogonal frequency-division multiplexing (OFDM) system is capable of mitigating the multipath effects in PLCs, but it fails to suppress the impulsive noise effects. Therefore, in order to mitigate both the multipath effects and the impulsive effects in PLCs, in this paper, a compressed impairment sensing (CIS)-assisted and interleaved-double-FFT (IDFFT)-aided system is proposed for indoor broadband PLC. Similar to classic OFDM, data symbols are transmitted in the time-domain, while the equalization process is employed in the frequency domain in order to achieve the maximum attainable multipath diversity gain. In addition, a specifically designed interleaver is employed in the frequency domain in order to mitigate the impulsive noise effects, which relies on the principles of compressed sensing (CS). Specifically, by taking advantage of the interleaving process, the impairment impulsive samples can be estimated by exploiting the principle of CS and then cancelled. In order to improve the estimation performance of CS, we propose a beneficial pilot design complemented by a pilot insertion scheme. Finally, a CIS-assisted detector is proposed for the IDFFT system advocated. Our simulation results show that the proposed CIS-assisted IDFFT system is capable of achieving a significantly improved performance compared with the conventional OFDM. 
Furthermore, the tradeoffs to be struck in the design of the CIS-assisted IDFFT system are also studied.", "title": "" }, { "docid": "3f7c6490ccb6d95bd22644faef7f452f", "text": "A blockchain is a distributed, decentralised database of records of digital events (transactions) that took place and were shared among the participating parties. Each transaction in the public ledger is verified by consensus of a majority of the participants in the system. Bitcoin may not be that important in the future, but blockchain technology's role in Financial and Non-financial world can't be undermined. In this paper, we provide a holistic view of how Blockchain technology works, its strength and weaknesses, and its role to change the way the business happens today and tomorrow.", "title": "" }, { "docid": "5ebdda11fbba5d0633a86f2f52c7a242", "text": "What is index modulation (IM)? This is an interesting question that we have started to hear more and more frequently over the past few years. The aim of this paper is to answer this question in a comprehensive manner by covering not only the basic principles and emerging variants of IM, but also reviewing the most recent as well as promising advances in this field toward the application scenarios foreseen in next-generation wireless networks. More specifically, we investigate three forms of IM: spatial modulation, channel modulation and orthogonal frequency division multiplexing (OFDM) with IM, which consider the transmit antennas of a multiple-input multiple-output system, the radio frequency mirrors (parasitic elements) mounted at a transmit antenna and the subcarriers of an OFDM system for IM techniques, respectively. We present the up-to-date advances in these three promising frontiers and discuss possible future research directions for IM-based schemes toward low-complexity, spectrum- and energy-efficient next-generation wireless networks.", "title": "" }, { "docid": "76a9799863bd944fb969539e8817cccd", "text": "This paper investigates the application of non-orthogonal multiple access (NOMA) in millimeter wave (mm-Wave) communications by exploiting beamforming, user scheduling, and power allocation. Random beamforming is invoked for reducing the feedback overhead of the considered system. A non-convex optimization problem for maximizing the sum rate is formulated, which is proved to be NP-hard. The branch and bound approach is invoked to obtain the $\\epsilon$ -optimal power allocation policy, which is proved to converge to a global optimal solution. To elaborate further, a low-complexity suboptimal approach is developed for striking a good computational complexity-optimality tradeoff, where the matching theory and successive convex approximation techniques are invoked for tackling the user scheduling and power allocation problems, respectively. Simulation results reveal that: 1) the proposed low complexity solution achieves a near-optimal performance and 2) the proposed mm-Wave NOMA system is capable of outperforming conventional mm-Wave orthogonal multiple access systems in terms of sum rate and the number of served users.", "title": "" }, { "docid": "8b12c633e6c9fb177459bb9609afeb1a", "text": "Chronic osteomyelitis of the jaw is a rare entity in the healthy population of the developed world. It is normally associated with radiation and bisphosphonates ingestion and occurs in immunosuppressed individuals such as alcoholics or diabetics. Two cases are reported of chronic osteomyelitis in healthy individuals with no adverse medical conditions. 
The management of these cases is described.", "title": "" }, { "docid": "4dbbcaf264cc9beda8644fa926932d2e", "text": "It is relatively stress-free to write about computer games as nothing too much has been said yet, and almost anything goes. The situation is pretty much the same when it comes to writing about games and gaming in general. The sad fact with alarming cumulative consequences is that they are undertheorized; there are Huizinga, Caillois and Ehrmann of course, and libraries full of board game studies, in addition to game theory and bits and pieces of philosophy—most notably those of Wittgenstein—but they won't get us very far with computer games. So if there already is or soon will be a legitimate field for computer game studies, this field is also very open to intrusions and colonisations from the already organized scholarly tribes. Resisting and beating them is the goal of our first survival game in this paper, as what these emerging studies need is independence, or at least relative independence.", "title": "" }, { "docid": "385922d94a35c37776ba816645e964c7", "text": "In this paper, we develop a unified vision system for small-scale aircraft, known broadly as Micro Air Vehicles (MAVs), that not only addresses basic flight stability and control, but also enables more intelligent missions, such as ground object recognition and moving-object tracking. The proposed system defines a framework for real-time image feature extraction, horizon detection and sky/ground segmentation, and contextual ground object detection. Multiscale Linear Discriminant Analysis (MLDA) defines the first stage of the vision system, and generates a multiscale description of images, incorporating both color and texture through a dynamic representation of image details. This representation is ideally suited for horizon detection and sky/ground segmentation of images, which we accomplish through the probabilistic representation of tree-structured belief networks (TSBN). Specifically, we propose incomplete meta TSBNs (IMTSBN) to accommodate the properties of our MLDA representation and to enhance the descriptive component of these statistical models. In the last stage of the vision processing, we seamlessly extend this probabilistic framework to perform computationally efficient detection and recognition of objects in the segmented ground region, through the idea of visual contexts. By exploiting the concept of visual contexts, we can quickly focus on candidate regions, where objects of interest may be found, and then compute additional features through the Complex Wavelet Transform (CWT) and HSI color space for those regions only. These additional features, while not necessary for global regions, are useful in accurate detection and recognition of smaller objects. Throughout, our approach is heavily influenced by real-time constraints and robustness to transient video noise.", "title": "" }, { "docid": "4520316ecef3051305e547d50fadbb7a", "text": "The increasing complexity and size of digital designs, in conjunction with the lack of a potent verification methodology that can effectively cope with this trend, continue to inspire engineers and academics in seeking ways to further automate design verification. In an effort to increase performance and to decrease engineering effort, research has turned to artificial intelligence (AI) techniques for effective solutions. The generation of tests for simulation-based verification can be guided by machine-learning techniques.
In fact, recent advances demonstrate that embedding machine-learning (ML) techniques into a coverage-directed test generation (CDG) framework can effectively automate the test generation process, making it more effective and less error-prone. This article reviews some of the most promising approaches in this field, aiming to evaluate the approaches and to further stimulate more directed research in this area.", "title": "" }, { "docid": "9afc8df23892162a220b1804fe415a36", "text": "Social entrepreneurship is gradually becoming a crucial element in the worldwide discussion on volunteerism and civic commitment. It interleaves the passion of a common cause with industrial ethics and is notable and different from the present other types of entrepreneurship models due to its quest for mission associated influence. The previous few years have noticed a striking and surprising progress in the field of social entrepreneurship and has amplified attention ranging throughout all the diverse sectors. The critical difference between social and traditional entrepreneurship can be seen in the founding mission of the venture and the market impressions. Social entrepreneurs emphasize on ways to relieve or eradicate societal pressures and produce progressive externalities or public properties. This study focuses mainly on the meaning of social entrepreneurship to different genres and where does it stand in respect to other forms of entrepreneurship in today’s times.", "title": "" }, { "docid": "b51a1df32ce34ae3f1109a9053b4bc1f", "text": "Nowadays many automobile manufacturers are switching to Electric Power Steering (EPS) for its advantages on performance and cost. In this paper, a mathematical model of a column type EPS system is established, and its state-space expression is constructed. Then three different control methods are implemented and performance, robustness and disturbance rejection properties of the EPS control systems are investigated. The controllers are tested via simulation and results show a modified Linear Quadratic Gaussian (LQG) controller can track the characteristic curve well and effectively attenuate external disturbances.", "title": "" }, { "docid": "f513a112b7fe4ffa2599a0f144b2e112", "text": "A defined software process is needed to provide organizations with a consistent framework for performing their work and improving the way they do it. An overall framework for modeling simplifies the task of producing process models, permits them to be tailored to individual needs, and facilitates process evolution. This paper outlines the principles of entity process models and suggests ways in which they can help to address some of the problems with more conventional approaches to modeling software processes.", "title": "" }, { "docid": "fce6ac500501d0096aac3513639c2627", "text": "Recent technological advances made necessary the use of the robots in various types of applications. Currently, the traditional robot-like scenarios dedicated to industrial applications with repetitive tasks, were replaced by applications which require human interaction. The main field of such applications concerns the rehabilitation and aid of elderly persons. In this study, we present a state-of-the-art of the main research advances in lower limbs actuated orthosis/wearable robots in the literature. This will include a review on researches covering full limb exoskeletons, lower limb exoskeletons and particularly the knee joint orthosis. 
Rehabilitation using treadmill-based devices and the use of Functional Electrical Stimulation (FES) are also investigated. Finally, we discuss the challenges not yet solved, such as issues related to portability, energy consumption, social constraints and the high cost of these devices.", "title": "" }, { "docid": "e79e94549bca30e3a4483f7fb9992932", "text": "The use of semantic technologies and Semantic Web ontologies in particular has enabled many recent developments in information integration, search engines, and reasoning over formalised knowledge. Ontology Design Patterns have been proposed to be useful in simplifying the development of Semantic Web ontologies by codifying and reusing modelling best practices. This thesis investigates the quality of Ontology Design Patterns. The main contribution of the thesis is a theoretically grounded and partially empirically evaluated quality model for such patterns including a set of quality characteristics, indicators, measurement methods and recommendations. The quality model is based on established theory on information system quality, conceptual model quality, and ontology evaluation. It has been tested in a case study setting and in two experiments. The main findings of this thesis are that the quality of Ontology Design Patterns can be identified, formalised and measured, and furthermore, that these qualities interact in such a way that ontology engineers using patterns need to make tradeoffs regarding which qualities they wish to prioritise. The developed model may aid them in making these choices. This work has been supported by Jönköping University.", "title": "" }, { "docid": "bd882f762be5a9cb67191a7092fc88e3", "text": "This study tested the criterion validity of the inventory, Mental Toughness 48, by assessing the correlation between mental toughness and physical endurance for 41 male undergraduate sports students. A significant correlation of .34 was found between scores for overall mental toughness and the time a relative weight could be held suspended. Results support the criterion-related validity of the Mental Toughness 48.", "title": "" }, { "docid": "fa604c528539ac5cccdbd341a9aebbf7", "text": "BACKGROUND\nAn understanding of p-values and confidence intervals is necessary for the evaluation of scientific articles. This article will inform the reader of the meaning and interpretation of these two statistical concepts.\n\n\nMETHODS\nThe uses of these two statistical concepts and the differences between them are discussed on the basis of a selective literature search concerning the methods employed in scientific articles.\n\n\nRESULTS/CONCLUSIONS\nP-values in scientific studies are used to determine whether a null hypothesis formulated before the performance of the study is to be accepted or rejected. In exploratory studies, p-values enable the recognition of any statistically noteworthy findings. Confidence intervals provide information about a range in which the true value lies with a certain degree of probability, as well as about the direction and strength of the demonstrated effect. This enables conclusions to be drawn about the statistical plausibility and clinical relevance of the study findings.
It is often useful for both statistical measures to be reported in scientific articles, because they provide complementary types of information.", "title": "" }, { "docid": "0d6165524d748494a5c4d0d2f0675c42", "text": "In Saudi Arabia, breast cancer is diagnosed at advanced stage compared to Western countries. Nevertheless, the perceived barriers to delayed presentation have been poorly examined. Additionally, available breast cancer awareness data are lacking validated measurement tool. The aim of this study is to evaluate the level of breast cancer awareness and perceived barriers to seeking medical care among Saudi women, using internationally validated tool. A cross-sectional study was conducted among adult Saudi women attending a primary care center in Riyadh during February 2014. Data were collected using self-administered questionnaire based on the Breast Cancer Awareness Measure (CAM-breast). Out of 290 women included, 30 % recognized five or more (out of nine) non-lump symptoms of breast cancer, 31 % correctly identified the risky age of breast cancer (set as 50 or 70 years), 28 % reported frequent (at least once a month) breast checking. Considering the three items of the CAM-breast, only 5 % were completely aware while 41 % were completely unaware of breast cancer. The majority (94 %) reported one or more barriers. The most frequently reported barrier was the difficulty of getting a doctor appointment (39 %) followed by worries about the possibility of being diagnosed with breast cancer (31 %) and being too busy to seek medical help (26 %). We are reporting a major gap in breast cancer awareness and several logistic and emotional barriers to seeking medical care among adult Saudi women. The current findings emphasized the critical need for an effective national breast cancer education program to increase public awareness and early diagnosis.", "title": "" }, { "docid": "660f957b70e53819724e504ed3de0776", "text": "We propose several econometric measures of connectedness based on principalcomponents analysis and Granger-causality networks, and apply them to the monthly returns of hedge funds, banks, broker/dealers, and insurance companies. We find that all four sectors have become highly interrelated over the past decade, likely increasing the level of systemic risk in the finance and insurance industries through a complex and time-varying network of relationships. These measures can also identify and quantify financial crisis periods, and seem to contain predictive power in out-of-sample tests. Our results show an asymmetry in the degree of connectedness among the four sectors, with banks playing a much more important role in transmitting shocks than other financial institutions. & 2011 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "dd9f40db5e52817b25849282ffdafe26", "text": "Pattern classification methods based on learning-from-examples have been widely applied to character recognition from the 1990s and have brought forth significant improvements of recognition accuracies. This kind of methods include statistical methods, artificial neural networks, support vector machines, multiple classifier combination, etc. In this chapter, we briefly review the learning-based classification methods that have been successfully applied to character recognition, with a special section devoted to the classification of large category set. 
We then discuss the characteristics of these methods, and discuss the remaining problems in character recognition that can be potentially solved by machine learning methods.", "title": "" } ]
scidocsrr
6db4838b4e56194f920602d7790613af
Learning Text Representations for 500K Classification Tasks on Named Entity Disambiguation
[ { "docid": "115e5489516c76a75469732cfab3c0bb", "text": "The task of Named Entity Disambiguation is to map entity mentions in the document to their correct entries in some knowledge base. We present a novel graph-based disambiguation approach based on Personalized PageRank (PPR) that combines local and global evidence for disambiguation and effectively filters out noise introduced by incorrect candidates. Experiments show that our method outperforms state-of-the-art approaches by achieving 91.7% in microand 89.9% in macroaccuracy on a dataset of 27.8K named entity mentions.", "title": "" }, { "docid": "5b9d8b0786691f68659bcce2e6803cdb", "text": "We introduce SentEval, a toolkit for evaluating the quality of universal sentence representations. SentEval encompasses a variety of tasks, including binary and multi-class classification, natural language inference and sentence similarity. The set of tasks was selected based on what appears to be the community consensus regarding the appropriate evaluations for universal sentence representations. The toolkit comes with scripts to download and preprocess datasets, and an easy interface to evaluate sentence encoders. The aim is to provide a fairer, less cumbersome and more centralized way for evaluating sentence representations.", "title": "" } ]
[ { "docid": "555b07171f5305f7ae968d9a76d74ec3", "text": "The production of lithium-ion (Li-ion) batteries has been continually increasing since their first introduction into the market in 1991 because of their excellent performance, which is related to their high specific energy, energy density, specific power, efficiency, and long life. Li-ion batteries were first used for consumer electronics products such as mobile phones, camcorders, and laptop computers, followed by automotive applications that emerged during the last decade and are still expanding, and finally industrial applications including energy storage. There are four promising cell chemistries considered for energy storage applications: 1) LiMn2O4/graphite cell chemistry uses low-cost materials that are naturally abundant; 2) LiNi1-X-Y2CoXAlYO2/graphite cell chemistry has high specific energy and long life; 3) LiFePO4/graphite (or carbon) cell chemistry has good safety characteristics; and 4) Li4Ti5O12 is used as the negative electrode material in Li-ion batteries with long life and good safety features. However, each of the cell chemistries has some disadvantages, and the development of these technologies is still in progress. Therefore, it is too early to predict which cell chemistry will be the main candidate for energy storage applications, and we have to remain vigilant with respect to trends in technological progress and also consider changes in economic and social conditions before this can be determined.", "title": "" }, { "docid": "e5a936bbd9e6dc0189b7cc18268f0f87", "text": "A new method of obtaining amplitude modulation (AM) for determining target location with spinning reticles is presented. The method is based on the use of graded transmission capabilities. The AM spinning reticles previously presented were functions of three parameters: amplitude vs angle, amplitude vs radius, and phase. This paper presents these parameters along with their capabilities and limitations and shows that multiple parameters can be integrated into a single reticle. It is also shown that AM parameters can be combined with FM parameters in a single reticle. Also, a general equation is developed that relates the AM parameters to a reticle transmission equation.", "title": "" }, { "docid": "d6e76bfeeb127addcbe2eb77b1b0ad7e", "text": "The choice of modeling units is critical to automatic speech recognition (ASR) tasks. Conventional ASR systems typically choose context-dependent states (CD-states) or contextdependent phonemes (CD-phonemes) as their modeling units. However, it has been challenged by sequence-to-sequence attention-based models, which integrate an acoustic, pronunciation and language model into a single neural network. On English ASR tasks, previous attempts have already shown that the modeling unit of graphemes can outperform that of phonemes by sequence-to-sequence attention-based model. In this paper, we are concerned with modeling units on Mandarin Chinese ASR tasks using sequence-to-sequence attention-based models with the Transformer. Five modeling units are explored including context-independent phonemes (CI-phonemes), syllables, words, sub-words and characters. Experiments on HKUST datasets demonstrate that the lexicon free modeling units can outperform lexicon related modeling units in terms of character error rate (CER). 
Among five modeling units, the character-based model performs best and establishes a new state-of-the-art CER of 26.64% on HKUST datasets without a hand-designed lexicon and an extra language model integration, which corresponds to a 4.8% relative improvement over the existing best CER of 28.0% by the joint CTC-attention based encoder-decoder network.", "title": "" }, { "docid": "a23fd89da025d456f9fe3e8a47595c6a", "text": "Mobile devices are especially vulnerable nowadays to malware attacks, thanks to the current trend of increased app downloads. Despite the significant security and privacy concerns it received, effective malware detection (MD) remains a significant challenge. This paper tackles this challenge by introducing a streaminglized machine learning-based MD framework, StormDroid: (i) The core of StormDroid is based on machine learning, enhanced with a novel combination of contributed features that we observed over a fairly large collection of data set; and (ii) we streaminglize the whole MD process to support large-scale analysis, yielding an efficient and scalable MD technique that observes app behaviors statically and dynamically. Evaluated on roughly 8,000 applications, our combination of contributed features improves MD accuracy by almost 10% compared with state-of-the-art antivirus systems; in parallel our streaminglized process, StormDroid, further improves the efficiency rate by approximately three times compared with a single thread.", "title": "" }, { "docid": "60a3ba5263067030434db976e6e121db", "text": "Background and Objective: Physical inactivity is the fourth leading risk factor for global mortality. Physical inactivity levels are rising in developing countries and Malaysia is no exception. The Malaysian Adult Nutrition Survey 2003 reported that the prevalence of physical inactivity was 39.7% and the prevalence was higher for women (42.6%) than men (36.7%). In Malaysia, the National Health and Morbidity Survey 2006 reported that 43.7% (5.5 million) of Malaysian adults were physically inactive. These statistics show that physical inactivity is an important public health concern in Malaysia. College students have been found to have poor physical activity habits. The objective of this study was to identify the physical activity level among students of Asia Metropolitan University (AMU) in Malaysia.", "title": "" }, { "docid": "3f268b6048d534720cac533f04c2aa7e", "text": "This paper seeks a simple, cost-effective and compact gate drive circuit for the bi-directional switch of a matrix converter. Principles of IGBT commutation and bi-directional switch commutation in matrix converters are reviewed. Three simple IGBT gate drive circuits are presented and simulated in PSpice, and the simulation results are validated by experiments at the end of this paper. The paper concludes with comparative figures for gate drive costs.", "title": "" }, { "docid": "f2570d998f64f0362103f714da17c8da", "text": "software fault-tolerance, process replication failure masking, continuous availability, topology. The ambition of fault-tolerant systems is to provide application transparent fault-tolerance at the same performance as a non-fault-tolerant system. Somersault is a library for developing distributed fault-tolerant software systems that comes close to achieving both goals. We describe Somersault and its properties, including: 1. Fault-tolerance — Somersault implements \"process mirroring\" within a group of processes called a recovery unit. Failure of individual group members is completely masked. 2.
Abstraction — Somersault provides loss-less messaging between units. Recovery units and single processes are addressed uniformly as single entities. Recovery unit application code is unaware of replication. 3. High performance — The simple protocol provides throughput comparable to non-fault-tolerant processes at a low latency overhead. There is also sub-second failover time. 4. Compositionality — The same protocol is used to communicate between recovery units as between single processes, so any topology can be formed. 5. Scalability — Failure detection, failure recovery and general system performance are independent of the number of recovery units in a software system. Somersault has been developed at HP Laboratories. At the time of writing it is undergoing industrial trials.", "title": "" }, { "docid": "69f2773d7901ac9d477604a85fb6a591", "text": "We propose an expert-augmented actor-critic algorithm, which we evaluate on two environments with sparse rewards: Montezuma’s Revenge and a demanding maze from the ViZDoom suite. In the case of Montezuma’s Revenge, an agent trained with our method achieves very good results, consistently scoring above 27,000 points (in many experiments beating the first world). With an appropriate choice of hyperparameters, our algorithm surpasses the performance of the expert data. In a number of experiments, we have observed an unreported bug in Montezuma’s Revenge which allowed the agent to score more than 800,000 points.", "title": "" }, { "docid": "d4b6be1c4d8dd37b71bf536441449ad5", "text": "Why should you wait for some days to get or receive the distributed computing fundamentals simulations and advanced topics book that you order? Why should you take it if you can get the faster one? You can find the same book that you order right here. This is the book that you can receive directly after purchasing. This distributed computing fundamentals simulations and advanced topics is a well-known book in the world, of course many people will try to own it. Why don't you become the first?
Still confused with the way?", "title": "" }, { "docid": "52bee48854d8eaca3b119eb71d79c22d", "text": "In this paper, we present a new combined approach for feature extraction, classification, and context modeling in an iterative framework based on random decision trees and a huge amount of features. A major focus of this paper is to integrate different kinds of feature types like color, geometric context, and auto context features in a joint, flexible and fast manner. Furthermore, we perform an in-depth analysis of multiple feature extraction methods and different feature types. Extensive experiments are performed on challenging facade recognition datasets, where we show that our approach significantly outperforms previous approaches with a performance gain of more than 15% on the most difficult dataset.", "title": "" }, { "docid": "9bc56456f770a1b928d97b8877682a82", "text": "Submodular optimization has found many applications in machine learning and beyond. We carry out the first systematic investigation of inference in probabilistic models defined through submodular functions, generalizing regular pairwise MRFs and Determinantal Point Processes. In particular, we present L-FIELD, a variational approach to general log-submodular and log-supermodular distributions based on suband supergradients. We obtain both lower and upper bounds on the log-partition function, which enables us to compute probability intervals for marginals, conditionals and marginal likelihoods. We also obtain fully factorized approximate posteriors, at the same computational cost as ordinary submodular optimization. Our framework results in convex problems for optimizing over differentials of submodular functions, which we show how to optimally solve. We provide theoretical guarantees of the approximation quality with respect to the curvature of the function. We further establish natural relations between our variational approach and the classical mean-field method. Lastly, we empirically demonstrate the accuracy of our inference scheme on several submodular models.", "title": "" }, { "docid": "20cc5c4aa870918f123e78490d5a5a73", "text": "The interest and demand for female genital rejuvenation surgery are steadily increasing. This report presents a concept of genital beautification consisting of labia minora reduction, labia majora augmentation by autologous fat transplantation, labial brightening by laser, mons pubis reduction by liposuction, and vaginal tightening if desired. Genital beautification was performed for 124 patients between May 2009 and January 2012 and followed up for 1 year to obtain data about satisfaction with the surgery. Of the 124 female patients included in the study, 118 (95.2 %) were happy and 4 (3.2 %) were very happy with their postoperative appearance. In terms of postoperative functionality, 84 patients (67.7 %) were happy and 40 (32.3 %) were very happy. Only 2 patients (1.6 %) were not satisfied with the aesthetic result of their genital beautification procedures, and 10 patients (8.1 %) experienced wound dehiscence. The described technique of genital beautification combines different aesthetic female genital surgery techniques. Like other aesthetic surgeries, these procedures are designed for the subjective improvement of the appearance and feelings of the patients. The effects of the operation are functional and psychological. They offer the opportunity for sexual stimulation and satisfaction. The complication rate is low. 
Superior aesthetic results and patient satisfaction can be achieved by applying this technique.", "title": "" }, { "docid": "e88295646837a58b058b359f71ab49f9", "text": "The learning capability of a neural network improves with increasing depth at higher computational costs. Wider layers with dense kernel connectivity patterns further increase this cost and may hinder real-time inference. We propose feature map and kernel level pruning for reducing the computational complexity of a deep convolutional neural network. Pruning feature maps reduces the width of a layer and hence does not need any sparse representation. Further, kernel pruning converts the dense connectivity pattern into a sparse one. Due to their coarse nature, these pruning granularities can be exploited by GPUs and VLSI based implementations. We propose a simple and generic strategy to choose the least adversarial pruning masks for both granularities. The pruned networks are retrained, which compensates for the loss in accuracy. We obtain the best pruning ratios when we prune a network with both granularities. Experiments with the CIFAR-10 dataset show that more than 85% sparsity can be induced in the convolution layers with less than 1% increase in the misclassification rate of the baseline network.", "title": "" }, { "docid": "dcc9490a771e5b2758181424b0407306", "text": "An ultra-low power wake-up receiver for 2.4-GHz wireless sensor networks, based on a fast sampling method, is presented. A novel multi-branch receiver architecture covers a wide range of interferer scenarios for highly occupied radio channels. The scalability of current consumption versus data rate at a constant sensitivity is another useful feature that fits a multitude of applications, requiring both short reaction times and ultra-low power consumption. The 2.4-GHz OOK receiver comprises a 3-branch analog superheterodyne front-end and six digital 31-bit correlating decoders. It is fabricated in a 130-nm CMOS technology. The current consumption is 2.9 μA at 2.5 V supply voltage and a reaction time of 30 ms. The receiver sensitivity is -80 dBm. Among other sub-100 μW state-of-the-art receivers, the presented implementation shows the best reported sensitivity.", "title": "" }, { "docid": "1b314c55b86355e1fd0ef5d5ce9a89ba", "text": "3D printing technology is rapidly maturing and becoming ubiquitous. One of the remaining obstacles to wide-scale adoption is that the object to be printed must fit into the working volume of the 3D printer. We propose a framework, called Chopper, to decompose a large 3D object into smaller parts so that each part fits into the printing volume. These parts can then be assembled to form the original object. We formulate a number of desirable criteria for the partition, including assemblability, having few components, unobtrusiveness of the seams, and structural soundness. Chopper optimizes these criteria and generates a partition either automatically or with user guidance. Our prototype outputs the final decomposed parts with customized connectors on the interfaces.
We demonstrate the effectiveness of Chopper on a variety of non-trivial real-world objects.", "title": "" }, { "docid": "3e9a214856235ef36a4dd2e9684543b7", "text": "Leaf area index (LAI) is a key biophysical variable that can be used to derive agronomic information for field management and yield prediction. In the context of applying broadband and high spatial resolution satellite sensor data to agricultural applications at the field scale, an improved method was developed to evaluate commonly used broadband vegetation indices (VIs) for the estimation of LAI with VI–LAI relationships. The evaluation was based on direct measurement of corn and potato canopies and on QuickBird multispectral images acquired in three growing seasons. The selected VIs were correlated strongly with LAI but with different efficiencies for LAI estimation as a result of the differences in the stabilities, the sensitivities, and the dynamic ranges. Analysis of error propagation showed that LAI noise inherent in each VI–LAI function generally increased with increasing LAI and the efficiency of most VIs was low at high LAI levels. Among selected VIs, the modified soil-adjusted vegetation index (MSAVI) was the best LAI estimator with the largest dynamic range and the highest sensitivity and overall efficiency for both crops. QuickBird image-estimated LAI with MSAVI–LAI relationships agreed well with ground-measured LAI with the root-mean-square-error of 0.63 and 0.79 for corn and potato canopies, respectively. LAI estimated from the high spatial resolution pixel data exhibited spatial variability similar to the ground plot measurements. For field scale agricultural applications, MSAVI–LAI relationships are easy-to-apply and reasonably accurate for estimating LAI. # 2007 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "0ba388167309c8821f0d6a1e9569f1eb", "text": "With advancement in science and technology, computing systems are becoming increasingly more complex with an increasing variety of heterogeneous software and hardware components. They are thus becoming increasingly more difficult to monitor, manage and maintain. Traditional approaches to system management have been largely based on domain experts through a knowledge acquisition process that translates domain knowledge into operating rules and policies. This has been well known and experienced as a cumber-some, labor intensive, and error prone process. In addition, this process is difficult to keep up with the rapidly changing environments. There is thus a pressing need for automatic and efficient approaches to monitor and manage complex computing systems.A popular approach to system management is based on analyzing system log files. However, some new aspects of the log files have been less emphasized in existing methods from data mining and machine learning community. The various formats and relatively short text messages of log files, and temporal characteristics in data representation pose new challenges. In this paper, we will describe our research efforts on mining system log files for automatic management. 
In particular, we apply text mining techniques to categorize messages in log files into common situations, improve categorization accuracy by considering the temporal characteristics of log messages, and utilize visualization tools to evaluate and validate the interesting temporal patterns for system management.", "title": "" }, { "docid": "8c0d3cfffb719f757f19bbb33412d8c6", "text": "In this paper, we present a parallel Image-to-Mesh Conversion (I2M) algorithm with quality and fidelity guarantees achieved by dynamic point insertions and removals. Starting directly from an image, it is able to recover the isosurface and mesh the volume with tetrahedra of good shape. Our tightly-coupled shared-memory parallel speculative execution paradigm employs carefully designed contention managers, load balancing, synchronization and optimizations schemes which boost the parallel efficiency with little overhead: our single-threaded performance is faster than CGAL, the state of the art sequential mesh generation software we are aware of. The effectiveness of our method is shown on Blacklight, the Pittsburgh Supercomputing Center's cache-coherent NUMA machine, via a series of case studies justifying our choices. We observe a more than 82% strong scaling efficiency for up to 64 cores, and a more than 95% weak scaling efficiency for up to 144 cores, reaching a rate of 14.7 Million Elements per second. To the best of our knowledge, this is the fastest and most scalable 3D Delaunay refinement algorithm.", "title": "" }, { "docid": "6d6390e51589f5258deeb420547dd63c", "text": "Solar and wind energy systems are omnipresent, freely available, environmental friendly, and they are considered as promising power generating sources due to their availability and topological advantages for local power generations. Hybrid solar–wind energy systems, uses two renewable energy sources, allow improving the system efficiency and power reliability and reduce the energy storage requirements for stand-alone applications. The hybrid solar–wind systems are becoming popular in remote area power generation applications due to advancements in renewable energy technologies and substantial rise in prices of petroleum products. This paper is to review the current state of the simulation, optimization and control technologies for the stand-alone hybrid solar–wind energy systems with battery storage. It is found that continued research and development effort in this area is still needed for improving the systems’ performance, establishing techniques for accurately predicting their output and reliably integrating them with other renewable or conventional power generation sources. 2009 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "05cc51de2e8ee6b6f61b62b6bca29a29", "text": "Advanced metering devices (smart meters) are being installed throughout electric networks in Germany (as well as in other parts of Europe and in the United States). Unfortunately, smart meters are able to become surveillance devices that monitor the behavior of the customers. This leads to unprecedented invasions of consumer privacy. The high-resolution energy consumption data which are transmitted to the utility company allow intrusive identification and monitoring of equipment within consumers’ homes (e. g., TV set, refrigerator, toaster, and oven). Our research shows that the analysis of the household’s electricity usage profile at a 0.5s−1 sample rate does reveal what channel the TV set in the household was displaying. 
It is also possible to identify (copyright-protected) audiovisual content in the power profile that is displayed on a CRT, a Plasma display TV or an LCD television set with dynamic backlighting. Our test results indicate that a 5-minute chunk of consecutive viewing without major interference by other appliances is sufficient to identify the content. Our investigation also reveals that the data transmitted via the Internet by the smart meter are unsigned and unencrypted. Our tests were performed on a sealed, operational smart meter used for electricity metering in a private home in North Rhine-Westphalia, Germany. Parameters for other television sets were obtained with an identical smart meter deployed in a university lab.", "title": "" } ]
scidocsrr
3dde2e750caa8624282518369f4f6a1f
Evaluating Display Fidelity and Interaction Fidelity in a Virtual Reality Game
[ { "docid": "c9b7832cd306fc022e4a376f10ee8fc8", "text": "This paper describes a study to assess the influence of a variety of factors on reported level of presence in immersive virtual environments. It introduces the idea of stacking depth, that is, where a participant can simulate the process of entering the virtual environment while already in such an environment, which can be repeated to several levels of depth. An experimental study including 24 subjects was carried out. Half of the subjects were transported between environments by using virtual head-mounted displays, and the other half by going through doors. Three other binary factors were whether or not gravity operated, whether or not the subject experienced a virtual precipice, and whether or not the subject was followed around by a virtual actor. Visual, auditory, and kinesthetic representation systems and egocentric/exocentric perceptual positions were assessed by a preexperiment questionnaire. Presence was assessed by the subjects as their sense of being there, the extent to which they experienced the virtual environments as more the presenting reality than the real world in which the experiment was taking place, and the extent to which the subject experienced the virtual environments as places visited rather than images seen. A logistic regression analysis revealed that subjective reporting of presence was significantly positively associated with visual and kinesthetic representation systems, and negatively with the auditory system. This was not surprising since the virtual reality system used was primarily visual. The analysis also showed a significant and positive association with stacking level depth for those who were transported between environments by using the virtual HMD, and a negative association for those who were transported through doors. Finally, four of the subjects moved their real left arm to match movement of the left arm of the virtual body displayed by the system. These four scored significantly higher on the kinesthetic representation system than the remainder of the subjects.", "title": "" }, { "docid": "467b4537bdc6a466909d819e67d0ebc1", "text": "We have created an immersive application for statistical graphics and have investigated what benefits it offers over more traditional data analysis tools. This paper presents a description of both the traditional data analysis tools and our virtual environment, and results of an experiment designed to determine if an immersive environment based on the XGobi desktop system provides advantages over XGobi for analysis of high-dimensional statistical data. The experiment included two aspects of each environment: three structure detection (visualization) tasks and one ease of interaction task. The subjects were given these tasks in both the C2 virtual environment and a workstation running XGobi. The experiment results showed an improvement in participants’ ability to perform structure detection tasks in the C2 to their performance in the desktop environment. However, participants were more comfortable with the interaction tools in the desktop", "title": "" } ]
[ { "docid": "76049ed267e9327412d709014e8e9ed4", "text": "A wireless massive MIMO system entails a large number (tens or hundreds) of base station antennas serving a much smaller number of users, with large gains in spectralefficiency and energy-efficiency compared with conventional MIMO technology. Until recently it was believed that in multicellular massive MIMO system, even in the asymptotic regime, as the number of service antennas tends to infinity, the performance is limited by directed inter-cellular interference. This interference results from unavoidable re-use of reverse-link training sequences (pilot contamination) by users in different cells. We devise a new concept that leads to the effective elimination of inter-cell interference in massive MIMO systems. This is achieved by outer multi-cellular precoding, which we call LargeScale Fading Precoding (LSFP). The main idea of LSFP is that each base station linearly combines messages aimed to users from different cells that re-use the same training sequence. Crucially, the combining coefficients depend only on the slowfading coefficients between the users and the base stations. Each base station independently transmits its LSFP-combined symbols using conventional linear precoding that is based on estimated fast-fading coefficients. Further, we derive estimates for downlink and uplink SINRs and capacity lower bounds for the case of massive MIMO systems with LSFP and a finite number of base station antennas.", "title": "" }, { "docid": "d65a047b3f381ca5039d75fd6330b514", "text": "This paper presents an enhanced algorithm for matching laser scan maps using histogram correlations. The histogram representation effectively summarizes a map's salient features such that pairs of maps can be matched efficiently without any prior guess as to their alignment. The histogram matching algorithm has been enhanced in order to work well in outdoor unstructured environments by using entropy metrics, weighted histograms and proper thresholding of quality metrics. Thus our large-scale scan-matching SLAM implementation has a vastly improved ability to close large loops in real-time even when odometry is not available. Our experimental results have demonstrated a successful mapping of the largest area ever mapped to date using only a single laser scanner. We also demonstrate our ability to solve the lost robot problem by localizing a robot to a previously built map without any prior initialization.", "title": "" }, { "docid": "39007b91989c42880ff96e7c5bdcf519", "text": "Feature selection has aroused considerable research interests during the last few decades. Traditional learning-based feature selection methods separate embedding learning and feature ranking. In this paper, we propose a novel unsupervised feature selection framework, termed as the joint embedding learning and sparse regression (JELSR), in which the embedding learning and sparse regression are jointly performed. Specifically, the proposed JELSR joins embedding learning with sparse regression to perform feature selection. To show the effectiveness of the proposed framework, we also provide a method using the weight via local linear approximation and adding the ℓ2,1-norm regularization, and design an effective algorithm to solve the corresponding optimization problem. Furthermore, we also conduct some insightful discussion on the proposed feature selection approach, including the convergence analysis, computational complexity, and parameter determination. 
In all, the proposed framework not only provides a new perspective to view traditional methods but also evokes some other deep researches for feature selection. Compared with traditional unsupervised feature selection methods, our approach could integrate the merits of embedding learning and sparse regression. Promising experimental results on different kinds of data sets, including image, voice data and biological data, have validated the effectiveness of our proposed algorithm.", "title": "" }, { "docid": "038db4d053ff795f35ae9731f6e27c9a", "text": "Intravascular injection leading to skin necrosis or blindness is the most serious complication of facial injection with fillers. It may be underreported and the outcome of cases are unclear. Early recognitions of the symptoms and signs may facilitate prompt treatment if it does occur avoiding the potential sequelae of intravascular injection. To determine the frequency of intravascular injection among experienced injectors, the outcomes of these intravascular events, and the management strategies. An internet-based survey was sent to 127 injectors worldwide who act as trainers for dermal fillers globally. Of the 52 respondents from 16 countries, 71 % had ≥11 years of injection experience, and 62 % reported one or more intravascular injections. The most frequent initial signs were minor livedo (63 % of cases), pallor (41 %), and symptoms of pain (37 %). Mildness/absence of pain was a feature of 47 % of events. Hyaluronidase (5 to >500 U) was used immediately on diagnosis to treat 86 % of cases. The most commonly affected areas were the nasolabial fold and nose (39 % each). Of all the cases, only 7 % suffered moderate scarring requiring surface treatments. Uneventful healing was the usual outcome, with 86 % being resolved within 14 days. Intravascular injection with fillers can occur even at the hands of experienced injectors. It may not be always associated with immediate pain or other classical symptoms and signs. Prompt effective management leads to favorable outcomes, and will prevent catastrophic consequences such as skin necrosis. Intravascular injection leading to blindness may not be salvageable and needs further study. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266 .", "title": "" }, { "docid": "7a9b9633243d84978d9e975744642e18", "text": "Our aim is to provide a pixel-level object instance labeling of a monocular image. We build on recent work [27] that trained a convolutional neural net to predict instance labeling in local image patches, extracted exhaustively in a stride from an image. A simple Markov random field model using several heuristics was then proposed in [27] to derive a globally consistent instance labeling of the image. In this paper, we formulate the global labeling problem with a novel densely connected Markov random field and show how to encode various intuitive potentials in a way that is amenable to efficient mean field inference [13]. Our potentials encode the compatibility between the global labeling and the patch-level predictions, contrast-sensitive smoothness as well as the fact that separate regions form different instances. 
Our experiments on the challenging KITTI benchmark [8] demonstrate that our method achieves a significant performance boost over the baseline [27].", "title": "" }, { "docid": "505a9b6139e8cbf759652dc81f989de9", "text": "SQL injection attacks, a class of injection flaw in which specially crafted input strings leads to illegal queries to databases, are one of the topmost threats to web applications. A Number of research prototypes and commercial products that maintain the queries structure in web applications have been developed. But these techniques either fail to address the full scope of the problem or have limitations. Based on our observation that the injected string in a SQL injection attack is interpreted differently on different databases. A characteristic diagnostic feature of SQL injection attacks is that they change the intended structure of queries issued. Pattern matching is a technique that can be used to identify or detect any anomaly packet from a sequential action. Injection attack is a method that can inject any kind of malicious string or anomaly string on the original string. Most of the pattern based techniques are used static analysis and patterns are generated from the attacked statements. In this paper, we proposed a detection and prevention technique for preventing SQL Injection Attack (SQLIA) using Aho–Corasick pattern matching algorithm. In this paper, we proposed an overview of the architecture. In the initial stage evaluation, we consider some sample of standard attack patterns and it shows that the proposed algorithm is works well against the SQL Injection Attack. Keywords—SQL Injection Attack; Pattern matching; Static Pattern; Dynamic Pattern", "title": "" }, { "docid": "5931169b6433d77496dfc638988399eb", "text": "Image annotation has been an important task for visual information retrieval. It usually involves a multi-class multi-label classification problem. To solve this problem, many researches have been conducted during last two decades, although most of the proposed methods rely on the training data with the ground truth. To prepare such a ground truth is an expensive and laborious task that cannot be easily scaled, and “semantic gaps” between low-level visual features and high-level semantics still remain. In this paper, we propose a novel approach, ontology based supervised learning for multi-label image annotation, where classifiers' training is conducted using easily gathered Web data. Moreover, it takes advantage of both low-level visual features and high-level semantic information of given images. Experimental results using 0.507 million Web images database show effectiveness of the proposed framework over existing method.", "title": "" }, { "docid": "940f460457b117c156b6e39e9586a0b9", "text": "The flipped classroom is an innovative pedagogical approach that focuses on learner-centered instruction. The purposes of this report were to illustrate how to implement the flipped classroom and to describe students' perceptions of this approach within 2 undergraduate nutrition courses. The template provided enables faculty to design before, during, and after class activities and assessments based on objectives using all levels of Bloom's taxonomy. The majority of the 142 students completing the evaluation preferred the flipped method compared with traditional pedagogical strategies. 
The process described in the report was successful for both faculty and students.", "title": "" }, { "docid": "85b3f55fffff67b9d3a0305b258dcd8e", "text": "Sézary syndrome (SS) has a poor prognosis and few guidelines for optimizing therapy. The US Cutaneous Lymphoma Consortium, to improve clinical care of patients with SS and encourage controlled clinical trials of promising treatments, undertook a review of the published literature on therapeutic options for SS. An overview of the immunopathogenesis and standardized review of potential current treatment options for SS including metabolism, mechanism of action, overall efficacy in mycosis fungoides and SS, and common or concerning adverse effects is first discussed. The specific efficacy of each treatment for SS, both as monotherapy and combination therapy, is then reported using standardized criteria for both SS and response to therapy with the type of study defined by a modification of the US Preventive Services guidelines for evidence-based medicine. Finally, guidelines for the treatment of SS and suggestions for adjuvant treatment are noted.", "title": "" }, { "docid": "d6ee313e66b33bfebc87bb9174aed00f", "text": "The majority of arm amputees live in developing countries and cannot afford prostheses beyond cosmetic hands with simple grippers. Customized hand prostheses with high performance are too expensive for the average arm amputee. Currently, commercially available hand prostheses use costly and heavy DC motors for actuation. This paper presents an inexpensive hand prosthesis, which uses a 3D printable design to reduce the cost of customizable parts and novel electro-thermal actuator based on nylon 6-6 polymer muscles. The prosthetic hand was tested and found to be able to grasp a variety of shapes 100% of the time tested (sphere, cylinder, cube, and card) and other commonly used tools. Grip times for each object were repeatable with small standard deviations. With a low estimated material cost of $170 for actuation, this prosthesis could have a potential to be used for low-cost and high-performance system.", "title": "" }, { "docid": "aa749c00010e5391710738cc235c1c35", "text": "Traditional summarization initiatives have been focused on specific types of documents such as articles, reviews, videos, image feeds, or tweets, a practice which may result in pigeonholing the summarization task in the context of modern, content-rich multimedia collections. Consequently, much of the research to date has revolved around mostly toy problems in narrow domains and working on single-source media types. We argue that summarization and story generation systems need to refocus the problem space in order to meet the information needs in the age of user-generated content in di↵erent formats and languages. Here we create a framework for flexible multimedia storytelling. Narratives, stories, and summaries carry a set of challenges in big data and dynamic multi-source media that give rise to new research in spatial-temporal representation, viewpoint generation, and explanation.", "title": "" }, { "docid": "4a1559bd8a401d3273c34ab20931611d", "text": "Spiking Neural Networks (SNNs) are widely regarded as the third generation of artificial neural networks, and are expected to drive new classes of recognition, data analytics and computer vision applications. However, large-scale SNNs (e.g., of the scale of the human visual cortex) are highly compute and data intensive, requiring new approaches to improve their efficiency. 
Complementary to prior efforts that focus on parallel software and the design of specialized hardware, we propose AxSNN, the first effort to apply approximate computing to improve the computational efficiency of evaluating SNNs. In SNNs, the inputs and outputs of neurons are encoded as a time series of spikes. A spike at a neuron's output triggers updates to the potentials (internal states) of neurons to which it is connected. AxSNN determines spike-triggered neuron updates that can be skipped with little or no impact on output quality and selectively skips them to improve both compute and memory energy. Neurons that can be approximated are identified by utilizing various static and dynamic parameters such as the average spiking rates and current potentials of neurons, and the weights of synaptic connections. Such a neuron is placed into one of many approximation modes, wherein the neuron is sensitive only to a subset of its inputs and sends spikes only to a subset of its outputs. A controller periodically updates the approximation modes of neurons in the network to achieve energy savings with minimal loss in quality. We apply AxSNN to both hardware and software implementations of SNNs. For hardware evaluation, we designed SNNAP, a Spiking Neural Network Approximate Processor that embodies the proposed approximation strategy, and synthesized it to 45nm technology. The software implementation of AxSNN was evaluated on a 2.7 GHz Intel Xeon server with 128 GB memory. Across a suite of 6 image recognition benchmarks, AxSNN achieves 1.4–5.5x reduction in scalar operations for network evaluation, which translates to 1.2–3.62x and 1.26–3.9x improvement in hardware and software energies respectively, for no loss in application quality. Progressively higher energy savings are achieved with modest reductions in output quality.", "title": "" }, { "docid": "d6602271d7024f7d894b14da52299ccc", "text": "BACKGROUND\nMost articles on face composite tissue allotransplantation have considered ethical and immunologic aspects. Few have dealt with the technical aspects of graft procurement. The authors report the technical difficulties involved in procuring a lower face graft for allotransplantation.\n\n\nMETHODS\nAfter a preclinical study of 20 fresh cadavers, the authors carried out an allotransplantation of the lower two-thirds of the face on a patient in January of 2007. The graft included all the perioral muscles, the facial nerves (VII, V2, and V3) and, for the first time, the parotid glands.\n\n\nRESULTS\nThe preclinical study and clinical results confirm that complete revascularization of a graft consisting of the lower two-thirds of the face is possible from a single facial pedicle. All dissections were completed within 3 hours. Graft procurement for the clinical study took 4 hours. The authors harvested the soft tissues of the face en bloc to save time and to prevent tissue injury. They restored the donor's face within approximately 4 hours, using a resin mask colored to resemble the donor's skin tone. All nerves were easily reattached. Voluntary activity was detected on clinical examination 5 months postoperatively, and electromyography confirmed nerve regrowth, with activity predominantly on the left side. The patient requested local anesthesia for biopsies performed in month 4.\n\n\nCONCLUSIONS\nPartial facial composite tissue allotransplantation of the lower two-thirds of the face is technically feasible, with a good cosmetic and functional outcome in selected clinical cases. 
Flaps of this type establish vascular and neurologic connections in a reliable manner and can be procured with a rapid, standardized procedure.", "title": "" }, { "docid": "8385f72bd060eee8c59178bc0b74d1e3", "text": "Gesture recognition plays an important role in human-computer interaction. However, most existing methods are complex and time-consuming, which limit the use of gesture recognition in real-time environments. In this paper, we propose a static gesture recognition system that combines depth information and skeleton data to classify gestures. Through feature fusion, hand digit gestures of 0-9 can be recognized accurately and efficiently. According to the experimental results, the proposed gesture recognition system is effective and robust, which is invariant to complex background, illumination changes, reversal, structural distortion, rotation etc. We have tested the system both online and offline which proved that our system is satisfactory to real-time requirements, and therefore it can be applied to gesture recognition in real-world human-computer interaction systems.", "title": "" }, { "docid": "af49fef0867a951366cfb21288eeb3ed", "text": "As a discriminative method of one-shot learning, Siamese deep network allows recognizing an object from a single exemplar with the same class label. However, it does not take the advantage of the underlying structure and relationship among a multitude of instances since it only relies on pairs of instances for training. In this paper, we propose a quadruplet deep network to examine the potential connections among the training instances, aiming to achieve a more powerful representation. We design four shared networks that receive multi-tuple of instances as inputs and are connected by a novel loss function consisting of pair-loss and tripletloss. According to the similarity metric, we select the most similar and the most dissimilar instances as the positive and negative inputs of triplet loss from each multi-tuple. We show that this scheme improves the training performance and convergence speed. Furthermore, we introduce a new weighted pair loss for an additional acceleration of the convergence. We demonstrate promising results for model-free tracking-by-detection of objects from a single initial exemplar in the Visual Object Tracking benchmark.", "title": "" }, { "docid": "2dbffa465a1d0b9c7e2ae1044dd0cdcb", "text": "Total variation denoising is a nonlinear filtering method well suited for the estimation of piecewise-constant signals observed in additive white Gaussian noise. The method is defined by the minimization of a particular nondifferentiable convex cost function. This letter describes a generalization of this cost function that can yield more accurate estimation of piecewise constant signals. The new cost function involves a nonconvex penalty (regularizer) designed to maintain the convexity of the cost function. The new penalty is based on the Moreau envelope. The proposed total variation denoising method can be implemented using forward–backward splitting.", "title": "" }, { "docid": "9ff6d7a36646b2f9170bd46d14e25093", "text": "Psychedelic drugs such as LSD and psilocybin are often claimed to be capable of inducing life-changing experiences described as mystical or transcendental, especially if high doses are taken. 
The present study examined possible enduring effects of such experiences by comparing users of psychedelic drugs (n = 88), users of nonpsychedelic illegal drugs (e.g., marijuana, amphetamines) (n = 29) and non illicit drug-using social drinkers (n = 66) on questionnaire measures of values, beliefs and emotional empathy. Samples were obtained from Israel (n = 110) and Australia (n = 73) in a cross-cultural comparison to see if values associated with psychedelic drug use transcended culture of origin. Psychedelic users scored significantly higher on mystical beliefs (e.g., oneness with God and the universe) and life values of spirituality and concern for others than the other groups, and lower on the value of financial prosperity, irrespective of culture of origin. Users of nonpsychedelic illegal drugs scored significantly lower on a measure of coping ability than both psychedelic users and non illicit drug users. Both groups of illegal drug users scored significantly higher on empathy than non illicit drug users. Results are discussed in the context of earlier findings from Pahnke (1966) and Doblin (1991) of the transformative effect of psychedelic experiences, although the possibility remains that present findings reflect predrug characteristics of those who chose to take psychedelic drugs rather than effects of the drugs themselves.", "title": "" }, { "docid": "bde769df506e361bf374bd494fc5db6f", "text": "Molded interconnect devices (MID) allow the realization of electronic circuits on injection molded thermoplastics. MID antennas can be manufactured as part of device casings without the need for additional printed circuit boards or attachment of antennas printed on foil. Baluns, matching networks, amplifiers and connectors can be placed on the polymer in the vicinity of the antenna. A MID dipole antenna for 1 GHz is designed, manufactured and measured. A prototype of the antenna is built with laser direct structuring (LDS) on a Xantar LDS 3720 substrate. Measured return loss and calibrated gain patterns are compared to simulation results.", "title": "" }, { "docid": "7838934c12f00f987f6999460fc38ca1", "text": "The Internet has fostered an unconventional and powerful style of collaboration: \"wiki\" web sites, where every visitor has the power to become an editor. In this paper we investigate the dynamics of Wikipedia, a prominent, thriving wiki. We make three contributions. First, we introduce a new exploratory data analysis tool, the history flow visualization, which is effective in revealing patterns within the wiki context and which we believe will be useful in other collaborative situations as well. Second, we discuss several collaboration patterns highlighted by this visualization tool and corroborate them with statistical analysis. Third, we discuss the implications of these patterns for the design and governance of online collaborative social spaces. We focus on the relevance of authorship, the value of community surveillance in ameliorating antisocial behavior, and how authors with competing perspectives negotiate their differences.", "title": "" }, { "docid": "d050730d7a5bd591b805f1b9729b0f2d", "text": "In plant phenotyping, it has become important to be able to measure many features on large image sets in order to aid genetic discovery. The size of the datasets, now often captured robotically, often precludes manual inspection, hence the motivation for finding a fully automated approach. 
Deep learning is an emerging field that promises unparalleled results on many data analysis problems. Building on artificial neural networks, deep approaches have many more hidden layers in the network, and hence have greater discriminative and predictive power. We demonstrate the use of such approaches as part of a plant phenotyping pipeline. We show the success offered by such techniques when applied to the challenging problem of image-based plant phenotyping and demonstrate state-of-the-art results (>97% accuracy) for root and shoot feature identification and localization. We use fully automated trait identification using deep learning to identify quantitative trait loci in root architecture datasets. The majority (12 out of 14) of manually identified quantitative trait loci were also discovered using our automated approach based on deep learning detection to locate plant features. We have shown deep learning-based phenotyping to have very good detection and localization accuracy in validation and testing image sets. We have shown that such features can be used to derive meaningful biological traits, which in turn can be used in quantitative trait loci discovery pipelines. This process can be completely automated. We predict a paradigm shift in image-based phenotyping brought about by such deep learning approaches, given sufficient training sets.", "title": "" } ]
scidocsrr
93cb75342bfe9ae9a2e6faea0f043b3e
Chimera: Large-Scale Classification using Machine Learning, Rules, and Crowdsourcing
[ { "docid": "cf7c5ae92a0514808232e4e9d006024a", "text": "We present an interactive, hybrid human-computer method for object classification. The method applies to classes of objects that are recognizable by people with appropriate expertise (e.g., animal species or airplane model), but not (in general) by people without such expertise. It can be seen as a visual version of the 20 questions game, where questions based on simple visual attributes are posed interactively. The goal is to identify the true class while minimizing the number of questions asked, using the visual content of the image. We introduce a general framework for incorporating almost any off-the-shelf multi-class object recognition algorithm into the visual 20 questions game, and provide methodologies to account for imperfect user responses and unreliable computer vision algorithms. We evaluate our methods on Birds-200, a difficult dataset of 200 tightly-related bird species, and on the Animals With Attributes dataset. Our results demonstrate that incorporating user input drives up recognition accuracy to levels that are good enough for practical applications, while at the same time, computer vision reduces the amount of human interaction required.", "title": "" }, { "docid": "d5f2cb3839a8e129253e3433b9e9a5bc", "text": "Product classification in Commerce search (e.g., Google Product Search, Bing Shopping) involves associating categories to offers of products from a large number of merchants. The categorized offers are used in many tasks including product taxonomy browsing and matching merchant offers to products in the catalog. Hence, learning a product classifier with high precision and recall is of fundamental importance in order to provide a high-quality shopping experience. A product offer typically consists of a short textual description and an image depicting the product. The traditional approach to this classification task is to learn a classifier using only the textual descriptions of the products. In this paper, we show that the use of images, a weaker signal in our setting, in conjunction with the textual descriptions, a more discriminative signal, can considerably improve the precision of the classification task, irrespective of the type of classifier being used. We present a novel classification approach, Cross Adapt, that is cognizant of the disparity in the discriminative power of different types of signals and hence makes use of the confusion matrix of the dominant signal (text in our setting) to prudently leverage the weaker signal (image), for an improved performance. Our evaluation performed on data from a major Commerce search engine's catalog shows a 12% (absolute) improvement in precision at 100% coverage, and a 16% (absolute) improvement in recall at 90% precision compared to classifiers that only use textual description of products. In addition, Cross Adapt also provides a more accurate classifier based only on the dominant signal (text) that can be used in situations in which only the dominant signal is available during application time.", "title": "" } ]
[ { "docid": "97e2077fc8b801656f046f8619fe6647", "text": "In this paper we present a fairy tale corpus that was semantically organized and tagged. The proposed method uses latent semantic mapping to represent the stories and a top-n item-to-item recommendation algorithm to define clusters of similar stories. Each story can be placed in more than one cluster and stories in the same cluster are related to the same concepts. The results were manually evaluated regarding the groupings as perceived by human judges. The evaluation resulted in a precision of 0.81, a recall of 0.69, and an f-measure of 0.75 when using tf*idf for word frequency. Our method is topicand language-independent, and, contrary to traditional clustering methods, automatically defines the number of clusters based on the set of documents. This method can be used as a setup for traditional clustering or classification. The resulting corpus will be used for recommendation purposes, although it can also be used for emotion extraction, semantic role extraction, meaning extraction, text classification, among others.", "title": "" }, { "docid": "dbc7e759ce30307475194adb4ca37f1f", "text": "Pharyngeal arches appear in the 4th and 5th weeks of development of the human embryo. The 1st pharyngeal arch develops into the incus and malleus, premaxilla, maxilla, zygomatic bone; part of the temporal bone, the mandible and it contributes to the formation of bones of the middle ear. The musculature of the 1st pharyngeal arch includes muscles of mastication, anterior belly of the digastric mylohyoid, tensor tympani and tensor palatini. The second pharyngeal arch gives rise to the stapes, styloid process of the temporal bone, stylohyoid ligament, the lesser horn and upper part of the body of the hyoid bone. The stapedius muscle, stylohyoid, posterior belly of the digastric, auricular and muscles of facial expressional all derive from the 2nd pharyngeal arch. Otocephaly has been classified as a defect of blastogenesis, with structural defects primarily involving the first and second branchial arch derivatives. It may also result in dysmorphogenesis of other midline craniofacial field structures, such as the forebrain and axial body structures.", "title": "" }, { "docid": "a91ba04903c584a1165867c7215385d0", "text": "The INLA approach for approximate Bayesian inference for latent Gaussian models has been shown to give fast and accurate estimates of posterior marginals and also to be a valuable tool in practice via the R-package R-INLA. In this paper we formalize new developments in the R-INLA package and show how these features greatly extend the scope of models that can be analyzed by this interface. We also discuss the current default method in R-INLA to approximate posterior marginals of the hyperparameters using only a modest number of evaluations of the joint posterior distribution of the hyperparameters, without any need for numerical integration.", "title": "" }, { "docid": "54314e448a1dd146289c6c4859ab9791", "text": "The article investigates how the difficulties caused by the flexibility of the endoscope shaft could be solved and to provide a categorized overview of designs that potentially provide a solution. 
The following are discussed: paradoxical problem of flexible endoscopy; NOTES or hybrid endoscopy surgery; design challenges; shaft-guidance: guiding principles; virtual track guidance; physical track guidance; shaft-guidance: rigidity control; material stiffening; structural stiffening; and hybrid stiffening.", "title": "" }, { "docid": "3a322129019eed67686018404366fe0b", "text": "Scientists and casual users need better ways to query RDF databases or Linked Open Data. Using the SPARQL query language requires not only mastering its syntax and semantics but also understanding the RDF data model, the ontology used, and URIs for entities of interest. Natural language query systems are a powerful approach, but current techniques are brittle in addressing the ambiguity and complexity of natural language and require expensive labor to supply the extensive domain knowledge they need. We introduce a compromise in which users give a graphical \"skeleton\" for a query and annotates it with freely chosen words, phrases and entity names. We describe a framework for interpreting these \"schema-agnostic queries\" over open domain RDF data that automatically translates them to SPARQL queries. The framework uses semantic textual similarity to find mapping candidates and uses statistical approaches to learn domain knowledge for disambiguation, thus avoiding expensive human efforts required by natural language interface systems. We demonstrate the feasibility of the approach with an implementation that performs well in an evaluation on DBpedia data.", "title": "" }, { "docid": "ade0742bcb8fa3a195b142ba39d245ce", "text": "We describe a new approach to solving the click-through rate (CTR) prediction problem in sponsored search by means of MatrixNet, the proprietary implementation of boosted trees. This problem is of special importance for the search engine, because choosing the ads to display substantially depends on the predicted CTR and greatly affects the revenue of the search engine and user experience. We discuss different issues such as evaluating and tuning MatrixNet algorithm, feature importance, performance, accuracy and training data set size. Finally, we compare MatrixNet with several other methods and present experimental results from the production system.", "title": "" }, { "docid": "435307df5495b497ff9065e9d98af044", "text": "Recent breakthroughs in word representation methods have generated a new spark of enthusiasm amidst the computational linguistic community, with methods such as Word2Vec have indeed shown huge potential to compress insightful information on words’ contextual meaning in lowdimensional vectors. While the success of these representations has mainly been harvested for traditional NLP tasks such as word prediction or sentiment analysis, recent studies have begun using these representations to track the dynamics of language and meaning over time. However, recent works have also shown these embeddings to be extremely noisy and training-set dependent, thus considerably restricting the scope and significance of this potential application. In this project, building upon the work presented by [1] in 2015, we thus propose to investigate ways of defining interpretable embeddings, and as well as alternative ways of assessing the dynamics of semantic changes so as to endow more statistical power to the analysis. 1 Problem Statement, Motivation and Prior Work The recent success of Neural-Network-generated word embeddings (word2vec, Glove, etc.) 
for traditional NLP tasks such as word prediction or text sentiment analysis has motivated the scientific community to use these representations as a way to analyze language itself. Indeed, if these low-dimensional word representations have proven to successfully carry both semantic and syntactic information, such a successful information compression could thus potentially be harvested to tackle more complex linguistic problems, such as monitoring language dynamics over time or space. In particular, in [1], [5], and [7], word embeddings are used to capture drifts of word meanings over time through the analysis of the temporal evolution of any given word’ closest neighbors. Other studies [6] use them to relate semantic shifts to geographical considerations. However, as highlighted by Hahn and Hellrich in [3], the inherent randomness of the methods used to encode these representations results in the high variability of any given word’s closest neighbors, thus considerably narrowing the statistical power of the study: how can we detect real semantic changes from the ambient jittering inherent to the embeddings’ representations? Can we try to provide a perhaps more interpretable and sounder basis of comparison than the neighborhoods to detect these changes? Building upon the methodology developed by Hamilton and al [1] to study language dynamics and the observations made by Hahn and Hellrich [3], we propose to tackle this problem from a mesoscopic scale: the intuition would be that if local neighborhoods are too unstable, we should thus look at information contained in the overall embedding matrix to build our statistical framework. In particular, a first idea is that we should try to evaluate the existence of a potentially ”backbone” structure of the embeddings. Indeed, it would seem intuitive that if certain words –such as “gay” or “asylum” (as observed by Hamilton et al) have exhibited important drifts in meaning throughout the 20th century, another large set of words – such as “food”,“house” or “people” – have undergone very little semantic change over time. As such, we should expect the relative distance between atoms in this latter set (as defined by the acute angle between their respective embeddings) to remain relatively constant from decade to decade. Hence, one could try to use this stable backbone graph as a way to triangulate the movement of the other word vectors over time, thus hopefully inducing more interpretable changes over time. Such an approach could also be used to answer the question of assessing the validity of our embeddings for linguistic purposes: how well do these embeddings capture similarity and nuances between words? A generally", "title": "" }, { "docid": "001b3155f0d67fd153173648cd483ac2", "text": "A new approach to the problem of multimodality medical image registration is proposed, using a basic concept from information theory, mutual information (MI), or relative entropy, as a new matching criterion. The method presented in this paper applies MI to measure the statistical dependence or information redundancy between the image intensities of corresponding voxels in both images, which is assumed to be maximal if the images are geometrically aligned. Maximization of MI is a very general and powerful criterion, because no assumptions are made regarding the nature of this dependence and no limiting constraints are imposed on the image content of the modalities involved. 
The accuracy of the MI criterion is validated for rigid body registration of computed tomography (CT), magnetic resonance (MR), and photon emission tomography (PET) images by comparison with the stereotactic registration solution, while robustness is evaluated with respect to implementation issues, such as interpolation and optimization, and image content, including partial overlap and image degradation. Our results demonstrate that subvoxel accuracy with respect to the stereotactic reference solution can be achieved completely automatically and without any prior segmentation, feature extraction, or other preprocessing steps which makes this method very well suited for clinical applications.", "title": "" }, { "docid": "a35efadff207d320af4ae6a5be2e1689", "text": "Human-Robot interaction brings new challenges to motion planning. The human, who is generally considered as an obstacle for the robot, needs to be considered as a separate entity that has a position, a posture, a field of view and an activity. These properties can be represented as new constraints to the motion generation mechanisms. In this paper we present three human related constraints to the motion planning for object hand over scenarios. We also describe a new planning method to consider these constraints. The resulting system automatically computes where the object should be transferred to the human, and the motion of the whole robot considering human’s comfort.", "title": "" }, { "docid": "7ec2f6b720cdcabbcdfb7697dbdd25ae", "text": "To help marketers to build and manage their brands in a dramatically changing marketing communications environment, the customer-based brand equity model that emphasizes the importance of understanding consumer brand knowledge structures is put forth. Specifically, the brand resonance pyramid is reviewed as a means to track how marketing communications can create intense, active loyalty relationships and affect brand equity. According to this model, integrating marketing communications involves mixing and matching different communication options to establish the desired awareness and image in the minds of consumers. The versatility of on-line, interactive marketing communications to marketers in brand building is also addressed.", "title": "" }, { "docid": "43233e45f07b80b8367ac1561356888d", "text": "Current Zero-Shot Learning (ZSL) approaches are restricted to recognition of a single dominant unseen object category in a test image. We hypothesize that this setting is ill-suited for real-world applications where unseen objects appear only as a part of a complex scene, warranting both the ‘recognition’ and ‘localization’ of an unseen category. To address this limitation, we introduce a new ‘Zero-Shot Detection’ (ZSD) problem setting, which aims at simultaneously recognizing and locating object instances belonging to novel categories without any training examples. We also propose a new experimental protocol for ZSD based on the highly challenging ILSVRC dataset, adhering to practical issues, e.g., the rarity of unseen objects. To the best of our knowledge, this is the first end-to-end deep network for ZSD that jointly models the interplay between visual and semantic domain information. To overcome the noise in the automatically derived semantic descriptions, we utilize the concept of meta-classes to design an original loss function that achieves synergy between max-margin class separation and semantic space clustering. 
Furthermore, we present a baseline approach extended from recognition to detection setting. Our extensive experiments show significant performance boost over the baseline on the imperative yet difficult ZSD problem.", "title": "" }, { "docid": "c274b4396b73d076e38cb79a0799c943", "text": "This paper addresses the development of a model that reproduces the dynamic behaviour of a redundant, 7 degrees of freedom robotic manipulator, namely the Kuka Lightweight Robot IV, in the Robotic Surgery Laboratory of the Instituto Superior Técnico. For this purpose, the control architecture behind the Lightweight Robot (LWR) is presented, as well as, the joint and the Cartesian level impedance control aspects. Then, the manipulator forward and inverse kinematic models are addressed, in which the inverse kinematics relies on the Closed Loop Inverse Kinematic method (CLIK). Redundancy resolution methods are used to ensure that the joint angle values remain bounded considering their physical limits. The joint level model is the first presented, followed by the Cartesian level model. The redundancy inherent to the Cartesian model is compensated by a null space controller, developed by employing the impedance superposition method. Finally, the effect of possible faults occurring in the system are simulated using the derived model.", "title": "" }, { "docid": "523677ed6d482ab6551f6d87b8ad761e", "text": "To enable information integration, schema matching is a critical step for discovering semantic correspondences of attributes across heterogeneous sources. While complex matchings are common, because of their far more complex search space, most existing techniques focus on simple 1:1 matchings. To tackle this challenge, this article takes a conceptually novel approach by viewing schema matching as correlation mining, for our task of matching Web query interfaces to integrate the myriad databases on the Internet. On this “deep Web ” query interfaces generally form complex matchings between attribute groups (e.g., {author} corresponds to {first name, last name} in the Books domain). We observe that the co-occurrences patterns across query interfaces often reveal such complex semantic relationships: grouping attributes (e.g., {first name, last name}) tend to be co-present in query interfaces and thus positively correlated. In contrast, synonym attributes are negatively correlated because they rarely co-occur. This insight enables us to discover complex matchings by a correlation mining approach. In particular, we develop the DCM framework, which consists of data preprocessing, dual mining of positive and negative correlations, and finally matching construction. We evaluate the DCM framework on manually extracted interfaces and the results show good accuracy for discovering complex matchings. Further, to automate the entire matching process, we incorporate automatic techniques for interface extraction. Executing the DCM framework on automatically extracted interfaces, we find that the inevitable errors in automatic interface extraction may significantly affect the matching result. To make the DCM framework robust against such “noisy” schemas, we integrate it with a novel “ensemble” approach, which creates an ensemble of DCM matchers, by randomizing the schema data into many trials and aggregating their ranked results by taking majority voting. As a principled basis, we provide analytic justification of the robustness of the ensemble approach. 
Empirically, our experiments show that the “ensemblization” indeed significantly boosts the matching accuracy, over automatically extracted and thus noisy schema data. By employing the DCM framework with the ensemble approach, we thus complete an automatic process of matchings Web query interfaces.", "title": "" }, { "docid": "eacfd15ac85517311bca0c3706fc55d9", "text": "Numerous applications require a self-contained personal navigation system that works in indoor and outdoor environments, does not require any infrastructure support, and is not susceptible to jamming. Posture tracking with an array of inertial/magnetic sensors attached to individual human limb segments has been successfully demonstrated. The \"sourceless\" nature of this technique makes possible full body posture tracking in an area of unlimited size with no supporting infrastructure. Such sensor modules contain three orthogonally mounted angular rate sensors, three orthogonal linear accelerometers and three orthogonal magnetometers. This paper describes a method for using accelerometer data combined with orientation estimates from the same modules to calculate position during walking and running. The periodic nature of these motions includes short periods of zero foot velocity when the foot is in contact with the ground. This pattern allows for precise drift error correction. Relative position is calculated through double integration of drift corrected accelerometer data. Preliminary experimental results for various types of motion including walking, side stepping, and running document accuracy of distance and position estimates.", "title": "" }, { "docid": "704c62beaf6b9b09265c0daacde69abc", "text": "This paper investigates discrimination capabilities in the texture of fundus images to differentiate between pathological and healthy images. For this purpose, the performance of local binary patterns (LBP) as a texture descriptor for retinal images has been explored and compared with other descriptors such as LBP filtering and local phase quantization. The goal is to distinguish between diabetic retinopathy (DR), age-related macular degeneration (AMD), and normal fundus images analyzing the texture of the retina background and avoiding a previous lesion segmentation stage. Five experiments (separating DR from normal, AMD from normal, pathological from normal, DR from AMD, and the three different classes) were designed and validated with the proposed procedure obtaining promising results. For each experiment, several classifiers were tested. An average sensitivity and specificity higher than 0.86 in all the cases and almost of 1 and 0.99, respectively, for AMD detection were achieved. These results suggest that the method presented in this paper is a robust algorithm for describing retina texture and can be useful in a diagnosis aid system for retinal disease screening.", "title": "" }, { "docid": "2951dc312799671c8feaf6d5086d5564", "text": "There has been significant interest of late in generating behavior of agents that is interpretable to the human (observer) in the loop. However, the work in this area has typically lacked coherence on the topic, with proposed solutions for “explicable”, “legible”, “predictable” and “transparent” planning with overlapping, and sometimes conflicting, semantics all aimed at some notion of understanding what intentions the observer will ascribe to an agent by observing its behavior. 
This is also true for the recent works on “security” and “privacy” of plans which are also trying to answer the same question, but from the opposite point of view – i.e. when the agent is trying to hide instead of reveal its intentions. This paper attempts to provide a workable taxonomy of relevant concepts in this exciting and emerging field of inquiry.", "title": "" }, { "docid": "65aa93b6ca41fe4ca54a4a7dee508db2", "text": "The field of deep learning has seen significant advancement in recent years. However, much of the existing work has been focused on real-valued numbers. Recent work has shown that a deep learning system using the complex numbers can be deeper for a fixed parameter budget compared to its real-valued counterpart. In this work, we explore the benefits of generalizing one step further into the hyper-complex numbers, quaternions specifically, and provide the architecture components needed to build deep quaternion networks. We develop the theoretical basis by reviewing quaternion convolutions, developing a novel quaternion weight initialization scheme, and developing novel algorithms for quaternion batch-normalization. These pieces are tested in a classification model by end-to-end training on the CIFAR −10 and CIFAR −100 data sets and a segmentation model by end-to-end training on the KITTI Road Segmentation data set. These quaternion networks show improved convergence compared to real-valued and complex-valued networks, especially on the segmentation task, while having fewer parameters.", "title": "" }, { "docid": "e76a82bcf7ff1a151c438d16640ae286", "text": "Bioinformaticists use the Basic Local Alignment Search Tool (BLAST) to characterize an unknown sequence by comparing it against a database of known sequences, thus detecting evolutionary relationships and biological properties. mpiBLAST is a widely-used, high-performance, open-source parallelization of BLAST that runs on a computer cluster delivering super-linear speedups. However, the Achilles heel of mpiBLAST is its lack of modularity, thus adversely affecting maintainability and extensibility. Alleviating this shortcoming requires an architectural refactoring to improve maintenance and extensibility while preserving high performance. Toward that end, this paper evaluates five different software architectures and details how each satisfies our design objectives. In addition, we introduce a novel approach to using mixin layers to enable mixing-and-matching of modules in constructing sequence-search applications for a variety of high-performance computing systems. Our design, which we call \"mixin layers with refined roles\", utilizes mixin layers to separate functionality into complementary modules and the refined roles in each layer improve the inherently modular design by precipitating flexible and structured parallel development, a necessity for an open-source application. We believe that this new software architecture for mpiBLAST-2.0 will benefit both the users and developers of the package and that our evaluation of different software architectures will be of value to other software engineers faced with the challenges of creating maintainable and extensible, high-performance, bioinformatics software.", "title": "" }, { "docid": "9292f1925de5d6df9eb89b2157842e5c", "text": "According to Breast Cancer Institute (BCI), Breast Cancer is one of the most dangerous type of diseases that is very effective for women in the world. As per clinical expert detecting this cancer in its first stage helps in saving lives. 
Cancer.net offers individualized guides for more than 120 types of cancer and related hereditary syndromes. Machine learning techniques are most commonly used for detecting breast cancer. In this paper we propose an adaptive ensemble voting method for diagnosing breast cancer using the Wisconsin Breast Cancer database. The aim of this work is to compare and explain how the ANN and logistic algorithm provide a better solution when they work with ensemble machine learning algorithms for diagnosing breast cancer, even when the variables are reduced. In this paper we used the Wisconsin Diagnosis Breast Cancer dataset. When compared to related work from the literature, it is shown that the ANN approach with the logistic algorithm achieves 98.50% accuracy, outperforming the other machine learning algorithms.", "title": "" }, { "docid": "2e0262fce0a7ba51bd5ccf9e1397b0ca", "text": "We present a topology detection method combining smart meter sensor information and sparse line measurements. The problem is formulated as a spanning tree identification problem over a graph given partial nodal and edge power flow information. In the deterministic case of known nodal power consumption and edge power flow we provide a sensor placement criterion which guarantees correct identification of all spanning trees. We then present a detection method which is polynomial in complexity to the size of the graph. In the stochastic case where loads are given by forecasts derived from delayed smart meter data, we provide a combinatorial complexity MAP detector and a polynomial complexity approximate MAP detector which is shown to work near optimum in all numerical cases.", "title": "" } ]
scidocsrr
154f8e8e4ee64ce4143eeda45cd842ba
Who are the Devils Wearing Prada in New York City?
[ { "docid": "b17fdc300edc22ab855d4c29588731b2", "text": "Describing clothing appearance with semantic attributes is an appealing technique for many important applications. In this paper, we propose a fully automated system that is capable of generating a list of nameable attributes for clothes on human body in unconstrained images. We extract low-level features in a pose-adaptive manner, and combine complementary features for learning attribute classifiers. Mutual dependencies between the attributes are then explored by a Conditional Random Field to further improve the predictions from independent classifiers. We validate the performance of our system on a challenging clothing attribute dataset, and introduce a novel application of dressing style analysis that utilizes the semantic attributes produced by our system.", "title": "" } ]
[ { "docid": "8dfa68e87eee41dbef8e137b860e19cc", "text": "We investigate regrets associated with users' posts on a popular social networking site. Our findings are based on a series of interviews, user diaries, and online surveys involving 569 American Facebook users. Their regrets revolved around sensitive topics, content with strong sentiment, lies, and secrets. Our research reveals several possible causes of why users make posts that they later regret: (1) they want to be perceived in favorable ways, (2) they do not think about their reason for posting or the consequences of their posts, (3) they misjudge the culture and norms within their social circles, (4) they are in a \"hot\" state of high emotion when posting, or under the influence of drugs or alcohol, (5) their postings are seen by an unintended audience, (6) they do not foresee how their posts could be perceived by people within their intended audience, and (7) they misunderstand or misuse the Facebook platform. Some reported incidents had serious repercussions, such as breaking up relationships or job losses. We discuss methodological considerations in studying negative experiences associated with social networking posts, as well as ways of helping users of social networking sites avoid such regrets.", "title": "" }, { "docid": "d142ad76c2c5bb1565ef539188ce7d43", "text": "The recent discovery of new classes of small RNAs has opened unknown territories to explore new regulations of physiopathological events. We have recently demonstrated that RNY (or Y RNA)-derived small RNAs (referred to as s-RNYs) are an independent class of clinical biomarkers to detect coronary artery lesions and are associated with atherosclerosis burden. Here, we have studied the role of s-RNYs in human and mouse monocytes/macrophages and have shown that in lipid-laden monocytes/macrophages s-RNY expression is timely correlated to the activation of both NF-κB and caspase 3-dependent cell death pathways. Loss- or gain-of-function experiments demonstrated that s-RNYs activate caspase 3 and NF-κB signaling pathways ultimately promoting cell death and inflammatory responses. As, in atherosclerosis, Ro60-associated s-RNYs generated by apoptotic macrophages are released in the blood of patients, we have investigated the extracellular function of the s-RNY/Ro60 complex. Our data demonstrated that s-RNY/Ro60 complex induces caspase 3-dependent cell death and NF-κB-dependent inflammation, when added to the medium of cultured monocytes/macrophages. Finally, we have shown that s-RNY function is mediated by Toll-like receptor 7 (TLR7). Indeed using chloroquine, which disrupts signaling of endosome-localized TLRs 3, 7, 8 and 9 or the more specific TLR7/9 antagonist, the phosphorothioated oligonucleotide IRS954, we blocked the effect of either intracellular or extracellular s-RNYs. These results position s-RNYs as relevant novel functional molecules that impacts on macrophage physiopathology, indicating their potential role as mediators of inflammatory diseases, such as atherosclerosis.", "title": "" }, { "docid": "7b6482f295304b2a7a4c6082d0300dc9", "text": "In this paper we proposed SVM algorithm for MNIST dataset with fringe and its complementary version, inverse fringe as feature for SVM. MNIST data-set is consists of 60000 examples of training set and 10000 examples of test set. 
In our experiments we started with using fringe distance map as feature and found that the accuracy of system on trained data is 99.99% and on test data it is 97.14%, using inverse fringe distance map as feature and found that the accuracy of system on trained data is 99.92% and on test data is 97.72% and using combination of above two feature as feature and found that the accuracy of system on trained data is 100 and on test data is 97.55%.", "title": "" }, { "docid": "d21213e0dbef657d5e7ec8689fe427ed", "text": "Cutaneous infections due to Listeria monocytogenes are rare. Typically, infections manifest as nonpainful, nonpruritic, self-limited, localized, papulopustular or vesiculopustular eruptions in healthy persons. Most cases follow direct inoculation of the skin in veterinarians or farmers who have exposure to animal products of conception. Less commonly, skin lesions may arise from hematogenous dissemination in compromised hosts with invasive disease. Here, we report the first case in a gardener that occurred following exposure to soil and vegetation.", "title": "" }, { "docid": "cb4bf3bc76586e455dc863bc1ca2800e", "text": "Client-side apps (e.g., mobile or in-browser) need cloud data to be available in a local cache, for both reads and updates. For optimal user experience and developer support, the cache should be consistent and fault-tolerant. In order to scale to high numbers of unreliable and resource-poor clients, and large database, the system needs to use resources sparingly. The SwiftCloud distributed object database is the first to provide fast reads and writes via a causally-consistent client-side local cache backed by the cloud. It is thrifty in resources and scales well, thanks to consistent versioning provided by the cloud, using small and bounded metadata. It remains available during faults, switching to a different data centre when the current one is not responsive, while maintaining its consistency guarantees. This paper presents the SwiftCloud algorithms, design, and experimental evaluation. It shows that client-side apps enjoy the high performance and availability, under the same guarantees as a remote cloud data store, at a small cost.", "title": "" }, { "docid": "d676598b1afe341079b4705284d6a911", "text": "Quality of underwater image is poor due to the environment of water medium. The physical property of water medium causes attenuation of light travels through the water medium, resulting in low contrast, blur, inhomogeneous lighting, and color diminishing of the underwater images. This paper extends the methods of enhancing the quality of underwater image. The proposed method consists of two stages. At the first stage, the contrast correction technique is applied to the image, where the image is applied with the modified Von Kreis hypothesis and stretching the image into two different intensity images at the average value with respects to Rayleigh distribution. At the second stage, the color correction technique is applied to the image where the image is first converted into hue-saturation-value (HSV) color model. The modification of the color component increases the image color performance. 
Qualitative and quantitative analyses indicate that the proposed method outperforms other state-of-the-art methods in terms of contrast, details, and noise reduction.", "title": "" }, { "docid": "24625cbc472bf376b44ac6e962696d0b", "text": "Although deep neural networks have made tremendous progress in the area of multimedia representation, training neural models requires a large amount of data and time. It is well known that utilizing trained models as initial weights often achieves lower training error than neural networks that are not pre-trained. A fine-tuning step helps to both reduce the computational cost and improve the performance. Therefore, sharing trained models has been very important for the rapid progress of research and development. In addition, trained models could be important assets for the owner(s) who trained them; hence, we regard trained models as intellectual property. In this paper, we propose a digital watermarking technology for ownership authorization of deep neural networks. First, we formulate a new problem: embedding watermarks into deep neural networks. We also define requirements, embedding situations, and attack types on watermarking in deep neural networks. Second, we propose a general framework for embedding a watermark in model parameters, using a parameter regularizer. Our approach does not impair the performance of networks into which a watermark is placed because the watermark is embedded while training the host network. Finally, we perform comprehensive experiments to reveal the potential of watermarking deep neural networks as the basis of this new research effort. We show that our framework can embed a watermark during the training of a deep neural network from scratch, and during fine-tuning and distilling, without impairing its performance. The embedded watermark does not disappear even after fine-tuning or parameter pruning; the watermark remains complete even after 65% of parameters are pruned.", "title": "" }, { "docid": "c99fd51e8577a5300389c565aebebdb3", "text": "Face Detection and Recognition is an important area in the field of substantiation. Maintenance of records of students along with monitoring of class attendance is an area of administration that requires significant amount of time and efforts for management. Automated Attendance Management System performs the daily activities of attendance analysis, for which face recognition is an important aspect. The prevalent techniques and methodologies for detecting and recognizing faces by using feature extraction tools like mean, standard deviation etc fail to overcome issues such as scaling, pose, illumination, variations. The proposed system provides features such as detection of faces, extraction of the features, detection of extracted features, and analysis of student’s attendance. The proposed system integrates techniques such as Principal Component Analysis (PCA) for feature extraction and voila-jones for face detection &Euclidian distance classifier. Faces are recognized using PCA, using the database that contains images of students and is used to recognize student using the captured image. Better accuracy is attained in results and the system takes into account the changes that occurs in the face over the period of time.", "title": "" }, { "docid": "7e10aa210d6985d757a21b8b6c49ae53", "text": "Haptic devices for computers and video-game consoles aim to reproduce touch and to engage the user with `force feedback'. 
Although physical touch is often associated with proximity and intimacy, technologies of touch can reproduce such sensations over a distance, allowing intricate and detailed operations to be conducted through a network such as the Internet. The `virtual handshake' between Boston and London in 2002 is given as an example. This paper is therefore a critical investigation into some technologies of touch, leading to observations about the sociospatial framework in which this technological touching takes place. Haptic devices have now become routinely included with videogame consoles, and have started to be used in computer-aided design and manufacture, medical simulation, and even the cybersex industry. The implications of these new technologies are enormous, as they remould the human ^ computer interface from being primarily audiovisual to being more truly multisensory, and thereby enhance the sense of `presence' or immersion. But the main thrust of this paper is the development of ideas of presence over a large distance, and how this is enhanced by the sense of touch. By using the results of empirical research, including interviews with key figures in haptics research and engineering and personal experience of some of the haptic technologies available, I build up a picture of how `presence', `copresence', and `immersion', themselves paradoxically intangible properties, are guiding the design, marketing, and application of haptic devices, and the engendering and engineering of a set of feelings of interacting with virtual objects, across a range of distances. DOI:10.1068/d394t", "title": "" }, { "docid": "0e482ebd5fa8f8f3fc67b01e9e6ee4bc", "text": "Lung cancer is one of the most deadly diseases. It has a high death rate and its incidence rate has been increasing all over the world. Lung cancer appears as a solitary nodule in chest x-ray radiograph (CXR). Therefore, lung nodule detection in CXR could have a significant impact on early detection of lung cancer. Radiologists define a lung nodule in CXR as “solitary white nodule-like blob.” However, the solitary feature has not been employed for lung nodule detection before. In this paper, a solitary feature-based lung nodule detection method was proposed. We employed stationary wavelet transform and convergence index filter to extract the texture features and used AdaBoost to generate white nodule-likeness map. A solitary feature was defined to evaluate the isolation degree of candidates. Both the isolation degree and the white nodule likeness were used as final evaluation of lung nodule candidates. The proposed method shows better performance and robustness than those reported in previous research. More than 80% and 93% of lung nodules in the lung field in the Japanese Society of Radiological Technology (JSRT) database were detected when the false positives per image were two and five, respectively. The proposed approach has the potential of being used in clinical practice.", "title": "" }, { "docid": "61309b5f8943f3728f714cd40f260731", "text": "Article history: Received 4 January 2011 Received in revised form 1 August 2011 Accepted 13 August 2011 Available online 15 September 2011 Advertising media are a means of communication that creates different marketing and communication results among consumers. Over the years, newspaper, magazine, TV, and radio have provided a one-way media where information is broadcast and communicated. Due to the widespread application of the Internet, advertising has entered into an interactive communications mode. 
In the advent of 3G broadband mobile communication systems and smartphone devices, consumers' preferences can be pre-identified and advertising messages can therefore be delivered to consumers in a multimedia format at the right time and at the right place with the right message. In light of this new advertisement possibility, designing personalized mobile advertising to meet consumers' needs becomes an important issue. This research uses the fuzzy Delphi method to identify the key personalized attributes in a personalized mobile advertising message for different products. Results of the study identify six important design attributes for personalized advertisements: price, preference, promotion, interest, brand, and type of mobile device. As personalized mobile advertising becomes more integrated in people's daily activities, its pros and cons and social impact are also discussed. The research result can serve as a guideline for the key parties in mobile marketing industry to facilitate the development of the industry and ensure that advertising resources are properly used. © 2011 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "115028551249d3cb1accbd0841b9930a", "text": "To study the Lombard reflex, more realistic databases representing real-world conditions need to be recorded and analyzed. In this paper we 1) summarize a procedure to record Lombard data which provides a good approximation of realistic conditions, 2) present an analysis per class of sounds for duration and energy of words recorded while subjects are listening to noise through open-ear headphones a) when speakers are in communication with a recognition device and b) when reading a list, and 3) report on the influence of speaking style on speakerdependent and speaker-independent experiments. This paper extends a previous study aimed at analyzing the influence of the communication factor on the Lombard reflex. We also show evidence that it is difficult to separate the speaker from the environment stressor (in this case the noise) when studying the Lombard reflex. The main conclusion of our pilot study is that the communication factor should not be neglected because it strongly influences the Lombard reflex.", "title": "" }, { "docid": "fb915584f23482986e672b1a38993ca1", "text": "We propose an efficient distributed online learning protocol for low-latency real-time services. It extends a previously presented protocol to kernelized online learners that represent their models by a support vector expansion. While such learners often achieve higher predictive performance than their linear counterparts, communicating the support vector expansions becomes inefficient for large numbers of support vectors. The proposed extension allows for a larger class of online learning algorithms—including those alleviating the problem above through model compression. In addition, we characterize the quality of the proposed protocol by introducing a novel criterion that requires the communication to be bounded by the loss suffered.", "title": "" }, { "docid": "843e7bfe22d8b93852374dde8715ca42", "text": "In this paper, we formalize the idea behind capsule nets of using a capsule vector rather than a neuron activation to predict the label of samples. To this end, we propose to learn a group of capsule subspaces onto which an input feature vector is projected. Then the lengths of resultant capsules are used to score the probability of belonging to different classes. 
We train such a Capsule Projection Network (CapProNet) by learning an orthogonal projection matrix for each capsule subspace, and show that each capsule subspace is updated until it contains input feature vectors corresponding to the associated class. We will also show that the capsule projection can be viewed as normalizing the multiple columns of the weight matrix simultaneously to form an orthogonal basis, which makes it more effective in incorporating novel components of input features to update capsule representations. In other words, the capsule projection can be viewed as a multi-dimensional weight normalization in capsule subspaces, where the conventional weight normalization is simply a special case of the capsule projection onto 1D lines. Only a small negligible computing overhead is incurred to train the network in low-dimensional capsule subspaces or through an alternative hyper-power iteration to estimate the normalization matrix. Experiment results on image datasets show the presented model can greatly improve the performance of the state-of-the-art ResNet backbones by 10− 20% and that of the Densenet by 5− 7% respectively at the same level of computing and memory expenses. The CapProNet establishes the competitive state-of-the-art performance for the family of capsule nets by significantly reducing test errors on the benchmark datasets.", "title": "" }, { "docid": "aca800983a0e24aa663c09cccb91f02a", "text": "A multiple model adaptive estimator (MMAE) [1, 2, 6, 8, 9, 11] consists of a bank of parallel Kalman filters, each with a different model, and a hypothesis testing algorithm as shown in Fig. 1. Each of the internal models of the Kalman filters can be represented by a discrete value of a parameter vector (ak; k= 1,2, : : : ,K). The Kalman filters are provided a measurement vector (z) and the input vector (u), and produce a state estimate (x̂k) and a residual (rk). The hypothesis testing algorithm uses the residuals to compute conditional probabilities (pk) of the various hypotheses that are modeled in the Kalman filters, conditioned on the history of measurements received up to that time, and to compute an estimate of the true parameter vector (â). The conventional MMAE computes conditional probabilities (pk) in a manner that exploits three of four characteristics of Kalman filter residuals that are based on a correctly modeled hypothesis—that they should be Gaussian, zero-mean, and of computable covariance—but does not exploit the fact that they should also be white. The algorithm developed herein addresses this directly, yielding a complement to the conventional MMAE. One application of MMAE is flight control sensor/actuator failure detection and identification, where each Kalman filter has a different failure status model (ak) that it uses to form the state estimate (x̂k) and the residual (rk). The hypothesis testing algorithm assigns conditional probabilities (pk) to each of the hypotheses that were used to form the Kalman filter models. These conditional probabilities indicate the relative correctness of the various filter models, and can be used to select the best estimate of the true system failure status, weight the individual state estimates appropriately, and form a probability-weighted average state estimate (x̂MMAE). A primary objection to implementing an MMAE-based (or other) failure detection algorithm is the need to dither the system constantly to enhance failure identifiability. 
The MMAE compares the magnitudes of the residuals (appropriately scaled to account for various uncertainties and noises) from the various filters and chooses the hypothesis that corresponds to the residual that has a history of having smallest (scaled) magnitude. Large residuals must be produced by the filters with models that are incorrect to be able to discount these incorrect hypotheses. The residual is the difference between the measurement of the system output and the filter’s prediction of what that measurement should be, based on the filter-assumed system model. Therefore, to produce the needed large residuals in the incorrect filters, we need to produce a history of sufficiently large system outputs, so we need to dither the system constantly and thereby", "title": "" }, { "docid": "c5cfe386f6561eab1003d5572443612e", "text": "Agri-Food is the largest manufacturing sector in the UK. It supports a food chain that generates over {\\pounds}108bn p.a., with 3.9m employees in a truly international industry and exports {\\pounds}20bn of UK manufactured goods. However, the global food chain is under pressure from population growth, climate change, political pressures affecting migration, population drift from rural to urban regions and the demographics of an aging global population. These challenges are recognised in the UK Industrial Strategy white paper and backed by significant investment via a Wave 2 Industrial Challenge Fund Investment (\"Transforming Food Production: from Farm to Fork\"). Robotics and Autonomous Systems (RAS) and associated digital technologies are now seen as enablers of this critical food chain transformation. To meet these challenges, this white paper reviews the state of the art in the application of RAS in Agri-Food production and explores research and innovation needs to ensure these technologies reach their full potential and deliver the necessary impacts in the Agri-Food sector.", "title": "" }, { "docid": "c5d06fe50c16278943fe1df7ad8be888", "text": "Current main memory organizations in embedded and mobile application systems are DRAM dominated. The ever-increasing gap between today's processor and memory speeds makes the DRAM subsystem design a major aspect of computer system design. However, the limitations to DRAM scaling and other challenges like refresh provide undesired trade-offs between performance, energy and area to be made by architecture designers. Several emerging NVM options are being explored to at least partly remedy this but today it is very hard to assess the viability of these proposals because the simulations are not fully based on realistic assumptions on the NVM memory technologies and on the system architecture level. In this paper, we propose to use realistic, calibrated STT-MRAM models and a well calibrated cross-layer simulation and exploration framework, named SEAT, to better consider technologies aspects and architecture constraints. We will focus on general purpose/mobile SoC multi-core architectures. We will highlight results for a number of relevant benchmarks, representatives of numerous applications based on actual system architecture. 
The most energy efficient STT-MRAM based main memory proposal provides an average energy consumption reduction of 27% at the cost of 2x the area and the least energy efficient STT-MRAM based main memory proposal provides an average energy consumption reduction of 8% at the around the same area or lesser when compared to DRAM.", "title": "" }, { "docid": "eb6f055399614a4e0876ffefae8d6a28", "text": "For accurate recognition of protein folds, a deep learning network method (DN-Fold) was developed to predict if a given query-template protein pair belongs to the same structural fold. The input used stemmed from the protein sequence and structural features extracted from the protein pair. We evaluated the performance of DN-Fold along with 18 different methods on Lindahl's benchmark dataset and on a large benchmark set extracted from SCOP 1.75 consisting of about one million protein pairs, at three different levels of fold recognition (i.e., protein family, superfamily, and fold) depending on the evolutionary distance between protein sequences. The correct recognition rate of ensembled DN-Fold for Top 1 predictions is 84.5%, 61.5%, and 33.6% and for Top 5 is 91.2%, 76.5%, and 60.7% at family, superfamily, and fold levels, respectively. We also evaluated the performance of single DN-Fold (DN-FoldS), which showed the comparable results at the level of family and superfamily, compared to ensemble DN-Fold. Finally, we extended the binary classification problem of fold recognition to real-value regression task, which also show a promising performance. DN-Fold is freely available through a web server at http://iris.rnet.missouri.edu/dnfold.", "title": "" }, { "docid": "7a430880e5274fbb9d8cf4085920a5b6", "text": "Human beings are biologically adapted for culture in ways that other primates are not. The difference can be clearly seen when the social learning skills of humans and their nearest primate relatives are systematically compared. The human adaptation for culture begins to make itself manifest in human ontogeny at around 1 year of age as human infants come to understand other persons as intentional agents like the self and so engage in joint attentional interactions with them. This understanding then enables young children (a) to employ some uniquely powerful forms of cultural learning to acquire the accumulated wisdom of their cultures, especially as embodied in language, and also (b) to comprehend their worlds in some uniquely powerful ways involving perspectivally based symbolic representations. Until fairly recently, the study of children's cognitive development was dominated by the theory of Jean Piaget. Piaget's theory was detailed , elaborate, comprehensive, and, in many important respects, wrong. In attempting to fill the theoretical vacuum created by Piaget's demise, developmental psychologists have sorted themselves into two main groups. In the first group are those theorists who emphasize biology. These neo-nativists believe that organic evolution has provided human beings with some specific domains of knowledge of the world and its workings and that this knowledge is best characterized as \" innate. \" Such domains include, for example , mathematics, language, biology , and psychology. In the other group are theorists who have focused on the cultural dimension of human cognitive development. 
These cultural psychologists begin with the fact that human children grow into cognitively competent adults in the context of a structured social world full of material and symbolic artifacts such as tools and language, structured social interactions such as rituals and games, and cultural institutions such as families and religions. The claim is that the cultural context is not just a facilitator or motivator for cognitive development, but rather a unique \"ontogenetic niche\" (i.e., a unique context for development) that actually structures human cognition in fundamental ways. There are many thoughtful scientists in each of these theoretical camps. This suggests the possibility that each has identified some aspects of the overall theory that will be needed to go beyond Piaget and incorporate adequately both the cultural and the biological dimensions of human cognitive development. What is needed to achieve this aim, in my opinion, is (a) an evolutionary approach to the human …", "title": "" } ]
scidocsrr
c79fad33fdeb2a2a15da27e3f8f904cf
V3D enables real-time 3D visualization and quantitative analysis of large-scale biological image data sets
[ { "docid": "e380710014dd33734636f077a59f1b62", "text": "Since the work of Golgi and Cajal, light microscopy has remained a key tool for neuroscientists to observe cellular properties. Ongoing advances have enabled new experimental capabilities using light to inspect the nervous system across multiple spatial scales, including ultrastructural scales finer than the optical diffraction limit. Other progress permits functional imaging at faster speeds, at greater depths in brain tissue, and over larger tissue volumes than previously possible. Portable, miniaturized fluorescence microscopes now allow brain imaging in freely behaving mice. Complementary progress on animal preparations has enabled imaging in head-restrained behaving animals, as well as time-lapse microscopy studies in the brains of live subjects. Mouse genetic approaches permit mosaic and inducible fluorescence-labeling strategies, whereas intrinsic contrast mechanisms allow in vivo imaging of animals and humans without use of exogenous markers. This review surveys such advances and highlights emerging capabilities of particular interest to neuroscientists.", "title": "" } ]
[ { "docid": "9c5b908801357f296a16558284a5b3ae", "text": "People constantly make snap judgments about objects encountered in the environment. Such rapid judgments must be based on the physical properties of the targets, but the nature of these properties is yet unknown. We hypothesized that sharp transitions in contour might convey a sense of threat, and therefore trigger a negative bias. Our results were consistent with this hypothesis. The type of contour a visual object possesses--whether the contour is sharp angled or curved--has a critical influence on people's attitude toward that object.", "title": "" }, { "docid": "5523f345b8509e8636374d14ac0cf9de", "text": "In this paper we discuss and create a MQTT based Secured home automation system, by using mentioned sensors and using Raspberry pi B+ model as the network gateway, here we have implemented MQTT Protocol for transferring & receiving sensor data and finally getting access to those sensor data, also we have implemented ACL (access control list) to provide encryption method for the data and finally monitoring those data on webpage or any network devices. R-pi has been used as a gateway or the main server in the whole system, which has various sensor connected to it via wired or wireless communication.", "title": "" }, { "docid": "f3860c0ed0803759e44133a0110a60bb", "text": "Using comment information available from Digg we define a co-participation network between users. We focus on the analysis of this implicit network, and study the behavioral characteristics of users. Using an entropy measure, we infer that users at Digg are not highly focused and participate across a wide range of topics. We also use the comment data and social network derived features to predict the popularity of online content linked at Digg using a classification and regression framework. We show promising results for predicting the popularity scores even after limiting our feature extraction to the first few hours of comment activity that follows a Digg submission.", "title": "" }, { "docid": "b206560e0c9f3e59c8b9a8bec6f12462", "text": "A symmetrical microstrip directional coupler design using the synthesis technique without prior knowledge of the physical geometry of the directional coupler is analytically given. The introduced design method requires only the information of the port impedances, the coupling level, and the operational frequency. The analytical results are first validated by using a planar electromagnetic simulation tool and then experimentally verified. The error between the experimental and analytical results is found to be within 3% for the worst case. The design charts that give all the physical dimensions, including the length of the directional coupler versus frequency and different coupling levels, are given for alumina, Teflon, RO4003, FR4, and RF-60, which are widely used in microwave applications. The complete design of symmetrical two-line microstrip directional couplers can be obtained for the first time using our results in this paper.", "title": "" }, { "docid": "c0fc94aca86a6aded8bc14160398ddea", "text": "THE most persistent problems of recall all concern the ways in which past experiences and past reactions are utilised when anything is remembered. From a general point of view it looks as if the simplest explanation available is to suppose that when any specific event occurs some trace, or some group of traces, is made and stored up in the organism or in the mind. 
Later, an immediate stimulus re-excites the trace, or group of traces, and, provided a further assumption is made to the effect that the trace somehow carries with it a temporal sign, the re-excitement appears to be equivalent to recall. There is, of course, no direct evidence for such traces, but the assumption at first sight seems to be a very simple one, and so it has commonly been made.", "title": "" }, { "docid": "60513bd4ef2e25915c72674734e3eda2", "text": "InT. Gilovich, D. Griffin, & D. Kahneman (Eds.), Heuristics and biases: The psychology of intuitive judgment (pp. 397-420). New York: Cambridge University Press,2002. This chapter introduces a theoretical framework that describes the importance of affect in guiding judgments and decisions. As used here,. affect means the specific quality of \"goodness\" or \"badness\" (1) experienced as a feeling state (with or without consciousness) and (2) demarcating a positive or negative quality of a stimulus. Affective responses occur rapidly and automatically note how quickly you sense the feelings as.sociated with the stimulus words treasure or hate. We argue that reliance on such feelings can be characterized as the affect heuristic. In this chapter, we trace the development of the affect heuristic across a variety of research paths followed by ourselves and many others. We also discuss some of the important practical implications resulting from ways that this heuristic impacts our daily lives.", "title": "" }, { "docid": "62d1574e23fcf07befc54838ae2887c1", "text": "Digital images are widely used and numerous application in different scientific fields use digital image processing algorithms where image segmentation is a common task. Thresholding represents one technique for solving that task and Kapur's and Otsu's methods are well known criteria often used for selecting thresholds. Finding optimal threshold values represents a hard optimization problem and swarm intelligence algorithms have been successfully used for solving such problems. In this paper we adjusted recent elephant herding optimization algorithm for multilevel thresholding by Kapur's and Otsu's method. Performance was tested on standard benchmark images and compared with four other swarm intelligence algorithms. Elephant herding optimization algorithm outperformed other approaches from literature and it was more robust.", "title": "" }, { "docid": "2a34800bc275f062f820c0eb4597d297", "text": "Construction sites are dynamic and complicated systems. The movement and interaction of people, goods and energy make construction safety management extremely difficult. Due to the ever-increasing amount of information, traditional construction safety management has operated under difficult circumstances. As an effective way to collect, identify and process information, sensor-based technology is deemed to provide new generation of methods for advancing construction safety management. It makes the real-time construction safety management with high efficiency and accuracy a reality and provides a solid foundation for facilitating its modernization, and informatization. Nowadays, various sensor-based technologies have been adopted for construction safety management, including locating sensor-based technology, vision-based sensing and wireless sensor networks. 
This paper provides a systematic and comprehensive review of previous studies in this field to acknowledge useful findings, identify the research gaps and point out future research directions.", "title": "" }, { "docid": "78454419cd378a8f6d4417e4063835f5", "text": "We present and evaluate a method for automatically detecting sentence fragments in English texts written by non-native speakers. Our method combines syntactic parse tree patterns and parts-of-speech information produced by a tagger to detect this phenomenon. When evaluated on a corpus of authentic learner texts, our best model achieved a precision of 0.84 and a recall of 0.62, a statistically significant improvement over baselines using non-parse features, as well as a popular grammar checker.", "title": "" }, { "docid": "c79038936fa81d7036d00314d7405e0a", "text": "This paper proposes a new current controller with modified decoupling and anti-windup schemes for a permanent magnet synchronous motor (PMSM) drive. In designing the controller, an improved voltage model, which is different from existing models in that it reflects all the nonlinear characteristics of the motor, is considered. In an actual PMSM, unintentional distortion occurs in inductance and flux due to magnetic saturation and structural asymmetry. In this paper, the effects of such distortion on voltage ripple are analyzed and the effect of voltage distortion on current control is analyzed in detail. Based on the voltage model, a decoupling controller is developed to effectively separate the d-q current regulators. The controller produces compensation voltages using the current error of the other axis. In addition, an anti-windup controller is designed that takes into account not only the integrator output in PI controllers but also the integrator output in decoupling controllers. The proposed current controller aimed at compensating all nonlinearities of PMSM enables high-performance operation of the motor. The feasibility of the proposed current control scheme is verified by experimental results.", "title": "" }, { "docid": "aec560c27d4873674114bd5dd9d64625", "text": "Caches consume a significant amount of energy in modern microprocessors. To design an energy-efficient microprocessor, it is important to optimize cache energy consumption. 
This paper examines performance and power trade-offs in cache designs and the effectiveness of energy reduction for several novel cache design techniques targeted for low power.", "title": "" }, { "docid": "615dbb03f31acfce971a383fa54d7d12", "text": "Objectives\nTo introduce blockchain technologies, including their benefits, pitfalls, and the latest applications, to the biomedical and health care domains.\n\n\nTarget Audience\nBiomedical and health care informatics researchers who would like to learn about blockchain technologies and their applications in the biomedical/health care domains.\n\n\nScope\nThe covered topics include: (1) introduction to the famous Bitcoin crypto-currency and the underlying blockchain technology; (2) features of blockchain; (3) review of alternative blockchain technologies; (4) emerging nonfinancial distributed ledger technologies and applications; (5) benefits of blockchain for biomedical/health care applications when compared to traditional distributed databases; (6) overview of the latest biomedical/health care applications of blockchain technologies; and (7) discussion of the potential challenges and proposed solutions of adopting blockchain technologies in biomedical/health care domains.", "title": "" }, { "docid": "40b9004b6eb3cdbd8471df38f85d8f12", "text": "Indoor scene understanding is central to applications such as robot navigation and human companion assistance. Over the last years, data-driven deep neural networks have outperformed many traditional approaches thanks to their representation learning capabilities. One of the bottlenecks in training for better representations is the amount of available per-pixel ground truth data that is required for core scene understanding tasks such as semantic segmentation, normal prediction, and object boundary detection. To address this problem, a number of works proposed using synthetic data. However, a systematic study of how such synthetic data is generated is missing. In this work, we introduce a large-scale synthetic dataset with 500K physically-based rendered images from 45K realistic 3D indoor scenes. We study the effects of rendering methods and scene lighting on training for three computer vision tasks: surface normal prediction, semantic segmentation, and object boundary detection. This study provides insights into the best practices for training with synthetic data (more realistic rendering is worth it) and shows that pretraining with our new synthetic dataset can improve results beyond the current state of the art on all three tasks.", "title": "" }, { "docid": "a08fe0c015f5fc02b7654f3fd00fb599", "text": "Recently, there has been considerable interest in attribute based access control (ABAC) to overcome the limitations of the dominant access control models (i.e, discretionary-DAC, mandatory-MAC and role based-RBAC) while unifying their advantages. Although some proposals for ABAC have been published, and even implemented and standardized, there is no consensus on precisely what is meant by ABAC or the required features of ABAC. There is no widely accepted ABAC model as there are for DAC, MAC and RBAC. This paper takes a step towards this end by constructing an ABAC model that has “just sufficient” features to be “easily and naturally” configured to do DAC, MAC and RBAC. For this purpose we understand DAC to mean owner-controlled access control lists, MAC to mean lattice-based access control with tranquility and RBAC to mean flat and hierarchical RBAC. 
Our central contribution is to take a first cut at establishing formal connections between the three successful classical models and desired ABAC models.", "title": "" }, { "docid": "f8ba12d3fd6ebf65429a2ce5f5143dbd", "text": "The contour-guided color palette (CCP) is proposed for robust image segmentation. It efficiently integrates contour and color cues of an image. To find representative colors of an image, color samples along long contours between regions, similar in spirit to machine learning methodology that focus on samples near decision boundaries, are collected followed by the mean-shift (MS) algorithm in the sampled color space to achieve an image-dependent color palette. This color palette provides a preliminary segmentation in the spatial domain, which is further fine-tuned by post-processing techniques such as leakage avoidance, fake boundary removal, and small region mergence. Segmentation performances of CCP and MS are compared and analyzed. While CCP offers an acceptable standalone segmentation result, it can be further integrated into the framework of layered spectral segmentation to produce a more robust segmentation. The superior performance of CCP-based segmentation algorithm is demonstrated by experiments on the Berkeley Segmentation Dataset.", "title": "" }, { "docid": "31201cf1a9fcd93b84c2c402df9003b7", "text": "Abstract—This paper presents a planar microstrip-fed tab monopole antenna for ultra wideband wireless communications applications. The impedance bandwidth of the antenna is improved by adding slit in one side of the monopole, introducing a tapered transition between the monopole and the feed line, and adding two-step staircase notch in the ground plane. Numerical analysis for the antenna dimensional parameters using Ansoft HFSS is performed and presented. The proposed antenna has a small size of 16 × 19 mm, and provides an ultra wide bandwidth from 2.8 to 28 GHz with low VSWR level and good radiation characteristics to satisfy the requirements of the current and future wireless communications systems.", "title": "" }, { "docid": "3007b72b893b352ae89b519ad54276e9", "text": "Natural products such as plant extracts and complex microbial secondary metabolites have recently attracted the attention of scientific world for their potential use as drugs for treating chronic diseases such as Type II diabetes. Non-Insulin-Dependent Diabetes Mellitus (NIDDM) or Type II diabetes has complicated basis and has various treatment options, each targeting different mechanism of action. One such option relies on digestive enzyme inhibition. Almost all of the currently used clinically digestive enzyme inhibitors are bacterial secondary metabolites. However in most cases understanding of their complete biosynthetic pathways remains a challenge. The currently used digestive enzyme inhibitors have significant side effects that have restricted their usage. Hence, many active plant metabolites are being investigated as more effective treatment with fewer side effects. Flavonoids, terpenoids, glycosides are few to name in that class. Many of these are proven inhibitors of digestive enzymes but their large scale production remains a technical conundrum. Their successful heterologous production in simple host bacteria in scalable quantities gives a new dimension to the continuously active research for better treatment for type II diabetes. 
Looking at existing and new methods of mass level production of digestive inhibitors and latest efforts to effectively discover new potential drugs is the subject of this book chapter.", "title": "" }, { "docid": "0c7221ffca357ba80401551333e1080d", "text": "The effects of temperature and current on the resistance of small geometry silicided contact structures have been characterized and modeled for the first time. Both, temperature and high current induced self heating have been shown to cause contact resistance lowering which can be significant in the performance of advanced ICs. It is demonstrated that contact-resistance sensitivity to temperature and current is controlled by the silicide thickness which influences the interface doping concentration, N. Behavior of W-plug and force-fill (FF) Al plug contacts have been investigated in detail. A simple model has been formulated which directly correlates contact resistance to temperature and N. Furthermore, thermal impedance of these contact structures have been extracted and a critical failure temperature demonstrated that can be used to design robust contact structures.", "title": "" }, { "docid": "0822720d8bb0222bd7f0f758fa93ff9d", "text": "Hydrogen can be recovered by fermentation of organic material rich in carbohydrates, but much of the organic matter remains in the form of acetate and butyrate. An alternative to methane production from this organic matter is the direct generation of electricity in a microbial fuel cell (MFC). Electricity generation using a single-chambered MFC was examined using acetate or butyrate. Power generated with acetate (800 mg/L) (506 mW/m2 or 12.7 mW/ L) was up to 66% higher than that fed with butyrate (1000 mg/L) (305 mW/m2 or 7.6 mW/L), demonstrating that acetate is a preferred aqueous substrate for electricity generation in MFCs. Power output as a function of substrate concentration was well described by saturation kinetics, although maximum power densities varied with the circuit load. Maximum power densities and half-saturation constants were Pmax ) 661 mW/m2 and Ks ) 141 mg/L for acetate (218 Ω) and Pmax ) 349 mW/m2 and Ks ) 93 mg/L for butyrate (1000 Ω). Similar open circuit potentials were obtained in using acetate (798 mV) or butyrate (795 mV). Current densities measured for stable power output were higher for acetate (2.2 A/m2) than those measured in MFCs using butyrate (0.77 A/m2). Cyclic voltammograms suggested that the main mechanism of power production in these batch tests was by direct transfer of electrons to the electrode by bacteria growing on the electrode and not by bacteria-produced mediators. Coulombic efficiencies and overall energy recovery were 10-31 and 3-7% for acetate and 8-15 and 2-5% for butyrate, indicating substantial electron and energy losses to processes other than electricity generation. These results demonstrate that electricity generation is possible from soluble fermentation end products such as acetate and butyrate, but energy recoveries should be increased to improve the overall process performance.", "title": "" }, { "docid": "4d7cbe7f5e854028277f0120085b8977", "text": "In this paper we formulate structure from motion as a learning problem. We train a convolutional network end-to-end to compute depth and camera motion from successive, unconstrained image pairs. The architecture is composed of multiple stacked encoder-decoder networks, the core part being an iterative network that is able to improve its own predictions. 
The network estimates not only depth and motion, but additionally surface normals, optical flow between the images and confidence of the matching. A crucial component of the approach is a training loss based on spatial relative differences. Compared to traditional two-frame structure from motion methods, results are more accurate and more robust. In contrast to the popular depth-from-single-image networks, DeMoN learns the concept of matching and, thus, better generalizes to structures not seen during training.", "title": "" } ]
scidocsrr
46434dc890c177e818b33ec00a6d5f1d
Structural Representations for Learning Relations between Pairs of Texts
[ { "docid": "2c20209b57b93135cea658fdf97c3712", "text": "We present the UKP system which performed best in the Semantic Textual Similarity (STS) task at SemEval-2012 in two out of three metrics. It uses a simple log-linear regression model, trained on the training data, to combine multiple text similarity measures of varying complexity. These range from simple character and word n-grams and common subsequences to complex features such as Explicit Semantic Analysis vector comparisons and aggregation of word similarity based on lexical-semantic resources. Further, we employ a lexical substitution system and statistical machine translation to add additional lexemes, which alleviates lexical gaps. Our final models, one per dataset, consist of a log-linear combination of about 20 features, out of the possible 300+ features implemented.", "title": "" } ]
[ { "docid": "dc67945b32b2810a474acded3c144f68", "text": "This paper presents an overview of the field of Intelligent Products. As Intelligent Products have many facets, this paper is mainly focused on the concept behind Intelligent Products, the technical foundations, and the achievable practical goals of Intelligent Products. A novel classification of Intelligent Products is introduced, which distinguishes between three orthogonal dimensions. Furthermore, the technical foundations in the areas of automatic identification and embedded processing, distributed information storage and processing, and agent-based systems are discussed, as well as the achievable practical goals in the contexts of manufacturing, supply chains, asset management, and product life cycle management.", "title": "" }, { "docid": "4a8bf7a4e1596f83f97c08270386fed1", "text": "Acute unclassified colitis could be the first attack of inflammatory bowel disease, particularly chronic ulcerative colitis or acute non specific colitis regarded as being of infectious origin without recurrence. The aim of this work was to determine the outcome of 104 incidental cases of acute unclassified colitis diagnosed during the year 1988 at a census point made 2.5 to 3 years later and to search for demographic and clinical discriminating data for final diagnosis. Thirteen patients (12.5%) were lost to follow up. Another final diagnosis was made in three other patients: two had salmonellosis and one diverticulosis. Of the remaining 88 patients, 46 (52.3%) relapsed and were subsequently classified as inflammatory bowel disease: 54% ulcerative colitis, 33% Crohn's disease and 13% chronic unclassified colitis. Forty-two (47.7%) did not relapse and were considered to have acute non specific colitis. The mean age at onset was significantly lower in patients with inflammatory bowel disease (32.3 years) than in patients with acute non specific colitis (42.6 years) (P < 0.001). No clinical data (diarrhea, abdominal pain, bloody stool, mucus discharge fever, weight loss) was predictive of the final diagnosis. In this series, 52.3% of patients initially classified as having an acute unclassified colitis had a final diagnosis of inflammatory bowel disease after a 2.5-3 years follow-up. These data warrant a thorough follow up of acute unclassified colitis, especially when it occurs in patients < 40 years.", "title": "" }, { "docid": "aa2b1a8d0cf511d5862f56b47d19bc6a", "text": "DBMSs have long suffered from SQL's lack of power and extensibility. We have implemented ATLaS [1], a powerful database language and system that enables users to develop complete data-intensive applications in SQL—by writing new aggregates and table functions in SQL, rather than in procedural languages as in current Object-Relational systems. As a result, ATLaS' SQL is Turing-complete [7], and is very suitable for advanced data-intensive applications, such as data mining and stream queries. The ATLaS system is now available for download along with a suite of applications [1] including various data mining functions, that have been coded in ATLaS' SQL, and execute with a modest (20–40%) performance overhead with respect to the same applications written in C/C++. Our proposed demo will illustrate the key features and applications of ATLaS. 
In particular, we will demonstrate:", "title": "" }, { "docid": "5a5c71b56cf4aa6edff8ecc57298a337", "text": "The learning process of a multilayer perceptron requires the optimization of an error function E(y,t) comparing the predicted output, y, and the observed target, t. We review some usual error functions, analyze their mathematical properties for data classification purposes, and introduce a new one, E(Exp), inspired by the Z-EDM algorithm that we have recently proposed. An important property of E(Exp) is its ability to emulate the behavior of other error functions by the sole adjustment of a real-valued parameter. In other words, E(Exp) is a sort of generalized error function embodying complementary features of other functions. The experimental results show that the flexibility of the new, generalized, error function allows one to obtain the best results achievable with the other functions with a performance improvement in some cases.", "title": "" }, { "docid": "8e34d3c0f25abc171599b76e3c4f07e8", "text": "During the past 100 years clinical studies of amnesia have linked memory impairment to damage of the hippocampus. Yet the damage in these cases has not usually been confined to the hippocampus, and the status of memory functions has often been based on incomplete neuropsychological information. Thus, the human cases have until now left some uncertainty as to whether lesions limited to the hippocampus are sufficient to cause amnesia. Here we report a case of amnesia in a patient (R.B.) who developed memory impairment following an ischemic episode. During the 5 years until his death, R.B. exhibited marked anterograde amnesia, little if any retrograde amnesia, and showed no signs of cognitive impairment other than memory. Thorough histological examination revealed a circumscribed bilateral lesion involving the entire CA1 field of the hippocampus. Minor pathology was found elsewhere in the brain (e.g., left globus pallidus, right postcentral gyrus, left internal capsule), but the only damage that could be reasonably associated with the memory defect was the lesion in the hippocampus. To our knowledge, this is the first reported case of amnesia following a lesion limited to the hippocampus in which extensive neuropsychological and neuropathological analyses have been carried out.", "title": "" }, { "docid": "ef9947c8f478d6274fcbcf8c9e300806", "text": "The introduction in 1998 of multi-detector row computed tomography (CT) by the major CT vendors was a milestone with regard to increased scan speed, improved z-axis spatial resolution, and better utilization of the available x-ray power. In this review, the general technical principles of multi-detector row CT are reviewed as they apply to the established four- and eight-section systems, the most recent 16-section scanners, and future generations of multi-detector row CT systems. Clinical examples are used to demonstrate both the potential and the limitations of the different scanner types. When necessary, standard single-section CT is referred to as a common basis and starting point for further developments. Another focus is the increasingly important topic of patient radiation exposure, successful dose management, and strategies for dose reduction. 
Finally, the evolutionary steps from traditional single-section spiral image-reconstruction algorithms to the most recent approaches toward multisection spiral reconstruction are traced.", "title": "" }, { "docid": "f2c8af1f4bcf7115fc671ae9922adbb3", "text": "Extracting insights from temporal event sequences is an important challenge. In particular, mining frequent patterns from event sequences is a desired capability for many domains. However, most techniques for mining frequent patterns are ineffective for real-world data that may be low-resolution, concurrent, or feature many types of events, or the algorithms may produce results too complex to interpret. To address these challenges, we propose Frequence, an intelligent user interface that integrates data mining and visualization in an interactive hierarchical information exploration system for finding frequent patterns from longitudinal event sequences. Frequence features a novel frequent sequence mining algorithm to handle multiple levels-of-detail, temporal context, concurrency, and outcome analysis. Frequence also features a visual interface designed to support insights, and support exploration of patterns of the level-of-detail relevant to users. Frequence's effectiveness is demonstrated with two use cases: medical research mining event sequences from clinical records to understand the progression of a disease, and social network research using frequent sequences from Foursquare to understand the mobility of people in an urban environment.", "title": "" }, { "docid": "4d2986dffedadfd425505f9e25c5f6cb", "text": "BACKGROUND\nThe use of heart rate variability (HRV) in the management of sport training is a practice which tends to spread, especially in order to prevent the occurrence of states of fatigue.\n\n\nOBJECTIVE\nTo estimate the HRV parameters obtained using a heart rate recording, according to different loads of sporting activities, and to make the possible link with the appearance of fatigue.\n\n\nMETHODS\nEight young football players, aged 14.6 years+/-2 months, playing at league level in Rhône-Alpes, training for 10 to 20 h per week, were followed over a period of 5 months, allowing to obtain 54 recordings of HRV in three different conditions: (i) after rest (ii) after a day with training and (iii) after a day with a competitive match.\n\n\nRESULTS\nUnder the effect of a competitive match, the HRV temporal indicators (heart rate, RR interval, and pNN50) were significantly altered compared to the rest day. The analysis of the sympathovagal balance rose significantly as a result of the competitive constraint (0.72+/-0.17 vs. 0.90+/-0.20; p<0.05).\n\n\nCONCLUSION\nThe main results obtained show that the HRV is an objective and non-invasive monitoring of management of the training of young sportsmen. HRV analysis allowed to highlight any neurovegetative adjustments according to the physical loads. Thus, under the effect of an increase of physical and psychological constraints that a football match represents, the LF/HF ratio rises significantly; reflecting increased sympathetic stimulation, which beyond certain limits could be relevant to prevent the emergence of a state of fatigue.", "title": "" }, { "docid": "f472388e050e80837d2d5129ba8a358b", "text": "Voice control has emerged as a popular method for interacting with smart-devices such as smartphones, smartwatches etc. Popular voice control applications like Siri and Google Now are already used by a large number of smartphone and tablet users. 
A major challenge in designing a voice control application is that it requires continuous monitoring of user's voice input through the microphone. Such applications utilize hotwords such as \"Okay Google\" or \"Hi Galaxy\" allowing them to distinguish user's voice command and her other conversations. A voice control application has to continuously listen for hotwords which significantly increases the energy consumption of the smart-devices.\n To address this energy efficiency problem of voice control, we present AccelWord in this paper. AccelWord is based on the empirical evidence that accelerometer sensors found in today's mobile devices are sensitive to user's voice. We also demonstrate that the effect of user's voice on accelerometer data is rich enough so that it can be used to detect the hotwords spoken by the user. To achieve the goal of low energy cost but high detection accuracy, we combat multiple challenges, e.g. how to extract unique signatures of user's speaking hotwords only from accelerometer data and how to reduce the interference caused by user's mobility.\n We finally implement AccelWord as a standalone application running on Android devices. Comprehensive tests show AccelWord has hotword detection accuracy of 85% in static scenarios and 80% in mobile scenarios. Compared to the microphone based hotword detection applications such as Google Now and Samsung S Voice, AccelWord is 2 times more energy efficient while achieving the accuracy of 98% and 92% in static and mobile scenarios respectively.", "title": "" }, { "docid": "f2fdd2f5a945d48c323ae6eb3311d1d0", "text": "Distributed computing systems such as clouds continue to evolve to support various types of scientific applications, especially scientific workflows, with dependable, consistent, pervasive, and inexpensive access to geographically-distributed computational capabilities. Scheduling multiple workflows on distributed computing systems like Infrastructure-as-a-Service (IaaS) clouds is well recognized as a fundamental NP-complete problem that is critical to meeting various types of Quality-of-Service (QoS) requirements. In this paper, we propose a multiobjective optimization workflow scheduling approach based on dynamic game-theoretic model aiming at reducing workflow make-spans, reducing total cost, and maximizing system fairness in terms of workload distribution among heterogeneous cloud virtual machines (VMs). We conduct extensive case studies as well based on various well-known scientific workflow templates and real-world third-party commercial IaaS clouds. Experimental results clearly suggest that our proposed approach outperforms traditional ones by achieving lower workflow make-spans, lower cost, and better system fairness.", "title": "" }, { "docid": "651ddcbc6d514da005d0d4319a325e96", "text": "Convolutional Neural Networks (CNNs) have recently demonstrated a superior performance in computer vision applications, including image retrieval. This paper introduces a bilinear CNN-based model for the first time in the context of Content-Based Image Retrieval (CBIR). The proposed architecture consists of two feature extractors using a pre-trained deep CNN model fine-tuned for image retrieval task to generate a Compact Root Bilinear CNN (CRB-CNN) architecture. Image features are directly extracted from the activations of convolutional layers then pooled at image locations.
Additionally, the output size of bilinear features is largely reduced to a compact but highly discriminative image representation using kernel-based low-dimensional projection and pooling, which is a fundamental improvement in the retrieval performance in terms of search speed and memory size. An end-to-end training is applied by back-propagation to learn the parameters of the final CRB-CNN. Experimental results reported on the standard Holidays image dataset show the efficiency of the architecture at extracting and learning even complex features for CBIR tasks. Specifically, using a 64-dimensional vector, it achieves 95.13% mAP accuracy and outperforms the best results of state-of-the-art approaches.", "title": "" }, { "docid": "c7a15659f2fe5f67da39b77a3eb19549", "text": "Privacy breaches and their regulatory implications have attracted corporate attention in recent times. An often overlooked cause of privacy breaches is human error. In this study, we first apply a model based on the widely accepted GEMS error typology to analyze publicly reported privacy breach incidents within the U.S. Then, based on an examination of the causes of the reported privacy breach incidents, we propose a defense-in-depth solution strategy founded on error avoidance, error interception, and error correction. Finally, we illustrate the application of the proposed strategy to managing human error in the case of the two leading causes of privacy breach incidents. This study finds that mistakes in the information processing stage constitute the most cases of human error-related privacy breach incidents, clearly highlighting the need for effective policies and their enforcement in organizations. © 2008 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "85b9cd3e6f0f55ad4aea17a52e25bcf8", "text": "Translating or rotating an input image should not affect the results of many computer vision tasks. Convolutional neural networks (CNNs) are already translation equivariant: input image translations produce proportionate feature map translations. This is not the case for rotations. Global rotation equivariance is typically sought through data augmentation, but patch-wise equivariance is more difficult. We present Harmonic Networks or H-Nets, a CNN exhibiting equivariance to patch-wise translation and 360-rotation. We achieve this by replacing regular CNN filters with circular harmonics, returning a maximal response and orientation for every receptive field patch. H-Nets use a rich, parameter-efficient and fixed computational complexity representation, and we show that deep feature maps within the network encode complicated rotational invariants. We demonstrate that our layers are general enough to be used in conjunction with the latest architectures and techniques, such as deep supervision and batch normalization. We also achieve state-of-the-art classification on rotated-MNIST, and competitive results on other benchmark challenges.", "title": "" }, { "docid": "54bae3ac2087dbc7dcba553ce9f2ef2e", "text": "The landscape of computing capabilities within the home has seen a recent shift from persistent desktops to mobile platforms, which has led to the use of the cloud as the primary computing platform implemented by developers today. Cloud computing platforms, such as Amazon EC2 and Google App Engine, are popular for many reasons including their reliable, always on, and robust nature.
The capabilities that centralized computing platforms provide are inherent to their implementation, and unmatched by previous platforms (e.g., Desktop applications). Thus, third-party developers have come to rely on cloud computing platforms to provide high quality services to their end-users.", "title": "" }, { "docid": "855a8cfdd9d01cd65fe32d18b9be4fdf", "text": "Interest in business intelligence and analytics education has begun to attract IS scholars’ attention. In order to discover new research questions, there is a need for conducting a literature review of extant studies on BI&A education. This study identified 44 research papers through using Google Scholar related to BI&A education. This research contributes to the field of BI&A education by (a) categorizing the existing studies on BI&A education into the key five research foci, and (b) identifying the research gaps and providing the guide for future BI&A and IS research.", "title": "" }, { "docid": "5301ac53f4086ec6d170d953a46834a2", "text": "Permanent magnet (PM) motors are favored for electric traction applications due to their high efficiency. The paper proposes an enhanced wheel internal permanent magnet (IPM) motor offering the advantages of PM machines along with excellent torque performance and high power density. The advantage of the proposed scheme is the high magnetic flux developed in the air gap which allows much higher values of magnetic flux density, compared to a surface PM machine of the same size. This IPM motor aims to efficiently utilize the energy stored in the PM, where high load and intense transient phenomena occur, in electrical traction applications, while keeping a simple and robust structure.", "title": "" }, { "docid": "86165b08fbd8d7814203cf9cf928d7f1", "text": "The development of an anomaly-based intrusion detection system (IDS) is a primary research direction in the field of intrusion detection. An IDS learns normal and anomalous behavior by analyzing network traffic and can detect unknown and new attacks. However, the performance of an IDS is highly dependent on feature design, and designing a feature set that can accurately characterize network traffic is still an ongoing research issue. Anomaly-based IDSs also have the problem of a high false alarm rate (FAR), which seriously restricts their practical applications. In this paper, we propose a novel IDS called the hierarchical spatial-temporal features-based intrusion detection system (HAST-IDS), which first learns the low-level spatial features of network traffic using deep convolutional neural networks (CNNs) and then learns high-level temporal features using long short-term memory networks. The entire process of feature learning is completed by the deep neural networks automatically; no feature engineering techniques are required. The automatically learned traffic features effectively reduce the FAR. The standard DARPA1998 and ISCX2012 data sets are used to evaluate the performance of the proposed system. The experimental results show that the HAST-IDS outperforms other published approaches in terms of accuracy, detection rate, and FAR, which successfully demonstrates its effectiveness in both feature learning and FAR reduction.", "title": "" }, { "docid": "86617458af24278fa2b69b544dc0f09e", "text": "Recent research on learning in work situations has focussed on concepts such as ‘productive learning’ and ‘pedagogy of vocational learning’. 
In investigating what makes learning productive and what pedagogies enhance this, there is a tendency to take the notion of learning as unproblematic. This paper argues that much writing on workplace learning is strongly shaped by peoples’ understandings of learning in formal educational situations. Such assumptions distort attempts to understand learning at work. The main focus of this paper is to problematise the concept of ‘learning’ and to identify the implications of this for attempts to understand learning at work and the conditions that enhance it. An alternative conception of learning that promises to do more justice to the richness of learning at work is presented and discussed. For several years now, the adult and vocational learning research group at University of Technology, Sydney, (now known as OVAL Research1), has been pursuing a systematic research agenda centred on issues about learning at work (e.g. Boud & Garrick 1999, Symes & McIntyre 2000, Beckett & Hager 2002). The OVAL research group’s two most recent seminar series have been focussed on ‘productive learning’ and ‘pedagogy of vocational learning’. Both of these topics reflect a concern with conditions that enhance rich learning in work situations. In attempting, however, to characterise what makes learning productive and what pedagogies enhance this, there may be a tendency to take the notion of learning as unproblematic. I have elsewhere argued that common understandings of learning uncritically incorporate assumptions that derive from previous formal learning experiences (Hager forthcoming). Likewise Elkjaer (2003) has recently pointed out how much writing on workplace learning is strongly shaped by the authors’ understandings of learning in formal educational situations. The main focus of this paper is to problematise the concept of ‘learning’ and to identify the implications of this for attempts to understand learning at work and the conditions that enhance it. A key claim is that government policies that impact significantly on learning at work commonly treat learning as a product, i.e. as the acquisition of discrete items of knowledge or skill. The argument is that these policies thereby obstruct attempts to develop satisfactory understandings of learning at work. 1 The Australian Centre for Organisational, Vocational and Adult Learning Research. (For details see www.oval.uts.edu.au) Problematising the Concept of Learning Although learning is still widely treated as an unproblematic concept in educational writings, there is growing evidence that its meaning increasingly is being contested. For instance Brown & Palincsar (1989, p. 394) observed: “Learning is a term with more meanings that there are theorists”. Schoenfeld (1999, p. 6) noted “....that the very definition of learning is contested, and that assumptions that people make regarding its nature and where it takes place also vary widely.” According to Winch “.....the possibility of giving a scientific or even a systematic account of human learning is ..... mistaken” (1998, p. 2). His argument is that there are many and diverse cases of learning, each subject to “constraints in a variety of contexts and cultures” which precludes them from being treated in a general way (1998, p. 85). He concludes that “... grand theories of learning .... are underpinned ... invariably ... by faulty epistemological premises” (Winch, 1998, p. 183). 
Not only is the concept of learning disputed amongst theorists, it seems that even those with the greatest claims to practical knowledge of learning may be deficient in their understanding. Those bastions of learning, higher education institutions can trace their origins back into the mists of time. If anyone knows from experience what learning is it should be them. Yet the recent cyber learning debacle suggests otherwise. Many of the world’s most illustrious universities have invested many millions of dollars setting up suites of online courses in the expectation of making large profits from offcampus students. According to Brabazon (2002), these initiatives have manifestly failed since prospective students were not prepared to pay the fees. Many of these online courses are now available free as a backup resource for on-campus students. Brabazon’s analysis is that these university ‘experts’ on learning have confused technology with teaching and tools with learning. The staggering sums of money mis-invested in online education certainly shows that universities may not be the experts in learning that they think they are. We can take Brabazon’s analysis a step further. The reason why tools were confused with learning, I argue, is that learning is not a well understood concept at the start of the 21st century. Perhaps it is in a similar position to the concept of motion at the end of the middle ages. Of course, motion is one of the central concepts in physics, just as learning is a central concept in education, and the social sciences generally. For a long time, understanding of motion was limited by adherence to the Aristotelian attempt to provide a single account of all motion. Aristotle proposed a second-order distinction between natural and violent motions. It was the ‘nature’ of all terrestrial bodies to have a natural motion towards the centre of the universe (the centre of the earth); but bodies were also subject to violent motions in any direction imparted by disruptive, external, ‘non-natural’ causes. So the idea was to privilege one kind of motion as basic and to account for others in terms of non-natural disruptions to this natural motion. The Aristotelian account persisted for so long because it was in accord with ‘common sense’ ideas on motion. Everyone was familiar with motion and thought that they understood it. Likewise, everyone has experienced formal schooling and this shapes how they understand learning. Thus, the type of learning that is familiar to everyone gains privileged status. The worth of other kinds of learning is judged by how well they approximate the favoured kind (Beckett & Hager 2002, section 6.1). The dominance of this concept of learning is also evident in educational thought, where there has been a major focus on learning in formal education settings. This dominant view of learning also fits well with ‘folk’ conceptions of the mind (Bereiter 2002). Real progress in understanding motion came when physicists departed from ‘common sense’ ideas and recognised that there are many different types of motion – falling, projectile, pendulum, wave, etc. each requiring their own account. Likewise, it seems there are many types of learning and things that can be learnt – propositions, skills, behaviours, attitudes, etc. Efforts to understand these may well require a range of theories each with somewhat different assumptions. 
The Monolithic Influence of Viewing Learning as a Product There is currently a dominant view of learning that is akin to the Aristotelian view of motion in its pervasive influence. It provides an account of supposedly the best kind of learning, and all cases of learning are judged by how well they fit this view. This dominant view of learning – the ‘common sense’ account – views the mind as a ‘container’ and ‘knowledge as a type of substance’ (Lakoff & Johnson 1980). Under the influence of the mind-as-container metaphor, knowledge is treated as consisting of objects contained in individual minds, something like the contents of mental filing cabinets. (Bereiter 2002, p. 179) Thus there is a focus on ‘adding more substance’ to the mind. This is the ‘folk theory’ of learning (e.g. Bereiter 2002). It emphasises the products of learning. At this stage it might be objected that the educationally sophisticated have long ago moved beyond viewing learning as a product. Certainly, as shown later in this paper, the educational arguments for an alternative view have been persuasive for quite some time now. Nevertheless, much educational policy and practice, including policies and practices that directly impact on the emerging interest in learning at work, are clearly rooted in the learning as product view. For instance, typical policy documents relating to CompetencyBased Training view work performance as a series of decontextualised atomic elements, which novice workers are thought of as needing to pick up one by one. Once a discrete element is acquired, transfer or application to appropriate future circumstances by the learner is assumed to be unproblematic. This is a pure learning as product approach. Similarly, policy documents on generic skills (core or basic skills) typically reflect similar assumptions. Putative generic skills, such as communication and problem solving, are presented as discrete, decontextualised elements that, once acquired, can simply be transferred to diverse situations. Certainly, in literature emanating from employer groups, this assumption is endemic. These, then, are two policy areas that are closely linked to learning at work that are dominated by learning as product assumptions. Of course, Lyotard (1984) and other postmodern writers (e.g. Usher & Edwards 1994) have argued that the recent neo-liberal marketisation of education results in a commodification of knowledge, in which knowledge is equated with information. Such information can, for instance, be readily stored and transmitted via microelectronic technology. Students become consumers of educational commodities. All of this is grist to the learning as product mill. However, it needs to be emphasised that learning as product was the dominant mindset long before the rise of neo-liberal marketisation of education. This is reflected in standard international educational nomenclature: acquisition of content, transfer of learning, delivery of courses, course providers, course offerings, course load, ", "title": "" }, { "docid": "ce55485a60213c7656eb804b89be36cc", "text": "In a previous article, we presented a systematic computational study of the extraction of semantic representations from the word-word co-occurrence statistics of large text corpora. The conclusion was that semantic vectors of pointwise mutual information values from very small co-occurrence windows, together with a cosine distance measure, consistently resulted in the best representations across a range of psychologically relevant semantic tasks. 
This article extends that study by investigating the use of three further factors--namely, the application of stop-lists, word stemming, and dimensionality reduction using singular value decomposition (SVD)--that have been used to provide improved performance elsewhere. It also introduces an additional semantic task and explores the advantages of using a much larger corpus. This leads to the discovery and analysis of improved SVD-based methods for generating semantic representations (that provide new state-of-the-art performance on a standard TOEFL task) and the identification and discussion of problems and misleading results that can arise without a full systematic study.", "title": "" }, { "docid": "df78d3cc688a0223ebdd680279dd9022", "text": "This paper studies cache-aided interference networks with arbitrary number of transmitters and receivers, whereby each transmitter has a cache memory of finite size. Each transmitter fills its cache memory from a content library of files in the placement phase. In the subsequent delivery phase, each receiver requests one of the library files, and the transmitters are responsible for delivering the requested files from their caches to the receivers. The objective is to design schemes for the placement and delivery phases to maximize the sum degrees of freedom (sum-DoF) which expresses the capacity of the interference network at the high signal-to-noise ratio regime. Our work mainly focuses on a commonly used uncoded placement strategy. We provide an information-theoretic bound on the sumDoF for this placement strategy. We demonstrate by an example that the derived bound is tighter than the bounds existing in the literature for small cache sizes. We propose a novel delivery scheme with a higher achievable sum-DoF than those previously given in the literature. The results reveal that the reciprocal of sum-DoF decreases linearly as the transmitter cache size increases. Therefore, increasing cache sizes at transmitters translates to increasing the sum-DoF and, hence, the capacity of the interference networks. Index Terms Coded caching, Interference networks, Degrees of freedom, Interference management.", "title": "" } ]
scidocsrr
d44db327a23657cd5c3fe2c477a4994f
All You Need is Beyond a Good Init: Exploring Better Solution for Training Extremely Deep Convolutional Neural Networks with Orthonormality and Modulation
[ { "docid": "034bf47c5982756a1cf1c1ccd777d604", "text": "We present weight normalization: a reparameterization of the weight vectors in a neural network that decouples the length of those weight vectors from their direction. By reparameterizing the weights in this way we improve the conditioning of the optimization problem and we speed up convergence of stochastic gradient descent. Our reparameterization is inspired by batch normalization but does not introduce any dependencies between the examples in a minibatch. This means that our method can also be applied successfully to recurrent models such as LSTMs and to noise-sensitive applications such as deep reinforcement learning or generative models, for which batch normalization is less well suited. Although our method is much simpler, it still provides much of the speed-up of full batch normalization. In addition, the computational overhead of our method is lower, permitting more optimization steps to be taken in the same amount of time. We demonstrate the usefulness of our method on applications in supervised image recognition, generative modelling, and deep reinforcement learning.", "title": "" }, { "docid": "976dc6591e21e96ddb9ac6133a47e2ec", "text": "Object detection is a fundamental problem in image understanding. One popular solution is the R-CNN framework [15] and its fast versions [14, 27]. They decompose the object detection problem into two cascaded easier tasks: 1) generating object proposals from images, 2) classifying proposals into various object categories. Despite that we are handling with two relatively easier tasks, they are not solved perfectly and there's still room for improvement. In this paper, we push the \"divide and conquer\" solution even further by dividing each task into two sub-tasks. We call the proposed method \"CRAFT\" (Cascade Regionproposal-network And FasT-rcnn), which tackles each task with a carefully designed network cascade. We show that the cascade structure helps in both tasks: in proposal generation, it provides more compact and better localized object proposals, in object classification, it reduces false positives (mainly between ambiguous categories) by capturing both inter-and intra-category variances. CRAFT achieves consistent and considerable improvement over the state-of the-art on object detection benchmarks like PASCAL VOC 07/12 and ILSVRC.", "title": "" }, { "docid": "9cb033c92c06f804118381f61dd884f9", "text": "Training state-of-the-art, deep neural networks is computationally expensive. One way to reduce the training time is to normalize the activities of the neurons. A recently introduced technique called batch normalization uses the distribution of the summed input to a neuron over a mini-batch of training cases to compute a mean and variance which are then used to normalize the summed input to that neuron on each training case. This significantly reduces the training time in feedforward neural networks. However, the effect of batch normalization is dependent on the mini-batch size and it is not obvious how to apply it to recurrent neural networks. In this paper, we transpose batch normalization into layer normalization by computing the mean and variance used for normalization from all of the summed inputs to the neurons in a layer on a single training case. Like batch normalization, we also give each neuron its own adaptive bias and gain which are applied after the normalization but before the non-linearity. 
Unlike batch normalization, layer normalization performs exactly the same computation at training and test times. It is also straightforward to apply to recurrent neural networks by computing the normalization statistics separately at each time step. Layer normalization is very effective at stabilizing the hidden state dynamics in recurrent networks. Empirically, we show that layer normalization can substantially reduce the training time compared with previously published techniques.", "title": "" } ]
[ { "docid": "0b146cb20ed80b17f607251fba7e25d7", "text": "Presence is widely accepted as the key concept to be considered in any research involving human interaction with Virtual Reality (VR). Since its original description, the concept of presence has developed over the past decade to be considered by many researchers as the essence of any experience in a virtual environment. The VR generating systems comprise two main parts: a technological component and a psychological experience. The different relevance given to them produced two different but coexisting visions of presence: the rationalist and the psychological/ecological points of view. The rationalist point of view considers a VR system as a collection of specific machines with the necessity of the inclusion of the concept of presence. The researchers agreeing with this approach describe the sense of presence as a function of the experience of a given medium (Media Presence). The main result of this approach is the definition of presence as the perceptual illusion of non-mediation produced by means of the disappearance of the medium from the conscious attention of the subject. At the other extreme, there is the psychological or ecological perspective (Inner Presence). Specifically, this perspective considers presence as a neuropsychological phenomenon, evolved from the interplay of our biological and cultural inheritance, whose goal is the control of the human activity. Given its key role and the rate at which new approaches to understanding and examining presence are appearing, this chapter draws together current research on presence to provide an up to date overview of the most widely accepted approaches to its understanding and measurement.", "title": "" }, { "docid": "47fd07d8f2f540ee064e1c674c550637", "text": "Virtual reality and 360-degree video streaming are growing rapidly, yet, streaming high-quality 360-degree video is still challenging due to high bandwidth requirements. Existing solutions reduce bandwidth consumption by streaming high-quality video only for the user's viewport. However, adding the spatial domain (viewport) to the video adaptation space prevents the existing solutions from buffering future video chunks for a duration longer than the interval that user's viewport is predictable. This makes playback more prone to video freezes due to rebuffering, which severely degrades the user's Quality of Experience especially under challenging network conditions. We propose a new method that alleviates the restrictions on buffer duration by utilizing scalable video coding. Our method significantly reduces the occurrence of rebuffering on links with varying bandwidth without compromising playback quality or bandwidth efficiency compared to the existing solutions. We demonstrate the efficiency of our proposed method using experimental results with real world cellular network bandwidth traces.", "title": "" }, { "docid": "da5baa030c79c87461db66728201d386", "text": "The goal of this study is to provide a seizure detection algorithm that is relatively simple to implement on a microcontroller, so it can be used for an implantable closed loop stimulation device. We propose a set of 11 simple time domain and power bands features, computed from one intracranial EEG contact located in the seizure onset zone. The classification of the features is performed using a random forest classifier. 
Depending on the training datasets and the optimization preferences, the performance of the algorithm were: 93.84% mean sensitivity (100% median sensitivity), 3.03 s mean (1.75 s median) detection delays and 0.33/h mean (0.07/h median) false detections per hour.", "title": "" }, { "docid": "6d2903f82ec382b4214d9322e545e71f", "text": "We review the pros and cons of analog and digital computation. We propose that computation that is most efficient in its use of resources is neither analog computation nor digital computation but, rather, a mixture of the two forms. For maximum efficiency, the information and information-processing resources of the hybrid form must be distributed over many wires, with an optimal signal-to-noise ratio per wire. Our results suggest that it is likely that the brain computes in a hybrid fashion and that an underappreciated and important reason for the efficiency of the human brain, which consumes only 12 W, is the hybrid and distributed nature of its architecture.", "title": "" }, { "docid": "0add9f22db24859da50e1a64d14017b9", "text": "Light field imaging offers powerful new capabilities through sophisticated digital processing techniques that are tightly merged with unconventional optical designs. This combination of imaging technology and computation necessitates a fundamentally different view of the optical properties of imaging systems and poses new challenges for the traditional signal and image processing domains. In this article, we aim to provide a comprehensive review of the considerations involved and the difficulties encountered in working with light field data.", "title": "" }, { "docid": "1051cb1eb8d9306e1419dbad0ad53ee9", "text": "The goal of this paper is to design a statistical test for the camera model identification problem. The approach is based on the heteroscedastic noise model, which more accurately describes a natural raw image. This model is characterized by only two parameters, which are considered as unique fingerprint to identify camera models. The camera model identification problem is cast in the framework of hypothesis testing theory. In an ideal context where all model parameters are perfectly known, the likelihood ratio test (LRT) is presented and its performances are theoretically established. For a practical use, two generalized LRTs are designed to deal with unknown model parameters so that they can meet a prescribed false alarm probability while ensuring a high detection performance. Numerical results on simulated images and real natural raw images highlight the relevance of the proposed approach.", "title": "" }, { "docid": "713ade80a6c2e0164a0d6fe6ef07be37", "text": "We review recent work on the role of intrinsic amygdala networks in the regulation of classically conditioned defensive behaviors, commonly known as conditioned fear. These new developments highlight how conditioned fear depends on far more complex networks than initially envisioned. Indeed, multiple parallel inhibitory and excitatory circuits are differentially recruited during the expression versus extinction of conditioned fear. Moreover, shifts between expression and extinction circuits involve coordinated interactions with different regions of the medial prefrontal cortex. However, key areas of uncertainty remain, particularly with respect to the connectivity of the different cell types. 
Filling these gaps in our knowledge is important because much evidence indicates that human anxiety disorders result from an abnormal regulation of the networks supporting fear learning.", "title": "" }, { "docid": "2088be2c5623d7491c5692b6ebd4f698", "text": "Machine learning (ML) is now widespread. Traditional software engineering can be applied to the development of ML applications. However, we have to consider specific problems with ML applications in terms of their quality. In this paper, we present a survey of software quality for ML applications to consider the quality of ML applications as an emerging discussion. From this survey, we raised problems with ML applications and discovered software engineering approaches and software testing research areas to solve these problems. We classified survey targets into Academic Conferences, Magazines, and Communities. We targeted 16 academic conferences on artificial intelligence and software engineering, including 78 papers. We targeted 5 Magazines, including 22 papers. The results indicated key areas, such as deep learning, fault localization, and prediction, to be researched with software engineering and testing.", "title": "" }, { "docid": "dd45f296e623857262bd65e5d3843f33", "text": "In their original versions, nature-inspired search algorithms such as evolutionary algorithms and those based on swarm intelligence, lack a mechanism to deal with the constraints of a numerical optimization problem. Nowadays, however, there exists a considerable amount of research devoted to design techniques for handling constraints within a nature-inspired algorithm. This paper presents an analysis of the most relevant types of constraint-handling techniques that have been adopted with nature-inspired algorithms. From them, the most popular approaches are analyzed in more detail. For each of them, some representative instantiations are further discussed. In the last part of the paper, some of the future trends in the area, which have been only scarcely explored, are briefly discussed and then the conclusions of this paper are presented. © 2011 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "c9d801183e3629e6231f48b180c5ee4e", "text": "This paper presents a robust watermarking algorithm with informed detection for 3D polygonal meshes. The algorithm is based on our previous algorithm [22] that employs mesh-spectral analysis to modify mesh shapes in their transformed domain. This paper presents extensions to our previous algorithm so that (1) much larger meshes can be watermarked within a reasonable time, and that (2) the watermark is robust against connectivity alteration (e.g., mesh simplification), and that (3) the watermark is robust against attacks that combine similarity transformation with such other attacks as cropping, mesh simplification, and smoothing. Experiment showed that our new watermarks are resistant against mesh simplification and remeshing combined with resection, similarity transformation, and other operations.", "title": "" }, { "docid": "de39f498f28cf8cfc01f851ca3582d32", "text": "Program autotuning has been shown to achieve better or more portable performance in a number of domains.
However, autotuners themselves are rarely portable between projects, for a number of reasons: using a domain-informed search space representation is critical to achieving good results; search spaces can be intractably large and require advanced machine learning techniques; and the landscape of search spaces can vary greatly between different problems, sometimes requiring domain specific search techniques to explore efficiently.\n This paper introduces OpenTuner, a new open source framework for building domain-specific multi-objective program autotuners. OpenTuner supports fully-customizable configuration representations, an extensible technique representation to allow for domain-specific techniques, and an easy to use interface for communicating with the program to be autotuned. A key capability inside OpenTuner is the use of ensembles of disparate search techniques simultaneously; techniques that perform well will dynamically be allocated a larger proportion of tests. We demonstrate the efficacy and generality of OpenTuner by building autotuners for 7 distinct projects and 16 total benchmarks, showing speedups over prior techniques of these projects of up to 2.8x with little programmer effort.", "title": "" }, { "docid": "c6f17a0d5f91c3cab9183bbc5fa2dfc3", "text": "In human beings, head is one of the most important parts. Injuries in this part can cause serious damages to overall health. In some cases, they can be fatal. The present paper analyses the deformations of a helmet mounted on a human head, using finite element method. It studies the amount of von Mises pressure and stress caused by a vertical blow from above on the skull. The extant paper aims at developing new methods for improving the design and achieving more energy absorption by applying more appropriate models. In this study, a thermoplastic damper is applied and modelled in order to reduce the amount of energy transferred to the skull and to minimize the damages inflicted on human head.", "title": "" }, { "docid": "bfc85b95287e4abc2308849294384d1e", "text": "50 YEARS AGO: A Congress was held in Singapore during December 2–9 to celebrate “the Centenary of the formulation of the theory of Evolution by Charles Darwin and Alfred Russel Wallace and the Bicentenary of the publication of the tenth edition of the ‘Systema Naturae’ by Linnaeus”. It was particularly fitting that this Congress should have been held in Singapore for ... it directed special attention to the work of Wallace, who was one of the greatest biologists ever to have worked in south-east Asia ... Prof. Haldane then delivered his presidential address ... The president emphasised the stimuli gained by Linnaeus, Darwin and Wallace through working in peripheral areas where lack of knowledge was a challenge. He suggested that the next major biological advance may well come for similar reasons from peripheral places such as Singapore, or Calcutta, where this challenge still remains and where the lack of complex scientific apparatus drives biologists into different and long-neglected fields of research. From Nature 14 March 1959.", "title": "" }, { "docid": "c2aa1c74d0569a068b6e381f314aa1ff", "text": "For the purpose of discovering security flaws in software, many dynamic and static taint analyzing techniques have been proposed. By analyzing information flow at runtime, dynamic taint analysis can precisely find security flaws of software.
However, on one hand, it suffers from substantial runtime overhead and is incapable of discovering the potential threats. On the other hand, static taint analysis analyzes program’s code without actually executing it which incurs no runtime overhead, and can cover all the code, but it is often not accurate enough. In addition, since the source code of most software is hard to acquire and intruders simply do not attach target program’s source code in practice, software flaw tracking becomes rather complicated. In order to cope with these issues, this paper proposes HYBit, a novel hybrid framework which integrates dynamic and static taint analysis to diagnose the flaws or vulnerabilities for binary programs. In the framework, the source binary is first analyzed by the dynamic taint analyzer. Then, with the runtime information provided by its dynamic counterpart, the static taint analyzer can process the unexecuted part of the target program easily. Furthermore, a taint behavior filtration mechanism is proposed to optimize the performance of the framework. We evaluate our framework from three perspectives: efficiency, coverage, and effectiveness. The results are encouraging.", "title": "" }, { "docid": "d994b23ea551f23215232c0771e7d6b3", "text": "It is said that there’s nothing so practical as good theory. It may also be said that there’s nothing so theoretically interesting as good practice1. This is particularly true of efforts to relate constructivism as a theory of learning to the practice of instruction. Our goal in this paper is to provide a clear link between the theoretical principles of constructivism, the practice of instructional design, and the practice of teaching. We will begin with a basic characterization of constructivism identifying what we believe to be the central principles in learning and understanding. We will then identify and elaborate on eight instructional principles for the design of a constructivist learning environment. Finally, we will examine what we consider to be one of the best exemplars of a constructivist learning environment -Problem Based Learning as described by Barrows (1985, 1986, 1992).", "title": "" }, { "docid": "0368fdfe05918134e62e0f7b106130ee", "text": "Scientific charts are an effective tool to visualize numerical data trends. They appear in a wide range of contexts, from experimental results in scientific papers to statistical analyses in business reports. The abundance of scientific charts in the web has made it inevitable for search engines to include them as indexed content. However, the queries based on only the textual data used to tag the images can limit query results. Many studies exist to address the extraction of data from scientific diagrams in order to improve search results. In our approach to achieving this goal, we attempt to enhance the semantic labeling of the charts by using the original data values that these charts were designed to represent. In this paper, we describe a method to extract data values from a specific class of charts, bar charts. The extraction process is fully automated using image processing and text recognition techniques combined with various heuristics derived from the graphical properties of bar charts. The extracted information can be used to enrich the indexing content for bar charts and improve search results. 
We evaluate the effectiveness of our method on bar charts drawn from the web as well as charts embedded in digital documents.", "title": "" }, { "docid": "a60adf12308186ebde27fe216fab6f71", "text": "The advent of quantum computing processors with possibility to scale beyond experimental capacities magnifies the importance of studying their applications. Combinatorial optimization problems can be one of the promising applications of these new devices. These problems are recurrent in industrial applications and they are in general difficult for classical computing hardware. In this work, we provide a survey of the approaches to solving different types of combinatorial optimization problems, in particular quadratic unconstrained binary optimization (QUBO) problems on a gate model quantum computer. We focus mainly on four different approaches including digitizing the adiabatic quantum computing, global quantum optimization algorithms, the quantum algorithms that approximate the ground state of a general QUBO problem, and quantum sampling. We also discuss the quantum algorithms that are custom designed to solve certain types of QUBO problems.", "title": "" }, { "docid": "164fca8833981d037f861aada01d5f7f", "text": "Kernel methods provide a principled way to perform non linear, nonparametric learning. They rely on solid functional analytic foundations and enjoy optimal statistical properties. However, at least in their basic form, they have limited applicability in large scale scenarios because of stringent computational requirements in terms of time and especially memory. In this paper, we take a substantial step in scaling up kernel methods, proposing FALKON, a novel algorithm that allows to efficiently process millions of points. FALKON is derived combining several algorithmic principles, namely stochastic subsampling, iterative solvers and preconditioning. Our theoretical analysis shows that optimal statistical accuracy is achieved requiring essentially O(n) memory and O(n √ n) time. An extensive experimental analysis on large scale datasets shows that, even with a single machine, FALKON outperforms previous state of the art solutions, which exploit parallel/distributed architectures.", "title": "" }, { "docid": "67925645b590cba622dd101ed52cf9e2", "text": "This study is the first to demonstrate that features of psychopathy can be reliably and validly detected by lay raters from \"thin slices\" (i.e., small samples) of behavior. Brief excerpts (5 s, 10 s, and 20 s) from interviews with 96 maximum-security inmates were presented in video or audio form or in both modalities combined. Forty raters used these excerpts to complete assessments of overall psychopathy and its Factor 1 and Factor 2 components, various personality disorders, violence proneness, and attractiveness. Thin-slice ratings of psychopathy correlated moderately and significantly with psychopathy criterion measures, especially those related to interpersonal features of psychopathy, particularly in the 5- and 10-s excerpt conditions and in the video and combined channel conditions. These findings demonstrate that first impressions of psychopathy and related constructs, particularly those pertaining to interpersonal functioning, can be reasonably reliable and valid. They also raise intriguing questions regarding how individuals form first impressions and about the extent to which first impressions may influence the assessment of personality disorders. 
(PsycINFO Database Record (c) 2009 APA, all rights reserved).", "title": "" }, { "docid": "b44f24b54e45974421f799527391a9db", "text": "Dengue fever is a noncontagious infectious disease caused by dengue virus (DENV). DENV belongs to the family Flaviviridae, genus Flavivirus, and is classified into four antigenically distinct serotypes: DENV-1, DENV-2, DENV-3, and DENV-4. The number of nations and people affected has increased steadily and today is considered the most widely spread arbovirus (arthropod-borne viral disease) in the world. The absence of an appropriate animal model for studying the disease has hindered the understanding of dengue pathogenesis. In our study, we have found that immunocompetent C57BL/6 mice infected intraperitoneally with DENV-1 presented some signs of dengue disease such as thrombocytopenia, spleen hemorrhage, liver damage, and increase in production of IFNγ and TNFα cytokines. Moreover, the animals became viremic and the virus was detected in several organs by real-time RT-PCR. Thus, this animal model could be used to study mechanism of dengue virus infection, to test antiviral drugs, as well as to evaluate candidate vaccines.", "title": "" } ]
scidocsrr
b58d8091c14dcd0a377a0a4551ac0461
MARTA GANs: Unsupervised Representation Learning for Remote Sensing Image Classification
[ { "docid": "b6da971f13c1075ce1b4aca303e7393f", "text": "In this paper, we evaluate the generalization power of deep features (ConvNets) in two new scenarios: aerial and remote sensing image classification. We evaluate experimentally ConvNets trained for recognizing everyday objects for the classification of aerial and remote sensing images. ConvNets obtained the best results for aerial images, while for remote sensing, they performed well but were outperformed by low-level color descriptors, such as BIC. We also present a correlation analysis, showing the potential for combining/fusing different ConvNets with other descriptors or even for combining multiple ConvNets. A preliminary set of experiments fusing ConvNets obtains state-of-the-art results for the well-known UCMerced dataset.", "title": "" } ]
[ { "docid": "d0205fc884821d10d6939748012bcfcb", "text": "We establish a data-dependent notion of algorithmic stability for Stochastic Gradient Descent (SGD), and employ it to develop novel generalization bounds. This is in contrast to previous distribution-free algorithmic stability results for SGD which depend on the worst-case constants. By virtue of the data-dependent argument, our bounds provide new insights into learning with SGD on convex and non-convex problems. In the convex case, we show that the bound on the generalization error depends on the risk at the initialization point. In the non-convex case, we prove that the expected curvature of the objective function around the initialization point has crucial influence on the generalization error. In both cases, our results suggest a simple data-driven strategy to stabilize SGD by pre-screening its initialization. As a corollary, our results allow us to show optimistic generalization bounds that exhibit fast convergence rates for SGD subject to a vanishing empirical risk and low noise of stochastic gradient.", "title": "" }, { "docid": "443d86b6121da824590dd38236495445", "text": "OBJECTIVE\nTo identify and evaluate coping strategies advocated by experienced animal shelter workers who directly engaged in euthanizing animals.\n\n\nDESIGN\nCross-sectional study.\n\n\nSAMPLE POPULATION\nAnimal shelters across the United States in which euthanasia was conducted (5 to 100 employees/shelter).\n\n\nPROCEDURES\nWith the assistance of experts associated with the Humane Society of the United States, the authors identified 88 animal shelters throughout the United States in which animal euthanasia was actively conducted and for which contact information regarding the shelter director was available. Staff at 62 animal shelters agreed to participate in the survey. Survey packets were mailed to the 62 shelter directors, who then distributed them to employees. The survey included questions regarding respondent age, level of education, and role and asked those directly involved in the euthanasia of animals to provide advice on strategies for new euthanasia technicians to deal with the related stress. Employees completed the survey and returned it by mail. Content analysis techniques were used to summarize survey responses.\n\n\nRESULTS\nCoping strategies suggested by 242 euthanasia technicians were summarized into 26 distinct coping recommendations in 8 categories: competence or skills strategies, euthanasia behavioral strategies, cognitive or self-talk strategies, emotional regulation strategies, separation strategies, get-help strategies, seek long-term solution strategies, and withdrawal strategies.\n\n\nCONCLUSIONS AND CLINICAL RELEVANCE\nEuthanizing animals is a major stressor for many animal shelter workers. Information regarding the coping strategies identified in this study may be useful for training new euthanasia technicians.", "title": "" }, { "docid": "4c9391f334ca2640e07b63b5a9764045", "text": "The mobile phone landscape changed last year with the introduction of smart phones running Android, a platform marketed by Google. Android phones are the first credible threat to the iPhone market. Not only did Google target the same consumers as iPhone, it also aimed to win the hearts and minds of mobile application developers. 
On the basis of market share and the number of available apps, Android is a success.", "title": "" }, { "docid": "07e82c630ead780ad9e2382a1f713290", "text": "The telecommunication world is evolving towards networks and different services. It is necessary to ensure interoperability between different networks to provide seamless and on-demand services. As the number of services and users in Internet Protocol Multimedia Subsystem (IMS) keeps increasing, network virtualization and cloud computing technologies seem to be a good alternative for Mobile Virtual Network Operators (MVNOs) in order to provide better services to customers and save cost and time. Cloud computing is known as an IT environment that includes all elements of the IT and network stack, enabling the development, delivery, and consumption of Cloud Services. In this paper, we will present the challenges and issues of these emerging technologies. The first part of this paper describes Cloud computing as the networks of the future. It presents an overview of some works in this area. Some concepts like cloud services and Service oriented Architecture, designed to facilitate rapid prototyping and deployment of on-demand services that enhance flexibility, communication performance, robustness, and scalability, are detailed. The second part exposes SOA and its concept, and the third one deals with virtualization. Keywords: cloud computing; services; SOA architecture; virtualization; IMS", "title": "" }, { "docid": "367c3fd4401e30d4982509733d908d38", "text": "Markov logic networks (MLNs) are a statistical relational model that consists of weighted first-order clauses and generalizes first-order logic and Markov networks. The current state-of-the-art algorithm for learning MLN structure follows a top-down paradigm where many potential candidate structures are systematically generated without considering the data and then evaluated using a statistical measure of their fit to the data. Even though this existing algorithm outperforms an impressive array of benchmarks, its greedy search is susceptible to local maxima or plateaus. We present a novel algorithm for learning MLN structure that follows a more bottom-up approach to address this problem. Our algorithm uses a \"propositional\" Markov network learning method to construct \"template\" networks that guide the construction of candidate clauses. Our algorithm significantly improves accuracy and learning time over the existing top-down approach in three real-world domains.", "title": "" }, { "docid": "70e82da805e5bb21d35d552afe68bc61", "text": "The consumption of pomegranate juice (PJ), a rich source of antioxidant polyphenols, has grown tremendously due to its reported health benefits. Pomegranate extracts, which incorporate the major antioxidants found in pomegranates, namely, ellagitannins, have been developed as botanical dietary supplements to provide an alternative convenient form for consuming the bioactive polyphenols found in PJ. Despite the commercial availability of pomegranate extract dietary supplements, there have been no studies evaluating their safety in human subjects. A pomegranate ellagitannin-enriched polyphenol extract (POMx) was prepared for dietary supplement use and evaluated in two pilot clinical studies. Study 1 was designed for safety assessment in 64 overweight individuals with increased waist size.
The subjects consumed either one or two POMx capsules per day providing 710 mg (435 mg of gallic acid equivalents, GAEs) or 1420 mg (870 mg of GAEs) of extracts, respectively, and placebo (0 mg of GAEs). Safety laboratory determinations, including complete blood count (CBC), chemistry, and urinalysis, were made at each of three visits. Study 2 was designed for antioxidant activity assessment in 22 overweight subjects by administration of two POMx capsules per day providing 1000 mg (610 mg of GAEs) of extract versus baseline measurements. Measurement of antioxidant activity as evidenced by thiobarbituric acid reactive substances (TBARS) in plasma were measured before and after POMx supplementation. There was evidence of antioxidant activity through a significant reduction in TBARS linked with cardiovascular disease risk. There were no serious adverse events in any subject studied at either site. These studies demonstrate the safety of a pomegranate ellagitannin-enriched polyphenol dietary supplement in humans and provide evidence of antioxidant activity in humans.", "title": "" }, { "docid": "df055f6cc146f73f14ec26daedfdda5f", "text": "This report describes concept inventories, specialized assessment instruments that enable educational researchers to investigate student (mis)understandings of concepts in a particular domain. While students experience a concept inventory as a set of multiple-choice items taken as a test, this belies its purpose, its careful development, and its validation. A concept inventory is not intended to be a comprehensive instrument, but rather a tool that probes student comprehension of a carefully selected subset of concepts that give rise to the most common and pervasive mismodelings. The report explains how concept inventories have been developed and used in other STEM fields, then outlines a project to explore the feasibility of concept inventories in the computing field. We use the domain of discrete mathematics to illustrate a suggested plan of action.", "title": "" }, { "docid": "b1cad8dde7d9ceb1bb973fb323652d05", "text": "Sites for online classified ads selling sex are widely used by human traffickers to support their pernicious business. The sheer quantity of ads makes manual exploration and analysis unscalable. In addition, discerning whether an ad is advertising a trafficked victim or an independent sex worker is a very difficult task. Very little concrete ground truth (i.e., ads definitively known to be posted by a trafficker) exists in this space. In this work, we develop tools and techniques that can be used separately and in conjunction to group sex ads by their true owner (and not the claimed author in the ad). Specifically, we develop a machine learning classifier that uses stylometry to distinguish between ads posted by the same vs. different authors with 90% TPR and 1% FPR. We also design a linking technique that takes advantage of leakages from the Bitcoin mempool, blockchain and sex ad site, to link a subset of sex ads to Bitcoin public wallets and transactions. Finally, we demonstrate via a 4-week proof of concept using Backpage as the sex ad site, how an analyst can use these automated approaches to potentially find human traffickers.", "title": "" }, { "docid": "f0f17b4d7bf858e84ed12d0f5f309d4e", "text": "KEY CLINICAL MESSAGE\nPatient complained of hearing loss and tinnitus after the onset of Reiter's syndrome. 
Audiometry confirmed the hearing loss on the left ear; blood work showed increased erythrocyte sedimentation rate and C3 fraction of the complement. Genotyping for HLA-B27 was positive. Treatment with prednisolone did not improve the hearing levels.", "title": "" }, { "docid": "e033eddbc92ee813ffcc69724e55aa84", "text": "Over the past few years, weblogs have emerged as a new communication and publication medium on the Internet. In this paper, we describe the application of data mining, information extraction and NLP algorithms for discovering trends across our subset of approximately 100,000 weblogs. We publish daily lists of key persons, key phrases, and key paragraphs to a public web site, BlogPulse.com. In addition, we maintain a searchable index of weblog entries. On top of the search index, we have implemented trend search, which graphs the normalized trend line over time for a search query and provides a way to estimate the relative buzz of word of mouth for given topics over time.", "title": "" }, { "docid": "372b2aa9810ec12ebf033632cffd5739", "text": "A simple CFD tool, coupled to a discrete surface representation and a gradient-based optimization procedure, is applied to the design of optimal hull forms and optimal arrangement of hulls for a wave cancellation multihull ship. The CFD tool, which is used to estimate the wave drag, is based on the zeroth-order slender ship approximation. The hull surface is represented by a triangulation, and almost every grid point on the surface can be used as a design variable. A smooth surface is obtained via a simplified pseudo-shell problem. The optimal design process consists of two steps. The optimal center and outer hull forms are determined independently in the first step, where each hull keeps the same displacement as the original design while the wave drag is minimized. The optimal outer-hull arrangement is determined in the second step for the optimal center and outer hull forms obtained in the first step. Results indicate that the new design can achieve a large wave drag reduction in comparison to the original design configuration.", "title": "" }, { "docid": "92fb73e03b487d5fbda44e54cf59640d", "text": "The eyes and periocular area are the central aesthetic unit of the face. Facial aging is a dynamic process that involves skin, subcutaneous soft tissues, and bony structures. An understanding of what is perceived as youthful and beautiful is critical for success. Knowledge of the functional aspects of the eyelid and periocular area can identify pre-preoperative red flags.", "title": "" }, { "docid": "67db336c7de0cff2df34e265a219e838", "text": "Machine reading aims to automatically extract knowledge from text. It is a long-standing goal of AI and holds the promise of revolutionizing Web search and other fields. In this paper, we analyze the core challenges of machine reading and show that statistical relational AI is particularly well suited to address these challenges. We then propose a unifying approach to machine reading in which statistical relational AI plays a central role. 
Finally, we demonstrate the promise of this approach by presenting OntoUSP, an end-to-end machine reading system that builds on recent advances in statistical relational AI and greatly outperforms state-of-the-art systems in a task of extracting knowledge from biomedical abstracts and answering questions.", "title": "" }, { "docid": "d7065dccb396b0a47526fc14e0a9e796", "text": "A modified compact antipodal Vivaldi antenna is proposed with good performance for different applications including microwave and millimeter wave imaging. A step-by-step procedure is applied in this design including conventional antipodal Vivaldi antenna (AVA), AVA with a periodic slit edge, and AVA with a trapezoid-shaped dielectric lens to feature performances including wide bandwidth, small size, high gain, front-to-back ratio and directivity, modification on E-plane beam tilt, and small sidelobe levels. By adding a periodic slit edge at the outer brim of the antenna radiators, the lower-end limitation of the conventional AVA is extended twice without changing the overall dimensions of the antenna. The optimized antenna is fabricated and tested, and the results show that the S11 < -10 dB frequency band is from 3.4 to 40 GHz, in good agreement with the simulated one. Gain of the antenna has been elevated by the periodic slit edge and the trapezoid dielectric lens at lower frequencies up to 8 dB and at higher frequencies up to 15 dB, respectively. The E-plane beam tilts and sidelobe levels are reduced by the lens.", "title": "" }, { "docid": "17c12cc27cd66d0289fe3baa9ab4124d", "text": "In this paper we review classification algorithms used to design brain-computer interface (BCI) systems based on electroencephalography (EEG). We briefly present the commonly employed algorithms and describe their critical properties. Based on the literature, we compare them in terms of performance and provide guidelines to choose the suitable classification algorithm(s) for a specific BCI.", "title": "" }, { "docid": "cce0a40468b97ced3c501f3efd1a2170", "text": "Personality Assessment is an emerging research area. In recent years, the interest of the scientific community towards personality assessment has grown incredibly. Personality is a psychological model that can be used to explain the wide variety of human behaviors with the help of individual characteristics. The applications of personality assessment range from behavior analysis to disease diagnosis, counseling, employee recruitment, social network analysis, security systems, mood prediction, and many others. Automatic personality assessment consists of the automatic classification of users' personality traits from data such as video, speech and text. Despite the growing number of works in personality assessment, it is still very difficult to say what the current state-of-the-art is. The objective of this survey paper is to discuss various approaches used for personality assessment and to present the current state-of-the-art related to it. It also provides guidelines for further research.", "title": "" }, { "docid": "6aee06316a24005ee2f8f4f1906e2692", "text": "Sir, The origin of vestibular papillomatosis (VP) is controversial. VP describes the condition of multiple papillae that may cover the entire surface of the vestibule (1). Our literature search for vestibular papillomatosis revealed 13 reports in gynaecological journals and only one in a dermatological journal.
Furthermore, searching for vulvar squamous papillomatosis revealed 6 reports in gynaecological journals and again only one in a dermatological journal. We therefore conclude that it is worthwhile drawing the attention of dermatologists to this entity.", "title": "" }, { "docid": "4cc9083bd050969933367166c2245b05", "text": "Emotion regulation involves the pursuit of desired emotional states (i.e., emotion goals) in the service of superordinate motives. The nature and consequences of emotion regulation, therefore, are likely to depend on the motives it is intended to serve. Nonetheless, limited attention has been devoted to studying what motivates emotion regulation. By mapping the potential benefits of emotion to key human motives, this review identifies key classes of motives in emotion regulation. The proposed taxonomy distinguishes between hedonic motives that target the immediate phenomenology of emotions, and instrumental motives that target other potential benefits of emotions. Instrumental motives include behavioral, epistemic, social, and eudaimonic motives. The proposed taxonomy offers important implications for understanding the mechanism of emotion regulation, variation across individuals and contexts, and psychological function and dysfunction, and points to novel research directions.", "title": "" }, { "docid": "6e407c69a2f53cc0e6991bc6c2c065ab", "text": "Wireless sensor networks are becoming very popular technology, it is very important to understand the architecture for this kind of networks before deploying it in any application. This work explores the WSN architecture according to the OSI model with some protocols in order to achieve good background on the wireless sensor networks and help readers to find a summary for ideas, protocols and problems towards an appropriate design model for WSNs.", "title": "" }, { "docid": "394854761e27aa7baa6fa2eea60f347d", "text": "Our goal is to complement an entity ranking with human-readable explanations of how those retrieved entities are connected to the information need. Relation extraction technology should aid in finding such support passages, especially in combination with entities and query terms. This work explores how the current state of the art in unsupervised relation extraction (OpenIE) contributes to a solution for the task, assessing potential, limitations, and avenues for further investigation.", "title": "" } ]
scidocsrr
dc9a261d828010d52f43c088d5c3d5c4
Voltage and Power Balance Control for a Cascaded H-Bridge Converter-Based Solid-State Transformer
[ { "docid": "264fef3aa71df1f661f2b94461f9634c", "text": "This paper presents a new control method for cascaded connected H-bridge converter-based static compensators. These converters have classically been commutated at fundamental line frequencies, but the evolution of power semiconductors has allowed the increase of switching frequencies and power ratings of these devices, permitting the use of pulsewidth modulation techniques. This paper mainly focuses on dc-bus voltage balancing problems and proposes a new control technique (individual voltage balancing strategy), which solves these balancing problems, maintaining the delivered reactive power equally distributed among all the H-bridges of the converter.", "title": "" }, { "docid": "9f58c2c2a9675d868abb4e0a5a299def", "text": "This paper presents the design of new high-frequency transformer isolated bidirectional dc-dc converter modules connected in input-series-output-parallel (ISOP) for 20-kVA-solid-state transformer. The ISOP modular structure enables the use of low-voltage MOSFETs, featuring low on-state resistance and resulted conduction losses, to address medium-voltage input. A phase-shift dual-half-bridge (DHB) converter is employed to achieve high-frequency galvanic isolation, bidirectional power flow, and zero voltage switching (ZVS) of all switching devices, which leads to low switching losses even with high-frequency operation. Furthermore, an adaptive inductor is proposed as the main energy transfer element of a phase-shift DHB converter so that the circulating energy can be optimized to maintain ZVS at light load and minimize the conduction losses at heavy load as well. As a result, high efficiency over wide load range and high power density can be achieved. In addition, current stress of switching devices can be reduced. A planar transformer adopting printed-circuit-board windings arranged in an interleaved structure is designed to obtain low core and winding loss, solid isolation, and identical parameters in multiple modules. Moreover, the modular structure along with a distributed control provides plug-and-play capability and possible high-level fault tolerance. The experimental results on 1 kW DHB converter modules switching at 50 kHz are presented to validate the theoretical analysis.", "title": "" }, { "docid": "707b75a5fa5e796c18bcaf17cd43075d", "text": "This paper presents a new feedback control strategy for balancing individual DC capacitor voltages in a three-phase cascade multilevel inverter-based static synchronous compensator. The design of the control strategy is based on the detailed small-signal model. The key part of the proposed controller is a compensator to cancel the variation parts in the model. The controller can balance individual DC capacitor voltages when H-bridges run with different switching patterns and have parameter variations. It has two advantages: 1) the controller can work well in all operation modes (the capacitive mode, the inductive mode, and the standby mode) and 2) the impact of the individual DC voltage controller on the voltage quality is small. Simulation results and experimental results verify the performance of the controller.", "title": "" } ]
[ { "docid": "e805148f883204562e25a052d6b35505", "text": "In patients with chronic stroke, the primary motor cortex of the intact hemisphere (M1(intact hemisphere)) may influence functional recovery, possibly through transcallosal effects exerted over M1 in the lesioned hemisphere (M1(lesioned hemisphere)). Here, we studied interhemispheric inhibition (IHI) between M1(intact hemisphere) and M1(lesioned hemisphere) in the process of generation of a voluntary movement by the paretic hand in patients with chronic subcortical stroke and in healthy volunteers. IHI was evaluated in both hands preceding the onset of unilateral voluntary index finger movements (paretic hand in patients, right hand in controls) in a simple reaction time paradigm. IHI at rest and shortly after the Go signal were comparable in patients and controls. Closer to movement onset, IHI targeting the moving index finger turned into facilitation in controls but remained deep in patients, a finding that correlated with poor motor performance. These results document an abnormally high interhemispheric inhibitory drive from M1(intact hemisphere) to M1(lesioned hemisphere) in the process of generation of a voluntary movement by the paretic hand. It is conceivable that this abnormality could adversely influence motor recovery in some patients with subcortical stroke, an interpretation consistent with models of interhemispheric competition in motor and sensory systems.", "title": "" }, { "docid": "e0fb9500cb497e51f33d943ca663438a", "text": "For many years, researchers have searched for the factors affecting the use of computers in the classroom. In studying the antecedents of educational computer use, many studies adopt a rather limited view because only technology-related variables, such as attitudes to computers and computer experience, were taken into account. The present study centres on teachers' educational beliefs (constructivist beliefs, traditional beliefs) as antecedent of computer use, while controlling for the impact of technology-related variables (computer experience, general computer attitudes) and demographical variables (sex, age). In order to identify differences in determinants of computer use in the classroom, multilevel modelling was used (N = 525). For measuring primary teachers' use of computers to support the teaching or learning process, a modified version of the 'Class Use of Computers' scale of van Braak et al. [van Braak, J., Tondeur, J., & Valcke, M. (2004). Explaining different types of computer use among primary school teachers. European Journal of Psychology of Education, 19(4), 407–422] was used. The present article supports the hypothesis that teacher beliefs are significant determinants in explaining why teachers adopt computers in the classroom. Next to the impact of computer experience, general computer attitudes and gender, the results show a positive effect of constructivist beliefs on the classroom use of computers. Traditional beliefs have a negative impact on the classroom use of computers. 2008 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "23ceda789c34807a577ad683fdaaac38", "text": "This paper describes a generalisation of the unscented transformation (UT) which allows sigma points to be scaled to an arbitrary dimension. The UT is a method for predicting means and covariances in nonlinear systems. A set of samples are deterministically chosen which match the mean and covariance of a (not necessarily Gaussian-distributed) probability distribution.
These samples can be scaled by an arbitrary constant. The method guarantees second order accuracy in mean and covariance, giving the same performance as a second order truncated filter but without the need to calculate any Jacobians or Hessians. The impacts of scaling issues are illustrated by considering conversions from polar to Cartesian coordinates with large angular uncertainties.", "title": "" }, { "docid": "fb3002fff98d4645188910989638af69", "text": "Stress is important in substance use disorders (SUDs). Mindfulness training (MT) has shown promise for stress-related maladies. No studies have compared MT to empirically validated treatments for SUDs. The goals of this study were to assess MT compared to cognitive behavioral therapy (CBT) in substance use and treatment acceptability, and specificity of MT compared to CBT in targeting stress reactivity. Thirty-six individuals with alcohol and/or cocaine use disorders were randomly assigned to receive group MT or CBT in an outpatient setting. Drug use was assessed weekly. After treatment, responses to personalized stress provocation were measured. Fourteen individuals completed treatment. There were no differences in treatment satisfaction or drug use between groups. The laboratory paradigm suggested reduced psychological and physiological indices of stress during provocation in MT compared to CBT. This pilot study provides evidence of the feasibility of MT in treating SUDs and suggests that MT may be efficacious in targeting stress.", "title": "" }, { "docid": "95727de088955aff88366de2c0f57dfe", "text": "Current software for AI development requires the use of programming languages to develop intelligent agents. This can be disadvantageous for AI designers, as their work needs to be debugged and treated as a generic piece of software code. Moreover, such approaches are designed for experts; often requiring a steep initial learning curve, as they are tailored for programmers. This can also be disadvantageous for implementing transparency to agents, an important ethical consideration [1], [2], as additional work is needed to expose and represent information to end users. We are working towards the development of a new editor, ABOD3. It allows the graphical visualisation of Behaviour Oriented Design based plans [3], including its two major derivatives: Parallel-rooted, Ordered Slip-stack Hierarchical (POSH) and Instinct [4]. The new editor is designed to allow not only the development of reactive plans, but also to debug such plans in real time to reduce the time required to develop an agent. This allows the development and testing of plans from the same application.", "title": "" }, { "docid": "2382ab2b71be5dfbd1ba9fb4bf6536fc", "text": "A full-bridge converter which employs a coupled inductor to achieve zero-voltage switching of the primary switches in the entire line and load range is described. Because the coupled inductor does not appear as a series inductance in the load current path, it does not cause a loss of duty cycle or severe voltage ringing across the output rectifier. The operation and performance of the proposed converter is verified on a 670-W prototype.", "title": "" }, { "docid": "398effb89faa1ac819ee5ae489908ed1", "text": "There are many interpretations of quantum mechanics, and new ones continue to appear. The Many-Worlds Interpretation (MWI) introduced by Everett (1957) impresses me as the best candidate for the interpretation of quantum theory.
My belief is not based on a philosophical affinity for the idea of plurality of worlds as in Lewis (1986), but on a judgment that the physical difficulties of other interpretations are more serious. However, the scope of this paper does not allow a comparative analysis of all alternatives, and my main purpose here is to present my version of MWI, to explain why I believe it is true, and to answer some common criticisms of MWI. The MWI is not a theory about many objective “worlds”. A mathematical formalism by itself does not define the concept of a “world”. The “world” is a subjective concept of a sentient observer. All (subjective) worlds are incorporated in one objective Universe. I think, however, that the name Many-Worlds Interpretation does represent this theory fairly well. Indeed, according to MWI (and contrary to the standard approach) there are many worlds of the sort we call in everyday life “the world”. And although MWI is not just an interpretation of quantum theory – it differs from the standard quantum theory in certain experimental predictions – interpretation is an essential part of MWI; it explains the tremendous gap between what we experience as our world and what appears in the formalism of the quantum state of the Universe. Schrödinger’s equation (the basic equation of quantum theory) predicts very accurately the results of experiments performed on microscopic systems. I shall argue in what follows that it also implies the existence of many worlds. The purpose of addition of the collapse postulate, which represents the difference between MWI and the standard approach, is to escape the implications of Schrödinger’s equation for the existence of many worlds. Today’s technology does not allow us to test the existence of the “other” worlds. So only God or “superman” (i.e., a superintelligence equipped with supertechnology) can take full", "title": "" }, { "docid": "30260d1a4a936c79e6911e1e91c3a84a", "text": "Two recent approaches have achieved state-of-the-art results in image captioning. The first uses a pipelined process where a set of candidate words is generated by a convolutional neural network (CNN) trained on images, and then a maximum entropy (ME) language model is used to arrange these words into a coherent sentence. The second uses the penultimate activation layer of the CNN as input to a recurrent neural network (RNN) that then generates the caption sequence. In this paper, we compare the merits of these different language modeling approaches for the first time by using the same state-of-the-art CNN as input. We examine issues in the different approaches, including linguistic irregularities, caption repetition, and data set overlap. By combining key aspects of the ME and RNN methods, we achieve a new record performance over previously published results on the benchmark COCO dataset. However, the gains we see in BLEU do not translate to human judgments.", "title": "" }, { "docid": "74ef26e332b12329d8d83f80169de5c0", "text": "It has been claimed that the discovery of association rules is well-suited for applications of market basket analysis to reveal regularities in the purchase behaviour of customers. Moreover, recent work indicates that the discovery of interesting rules can in fact only be addressed within a microeconomic framework. This study integrates the discovery of frequent itemsets with a (microeconomic) model for product selection (PROFSET). The model enables the integration of both quantitative and qualitative (domain knowledge) criteria.
Sales transaction data from a fully-automated convenience store is used to demonstrate the effectiveness of the model against a heuristic for product selection based on product-specific profitability. We show that with the use of frequent itemsets we are able to identify the cross-sales potential of product items and use this information for better product selection. Furthermore, we demonstrate that the impact of product assortment decisions on overall assortment profitability can easily be evaluated by means of sensitivity analysis.", "title": "" }, { "docid": "9c417f20053930108fe6f3e10beafe8c", "text": "Artificial intelligence (AI) has been integrated into people's daily lives. However, the AI system's lack of understanding of human emotions impedes effective communication between the system and the human interactants. As an example, if a search engine understands the intent of the searcher, it should be able to return favorable results regardless of the correct queries. Emotions take a big part in communication as they allow humans to empathize and understand one another. An AI would also need to understand emotions for an effective interaction with humans. Affective computing research has surged recently in order to tackle this problem. Sentiment analysis, which can be thought of as a subset of affective computing, allows an AI system to understand a limited portion of human emotions, and has been used widely in systems that involve reviews to recommend related products, such as movies, electronics, or books. The problem with the traditional sentiment analysis is that it only uses the polarity label as a proxy to the full emotion, which makes it difficult for the system to make fine-grained judgments. There exists a model of emotion that represents an emotion as a point in the emotion space [26], a more detailed model than the one-dimension polarity model. The model consists of three independent axes: evaluation, potency, and activity in the EPA format [29], or equivalently, valence, dominance, and arousal in the VAD format [36]. We believe that by learning the three-dimension emotion model and extracting sentiment from it, we can predict the sentiment more precisely than the use of the typical polarity-based model. In this paper, we explore the validity of using the three-dimension emotion model as opposed to the naive polarity model in predicting the sentiment of given text data. We set the work of Tang et al. [34] as the baseline where they construct word embeddings with integrated sentiment information, called sentiment embeddings. As opposed to their approach of using the polarity label to guide the word embedding and the corresponding classifier, we use the three-dimension emotion model, namely the VAD vectors [36], and train emotion embeddings. In the experiment, a recently established corpus with emotion labels, EmoBank [5], is used along with a common corpus for sentiment, Stanford Sentiment Treebank (SST) [32], and a large text dataset, text8 [22]. We compare and contrast the prediction power between the sentiment embeddings and the emotion embeddings on EmoBank corpus, while checking their generality on the SST corpus. We also analyze emotion embedding itself by visualizing the embeddings using t-distributed Stochastic Neighbor Embedding (t-SNE) [20].
The visualization showed the lack of context generalization in emotion embedding,", "title": "" }, { "docid": "07a48f2e72f8c6ed99abfe407854b863", "text": "Systematic study of abnormal repetitive behaviors in autism has been lacking despite the diagnostic significance of such behavior. The occurrence of specific topographies of repetitive behaviors as well as their severity was assessed in individuals with mental retardation with and without autism. The occurrence of each behavior category, except dyskinesias, was higher in the autism group and autistic subjects exhibited a significantly greater number of topographies of stereotypy and compulsions. Both groups had significant patterns of repetitive behavior co-occurrence. Autistic subjects had significantly greater severity ratings for compulsions, stereotypy, and self-injury. Repetitive behavior severity also predicted severity of autism. Although abnormal repetition is not specific to autism, an elevated pattern of occurrence and severity appears to characterize the disorder.", "title": "" }, { "docid": "c01fe072e205f1ac92ba7f18b0759fbb", "text": "Millimeter wave (mmWave) technologies promise to revolutionize wireless networks by enabling multi-gigabit data rates. However, they suffer from high attenuation, and hence have to use highly directional antennas to focus their power on the receiver. Existing radios have to scan the space to find the best alignment between the transmitter’s and receiver’s beams, a process that takes up to a few seconds. This delay is problematic in a network setting where the base station needs to quickly switch between users and accommodate mobile clients.\n We present Agile-Link, the first mmWave beam steering system that is demonstrated to find the correct beam alignment without scanning the space. Instead of scanning, Agile- Link hashes the beam directions using a few carefully chosen hash functions. It then identifies the correct alignment by tracking how the energy changes across different hash functions. Our results show that Agile-Link reduces beam steering delay by orders of magnitude.", "title": "" }, { "docid": "e5bbf88eedf547551d28a731bd4ebed7", "text": "We conduct an empirical study to test the ability of convolutional neural networks (CNNs) to reduce the effects of nuisance transformations of the input data, such as location, scale and aspect ratio. We isolate factors by adopting a common convolutional architecture either deployed globally on the image to compute class posterior distributions, or restricted locally to compute class conditional distributions given location, scale and aspect ratios of bounding boxes determined by proposal heuristics. In theory, averaging the latter should yield inferior performance compared to proper marginalization. Yet empirical evidence suggests the converse, leading us to conclude that - at the current level of complexity of convolutional architectures and scale of the data sets used to train them - CNNs are not very effective at marginalizing nuisance variability. We also quantify the effects of context on the overall classification task and its impact on the performance of CNNs, and propose improved sampling techniques for heuristic proposal schemes that improve end-to-end performance to state-of-the-art levels. 
We test our hypothesis on a classification task using the ImageNet Challenge benchmark and on a wide-baseline matching task using the Oxford and Fischer's datasets.", "title": "" }, { "docid": "5e04372f08336da5b8ab4d41d69d3533", "text": "Purpose – This research aims at investigating the role of certain factors in organizational culture in the success of knowledge sharing. Such factors as interpersonal trust, communication between staff, information systems, rewards and organization structure play an important role in defining the relationships between staff and in turn, providing possibilities to break obstacles to knowledge sharing. This research is intended to contribute in helping businesses understand the essential role of organizational culture in nourishing knowledge and spreading it in order to become leaders in utilizing their know-how and enjoying prosperity thereafter. Design/methodology/approach – The conclusions of this study are based on interpreting the results of a survey and a number of interviews with staff from various organizations in Bahrain from the public and private sectors. Findings – The research findings indicate that trust, communication, information systems, rewards and organization structure are positively related to knowledge sharing in organizations. Research limitations/implications – The authors believe that further research is required to address governmental sector institutions, where organizational politics dominate a role in hoarding knowledge, through such methods as case studies and observation. Originality/value – Previous research indicated that the Bahraini society is influenced by traditions of household, tribe, and especially religion of the Arab and Islamic world. These factors define people’s beliefs and behaviours, and thus exercise strong influence in the performance of business organizations. This study is motivated by the desire to explore the role of the national organizational culture on knowledge sharing, which may be different from previous studies conducted abroad.", "title": "" }, { "docid": "6b2ef609c474b015b21e903e953efdb9", "text": "This paper reviews applications of the lattice-Boltzmann method to simulations of particle-fluid suspensions. We first summarize the available simulation methods for colloidal suspensions together with some of the important applications of these methods, and then describe results from lattice-gas and latticeBoltzmann simulations in more detail. The remainder of the paper is an update of previously published work, (69, 70) taking into account recent research by ourselves and other groups. We describe a lattice-Boltzmann model that can take proper account of density fluctuations in the fluid, which may be important in describing the short-time dynamics of colloidal particles. We then derive macrodynamical equations for a collision operator with separate shear and bulk viscosities, via the usual multi-time-scale expansion. A careful examination of the second-order equations shows that inclusion of an external force, such as a pressure gradient, requires terms that depend on the eigenvalues of the collision operator. Alternatively, the momentum density must be redefined to include a contribution from the external force. Next, we summarize recent innovations and give a few numerical examples to illustrate critical issues. Finally, we derive the equations for a lattice-Boltzmann model that includes transverse and longitudinal fluctuations in momentum. 
The model leads to a discrete version of the Green–Kubo relations for the shear and bulk viscosity, which agree with the viscosities obtained from the macro-dynamical analysis. We believe that inclusion of longitudinal fluctuations will improve the equipartition of energy in lattice-Boltzmann simulations of colloidal suspensions.", "title": "" }, { "docid": "9292601d14f70925920d3b2ab06a39ce", "text": "Internet review sites allow consumers to write detailed reviews of products potentially containing information related to user experience (UX) and usability. Using 5198 sentences from 3492 online reviews of software and video games, we investigate the content of online reviews with the aims of (i) charting the distribution of information in reviews among different dimensions of usability and UX, and (ii) extracting an associated vocabulary for each dimension using techniques from natural language processing and machine learning. We (a) find that 13%-49% of sentences in our online reviews pool contain usability or UX information; (b) chart the distribution of four sets of dimensions of usability and UX across reviews from two product categories; (c) extract a catalogue of important word stems for a number of dimensions. Our results suggest that a greater understanding of users' preoccupation with different dimensions of usability and UX may be inferred from the large volume of self-reported experiences online, and that research focused on identifying pertinent dimensions of usability and UX may benefit further from empirical studies of user-generated experience reports.", "title": "" }, { "docid": "c9d95b3656c703f4ce49c591a3f0a00f", "text": "Due to cellular heterogeneity, cell nuclei classification, segmentation, and detection from pathological images are challenging tasks. In the last few years, Deep Convolutional Neural Networks (DCNN) approaches have been shown state-of-the-art (SOTA) performance on histopathological imaging in different studies. In this work, we have proposed different advanced DCNN models and evaluated for nuclei classification, segmentation, and detection. First, the Densely Connected Recurrent Convolutional Network (DCRN) model is used for nuclei classification. Second, Recurrent Residual U-Net (R2U-Net) is applied for nuclei segmentation. Third, the R2U-Net regression model which is named UD-Net is used for nuclei detection from pathological images. The experiments are conducted with different datasets including Routine Colon Cancer(RCC) classification and detection dataset, and Nuclei Segmentation Challenge 2018 dataset. The experimental results show that the proposed DCNN models provide superior performance compared to the existing approaches for nuclei classification, segmentation, and detection tasks. The results are evaluated with different performance metrics including precision, recall, Dice Coefficient (DC), Means Squared Errors (MSE), F1-score, and overall accuracy. We have achieved around 3.4% and 4.5% better F-1 score for nuclei classification and detection tasks compared to recently published DCNN based method. In addition, R2U-Net shows around 92.15% testing accuracy in term of DC. 
These improved methods will help for pathological practices for better quantitative analysis of nuclei in Whole Slide Images(WSI) which ultimately will help for better understanding of different types of cancer in clinical workflow.", "title": "" }, { "docid": "a56a3592d704c917d5e8452eabb74cb0", "text": "Current text-to-speech synthesis (TTS) systems are often perceived as lacking expressiveness, limiting the ability to fully convey information. This paper describes initial investigations into improving expressiveness for statistical speech synthesis systems. Rather than using hand-crafted definitions of expressive classes, an unsupervised clustering approach is described which is scalable to large quantities of training data. To incorporate this “expression cluster” information into an HMM-TTS system two approaches are described: cluster questions in the decision tree construction; and average expression speech synthesis (AESS) using cluster-based linear transform adaptation. The performance of the approaches was evaluated on audiobook data in which the reader exhibits a wide range of expressiveness. A subjective listening test showed that synthesising with AESS results in speech that better reflects the expressiveness of human speech than a baseline expression-independent system.", "title": "" }, { "docid": "bfcef77dedf22118700737904be13c0e", "text": "Autonomous operation is becoming an increasingly important factor for UAVs. It enables a vehicle to decide on the most appropriate action under consideration of the current vehicle and environment state. We investigated the decision-making process using the cognitive agent-based architecture Soar, which uses techniques adapted from human decision-making. Based on Soar an agent was developed which enables UAVs to autonomously make decisions and interact with a dynamic environment. One or more UAV agents were then tested in a simulation environment which has been developed using agent-based modelling. By simulating a dynamic environment, the capabilities of a UAV agent can be tested under defined conditions and additionally its behaviour can be visualised. The agent’s abilities were demonstrated using a scenario consisting of a highly dynamic border-surveillance mission with multiple autonomous UAVs. We can show that the autonomous agents are able to execute the mission successfully and can react adaptively to unforeseen events. We conclude that using a cognitive architecture is a promising approach for modelling autonomous behaviour.", "title": "" }, { "docid": "2cc36985606c3d82b230165a8f025228", "text": "This paper is aimed at designing a congestion control system that scales gracefully with network capacity, providing high utilization, low queueing delay, dynamic stability, and fairness among users. In earlier work we had developed fluid-level control laws that achieve the first three objectives for arbitrary networks and delays, but were forced to constrain the resource allocation policy. In this paper we extend the theory to include dynamics at TCP sources, preserving the earlier features at fast time-scales, but permitting sources to match their steady-state preferences, provided a bound on round-trip-times is known. We develop two packet-level implementations of this protocol, using (i) ECN marking, and (ii) queueing delay, as means of communicating the congestion measure from links to sources. 
We discuss parameter choices and demonstrate using ns-2 simulations the stability of the protocol and its equilibrium features in terms of utilization, queueing and fairness. We also demonstrate the scalability of these features to increases in capacity, delay, and load, in comparison with other deployed and proposed protocols.", "title": "" } ]
scidocsrr
9eeac0fa8aacf08b2adf89d5eacb302c
Information Hiding Techniques: A Tutorial Review
[ { "docid": "efc1a6efe55805609ffc5c0fb6e3115b", "text": "A Note to All Readers This is not an original electronic copy of the master's thesis, but a reproduced version of the authentic hardcopy of the thesis. I lost the original electronic copy during transit from India to USA in December 1999. I could get hold of some of the older version of the files and figures. Some of the missing figures have been scanned from the photocopy version of the hardcopy of the thesis. The scanned figures have been earmarked with an asterisk. Acknowledgement I would like to profusely thank my guide Prof. K. R. Ramakrishnan for his timely advice and encouragement throughout my project work. I would also like to acknowledge Prof. M. Kankanhalli for reviewing my work from time to time. A special note of gratitude goes to Dr. S. H. Srinivas for the support he extended to this work. I would also like to thank all who helped me during my project work.", "title": "" } ]
[ { "docid": "a0c1f5a7e283e1deaff38edff2d8a3b2", "text": "BACKGROUND\nEarly detection of abused children could help decrease mortality and morbidity related to this major public health problem. Several authors have proposed tools to screen for child maltreatment. The aim of this systematic review was to examine the evidence on accuracy of tools proposed to identify abused children before their death and assess if any were adapted to screening.\n\n\nMETHODS\nWe searched in PUBMED, PsycINFO, SCOPUS, FRANCIS and PASCAL for studies estimating diagnostic accuracy of tools identifying neglect, or physical, psychological or sexual abuse of children, published in English or French from 1961 to April 2012. We extracted selected information about study design, patient populations, assessment methods, and the accuracy parameters. Study quality was assessed using QUADAS criteria.\n\n\nRESULTS\nA total of 2 280 articles were identified. Thirteen studies were selected, of which seven dealt with physical abuse, four with sexual abuse, one with emotional abuse, and one with any abuse and physical neglect. Study quality was low, even when not considering the lack of gold standard for detection of abused children. In 11 studies, instruments identified abused children only when they had clinical symptoms. Sensitivity of tests varied between 0.26 (95% confidence interval [0.17-0.36]) and 0.97 [0.84-1], and specificity between 0.51 [0.39-0.63] and 1 [0.95-1]. The sensitivity was greater than 90% only for three tests: the absence of scalp swelling to identify children victims of inflicted head injury; a decision tool to identify physically-abused children among those hospitalized in a Pediatric Intensive Care Unit; and a parental interview integrating twelve child symptoms to identify sexually-abused children. When the sensitivity was high, the specificity was always smaller than 90%.\n\n\nCONCLUSIONS\nIn 2012, there is low-quality evidence on the accuracy of instruments for identifying abused children. Identified tools were not adapted to screening because of low sensitivity and late identification of abused children when they have already serious consequences of maltreatment. Development of valid screening instruments is a pre-requisite before considering screening programs.", "title": "" }, { "docid": "c6b6b7c1955cafa70c4a0c2498591934", "text": "In all Fitzgerald’s fiction women characters are decorative figures of seemingly fragile beauty, though in fact they are often vain, egoistical, even destructive and ruthless and thus frequently the survivors. As prime consumers, they are never capable of idealism or intellectual or artistic interests, nor do they experience passion. His last novel, The Last Tycoon, shows some development; for the first time the narrator is a young woman bent on trying to find the truth about the ruthless social and economic complexity of 1920s Hollywood, but she has no adult role to play in its sexual, artistic or political activities. Women characters are marginalized into purely personal areas of experience.", "title": "" }, { "docid": "b3166dafafda819052f1d40ef04cc304", "text": "Convolutional neural networks (CNNs) have been widely deployed in the fields of computer vision and pattern recognition because of their high accuracy. However, large convolution operations are computing intensive and often require a powerful computing platform such as a graphics processing unit. This makes it difficult to apply CNNs to portable devices. 
The state-of-the-art CNNs, such as MobileNetV2 and Xception, adopt depthwise separable convolution to replace the standard convolution for embedded platforms, which significantly reduces operations and parameters with only limited loss in accuracy. This highly structured model is very suitable for field-programmable gate array (FPGA) implementation. In this brief, a scalable high performance depthwise separable convolution optimized CNN accelerator is proposed. The accelerator can be fit into an FPGA of different sizes, provided the balancing between hardware resources and processing speed. As an example, MobileNetV2 is implemented on Arria 10 SoC FPGA, and the results show this accelerator can classify each picture from ImageNet in 3.75 ms, which is about 266.6 frames per second. The FPGA design achieves 20x speedup if compared to CPU.", "title": "" }, { "docid": "5454fbb1a924f3360a338c11a88bea89", "text": "PURPOSE OF REVIEW\nThis review describes the most common motor neuron disease, ALS. It discusses the diagnosis and evaluation of ALS and the current understanding of its pathophysiology, including new genetic underpinnings of the disease. This article also covers other motor neuron diseases, reviews how to distinguish them from ALS, and discusses their pathophysiology.\n\n\nRECENT FINDINGS\nIn this article, the spectrum of cognitive involvement in ALS, new concepts about protein synthesis pathology in the etiology of ALS, and new genetic associations will be covered. This concept has changed over the past 3 to 4 years with the discovery of new genes and genetic processes that may trigger the disease. As of 2014, two-thirds of familial ALS and 10% of sporadic ALS can be explained by genetics. TAR DNA binding protein 43 kDa (TDP-43), for instance, has been shown to cause frontotemporal dementia as well as some cases of familial ALS, and is associated with frontotemporal dysfunction in ALS.\n\n\nSUMMARY\nThe anterior horn cells control all voluntary movement: motor activity, respiratory, speech, and swallowing functions are dependent upon signals from the anterior horn cells. Diseases that damage the anterior horn cells, therefore, have a profound impact. Symptoms of anterior horn cell loss (weakness, falling, choking) lead patients to seek medical attention. Neurologists are the most likely practitioners to recognize and diagnose damage or loss of anterior horn cells. ALS, the prototypical motor neuron disease, demonstrates the impact of this class of disorders. ALS and other motor neuron diseases can represent diagnostic challenges. Neurologists are often called upon to serve as a \"medical home\" for these patients: coordinating care, arranging for durable medical equipment, and leading discussions about end-of-life care with patients and caregivers. It is important for neurologists to be able to identify motor neuron diseases and to evaluate and treat patients affected by them.", "title": "" }, { "docid": "1bd9cedbbbd26d670dd718fe47c952e7", "text": "Recent advances in conversational systems have changed the search paradigm. Traditionally, a user poses a query to a search engine that returns an answer based on its index, possibly leveraging external knowledge bases and conditioning the response on earlier interactions in the search session. 
In a natural conversation, there is an additional source of information to take into account: utterances produced earlier in a conversation can also be referred to and a conversational IR system has to keep track of information conveyed by the user during the conversation, even if it is implicit. We argue that the process of building a representation of the conversation can be framed as a machine reading task, where an automated system is presented with a number of statements about which it should answer questions. The questions should be answered solely by referring to the statements provided, without consulting external knowledge. The time is right for the information retrieval community to embrace this task, both as a stand-alone task and integrated in a broader conversational search setting. In this paper, we focus on machine reading as a stand-alone task and present the Attentive Memory Network (AMN), an end-to-end trainable machine reading algorithm. Its key contribution is in efficiency, achieved by having an hierarchical input encoder, iterating over the input only once. Speed is an important requirement in the setting of conversational search, as gaps between conversational turns have a detrimental effect on naturalness. On 20 datasets commonly used for evaluating machine reading algorithms we show that the AMN achieves performance comparable to the state-of-theart models, while using considerably fewer computations.", "title": "" }, { "docid": "9c37d9388908cd15c2e4d639de686371", "text": "In this paper, novel small-signal averaged models for dc-dc converters operating at variable switching frequency are derived. This is achieved by separately considering the on-time and the off-time of the switching period. The derivation is shown in detail for a synchronous buck converter and the model for a boost converter is also presented. The model for the buck converter is then used for the design of two digital feedback controllers, which exploit the additional insight in the converter dynamics. First, a digital multiloop PID controller is implemented, where the design is based on loop-shaping of the proposed frequency-domain transfer functions. And second, the design and the implementation of a digital LQG state-feedback controller, based on the proposed time-domain state-space model, is presented for the same converter topology. Experimental results are given for the digital multiloop PID controller integrated on an application-specified integrated circuit in a 0.13 μm CMOS technology, as well as for the state-feedback controller implemented on an FPGA. Tight output voltage regulation and an excellent dynamic performance is achieved, as the dynamics of the converter under variable frequency operation are considered during the design of both implementations.", "title": "" }, { "docid": "0f71e64aaf081b6624f442cb95b2220c", "text": "Objective\nElectronic health record (EHR)-based phenotyping infers whether a patient has a disease based on the information in his or her EHR. A human-annotated training set with gold-standard disease status labels is usually required to build an algorithm for phenotyping based on a set of predictive features. The time intensiveness of annotation and feature curation severely limits the ability to achieve high-throughput phenotyping. While previous studies have successfully automated feature curation, annotation remains a major bottleneck. 
In this paper, we present PheNorm, a phenotyping algorithm that does not require expert-labeled samples for training.\n\n\nMethods\nThe most predictive features, such as the number of International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) codes or mentions of the target phenotype, are normalized to resemble a normal mixture distribution with high area under the receiver operating curve (AUC) for prediction. The transformed features are then denoised and combined into a score for accurate disease classification.\n\n\nResults\nWe validated the accuracy of PheNorm with 4 phenotypes: coronary artery disease, rheumatoid arthritis, Crohn's disease, and ulcerative colitis. The AUCs of the PheNorm score reached 0.90, 0.94, 0.95, and 0.94 for the 4 phenotypes, respectively, which were comparable to the accuracy of supervised algorithms trained with sample sizes of 100-300, with no statistically significant difference.\n\n\nConclusion\nThe accuracy of the PheNorm algorithms is on par with algorithms trained with annotated samples. PheNorm fully automates the generation of accurate phenotyping algorithms and demonstrates the capacity for EHR-driven annotations to scale to the next level - phenotypic big data.", "title": "" }, { "docid": "4a86a0707e6ac99766f89e81cccc5847", "text": "Magnetic core loss is an emerging concern for integrated POL converters. As switching frequency increases, core loss is comparable to or even higher than winding loss. Accurate measurement of core loss is important for magnetic design and converter loss estimation. And exploring new high frequency magnetic materials need a reliable method to evaluate their losses. However, conventional method is limited to low frequency due to sensitivity to phase discrepancy. In this paper, a new method is proposed for high frequency (1MHz∼50MHz) core loss measurement. The new method reduces the phase induced error from over 100% to <5%. So with the proposed methods, the core loss can be accurately measured.", "title": "" }, { "docid": "b5d54f10aebd99d898dfb52d75e468e6", "text": "As the technology to secure information improves, hackers will employ less technical means to get access to unauthorized data. The use of Social Engineering as a non tech method of hacking has been increasingly used during the past few years. There are different types of social engineering methods reported but what is lacking is a unifying effort to understand these methods in the aggregate. This paper aims to classify these methods through taxonomy so that organizations can gain a better understanding of these attack methods and accordingly be vigilant against them.", "title": "" }, { "docid": "3a9bba31f77f4026490d7a0faf4aeaa4", "text": "We explore several different document representation models and two query expansion models for the task of recommending blogs to a user in response to a query. Blog relevance ranking differs from traditional document ranking in ad-hoc information retrieval in several ways: (1) the unit of output (the blog) is composed of a collection of documents (the blog posts) rather than a single document, (2) the query represents an ongoing – and typically multifaceted – interest in the topic rather than a passing ad-hoc information need and (3) due to the propensity of spam, splogs, and tangential comments, the blogosphere is particularly challenging to use as a source for high-quality query expansion terms. 
We address these differences at the document representation level, by comparing retrieval models that view either the blog or its constituent posts as the atomic units of retrieval, and at the query expansion level, by making novel use of the links and anchor text in Wikipedia to expand a user’s initial query. We develop two complementary models of blog retrieval that perform at comparable levels of precision and recall. We also show consistent and significant improvement across all models using our Wikipedia expansion strategy.", "title": "" }, { "docid": "26deedfae0fd167d35df79f28c75e09c", "text": "In content-based image retrieval, SIFT feature and the feature from deep convolutional neural network (CNN) have demonstrated promising performance. To fully explore both visual features in a unified framework for effective and efficient retrieval, we propose a collaborative index embedding method to implicitly integrate the index matrices of them. We formulate the index embedding as an optimization problem from the perspective of neighborhood sharing and solve it with an alternating index update scheme. After the iterative embedding, only the embedded CNN index is kept for on-line query, which demonstrates significant gain in retrieval accuracy, with very economical memory cost. Extensive experiments have been conducted on the public datasets with million-scale distractor images. The experimental results reveal that, compared with the recent state-of-the-art retrieval algorithms, our approach achieves competitive accuracy performance with less memory overhead and efficient query computation.", "title": "" }, { "docid": "710febdd18f40c9fc82f8a28039362cc", "text": "The paper deals with engineering an electric wheelchair from a common wheelchair and then developing a Brain Computer Interface (BCI) between the electric wheelchair and the human brain. A portable EEG headset and firmware signal processing together facilitate the movement of the wheelchair integrating mind activity and frequency of eye blinks of the patient sitting on the wheelchair with the help of Microcontroller Unit (MCU). The target population for the mind controlled wheelchair is the patients who are paralyzed below the neck and are unable to use conventional wheelchair interfaces. This project aims at creating a cost efficient solution, later intended to be distributed as an add-on conversion unit for a common manual wheelchair. A Neurosky mind wave headset is used to pick up EEG signals from the brain. This is a commercialized version of the Open-EEG Project. The signal obtained from EEG sensor is processed by the ARM microcontroller FRDM KL-25Z, a Freescale board. The microcontroller takes decision for determining the direction of motion of wheelchair based on floor detection and obstacle avoidance sensors mounted on wheelchair’s footplate. The MCU shows real time information on a color LCD interfaced to it. Joystick control of the wheelchair is also provided as an additional interface option that can be chosen from the menu system of the project.", "title": "" }, { "docid": "49f42fd1e0b684f24714bd9c1494fe4a", "text": "We propose a transition-based model for joint word segmentation, POS tagging and text normalization. Different from previous methods, the model can be trained on standard text corpora, overcoming the lack of annotated microblog corpora. To evaluate our model, we develop an annotated corpus based on microblogs. 
Experimental results show that our joint model can help improve the performance of word segmentation on microblogs, giving an error reduction in segmentation accuracy of 12.02%, compared to the traditional approach.", "title": "" }, { "docid": "9071d7349dccb07a5c3f93075e8d9458", "text": "AIM\nA discussion on how nurse leaders are using social media and developing digital leadership in online communities.\n\n\nBACKGROUND\nSocial media is relatively new and how it is used by nurse leaders and nurses in a digital space is under explored.\n\n\nDESIGN\nDiscussion paper.\n\n\nDATA SOURCES\nSearches used CINAHL, the Royal College of Nursing webpages, Wordpress (for blogs) and Twitter from 2000-2015. Search terms used were Nursing leadership + Nursing social media.\n\n\nIMPLICATIONS FOR NURSING\nUnderstanding the development and value of nursing leadership in social media is important for nurses in formal and informal (online) leadership positions. Nurses in formal leadership roles in organizations such as the National Health Service are beginning to leverage social media. Social media has the potential to become a tool for modern nurse leadership, as it is a space where can you listen on a micro level to each individual. In addition to listening, leadership can be achieved on a much larger scale through the use of social media monitoring tools and exploration of data and crowd sourcing. Through the use of data and social media listening tools nursing leaders can seek understanding and insight into a variety of issues. Social media also places nurse leaders in a visible and accessible position as role models.\n\n\nCONCLUSION\nSocial media and formal nursing leadership do not have to be against each other, but they can work in harmony as both formal and online leadership possess skills that are transferable. If used wisely social media has the potential to become a tool for modern nurse leadership.", "title": "" }, { "docid": "5876bb91b0cbe851b8af677c93c5e708", "text": "This paper proposes an effective end-to-end face detection and recognition framework based on deep convolutional neural networks for home service robots. We combine the state-of-the-art region proposal based deep detection network with the deep face embedding network into an end-to-end system, so that the detection and recognition networks can share the same deep convolutional layers, enabling significant reduction of computation through sharing convolutional features. The detection network is robust to large occlusion, and scale, pose, and lighting variations. The recognition network does not require explicit face alignment, which enables an effective training strategy to generate a unified network. A practical robot system is also developed based on the proposed framework, where the system automatically asks for a minimum level of human supervision when needed, and no complicated region-level face annotation is required. Experiments are conducted over WIDER and LFW benchmarks, as well as a personalized dataset collected from an office setting, which demonstrate state-of-the-art performance of our system.", "title": "" }, { "docid": "0528bc602b9a48e30fbce70da345c0ee", "text": "The power system is a dynamic system and it is constantly being subjected to disturbances. It is important that these disturbances do not drive the system to unstable conditions. For this purpose, additional signal derived from deviation, excitation deviation and accelerating power are injected into voltage regulators. 
The device to provide these signals is referred as power system stabilizer. The use of power system stabilizer has become very common in operation of large electric power systems. The conventional PSS which uses lead-lag compensation, where gain setting designed for specific operating conditions, is giving poor performance under different loading conditions. Therefore, it is very difficult to design a stabilizer that could present good performance in all operating points of electric power systems. In an attempt to cover a wide range of operating conditions, Fuzzy logic control has been suggested as a possible solution to overcome this problem, thereby using linguist information and avoiding a complex system mathematical model, while giving good performance under different operating conditions.", "title": "" }, { "docid": "6d56e0db0ebdfe58152cb0faa73453c4", "text": "Chatbot is a computer application that interacts with users using natural language in a similar way to imitate a human travel agent. A successful implementation of a chatbot system can analyze user preferences and predict collective intelligence. In most cases, it can provide better user-centric recommendations. Hence, the chatbot is becoming an integral part of the future consumer services. This paper is an implementation of an intelligent chatbot system in travel domain on Echo platform which would gather user preferences and model collective user knowledge base and recommend using the Restricted Boltzmann Machine (RBM) with Collaborative Filtering. With this chatbot based on DNN, we can improve human to machine interaction in the travel domain.", "title": "" }, { "docid": "81780f32d64eb7c5e3662268f48a67ec", "text": "Mobile ad hoc network (MANET) is a group of mobile nodes which communicates with each other without any supporting infrastructure. Routing in MANET is extremely challenging because of MANETs dynamic features, its limited bandwidth and power energy. Nature-inspired algorithms (swarm intelligence) such as ant colony optimization (ACO) algorithms have shown to be a good technique for developing routing algorithms for MANETs. Swarm intelligence is a computational intelligence technique that involves collective behavior of autonomous agents that locally interact with each other in a distributed environment to solve a given problem in the hope of finding a global solution to the problem. In this paper, we propose a hybrid routing algorithm for MANETs based on ACO and zone routing framework of bordercasting. The algorithm, HOPNET, based on ants hopping from one zone to the next, consists of the local proactive route discovery within a node’s neighborhood and reactive communication between the neighborhoods. The algorithm has features extracted from ZRP and DSR protocols and is simulated on GlomoSim and is compared to AODV routing protocol. The algorithm is also compared to the well known hybrid routing algorithm, AntHocNet, which is not based on zone routing framework. Results indicate that HOPNET is highly scalable for large networks compared to AntHocNet. The results also indicate that the selection of the zone radius has considerable impact on the delivery packet ratio and HOPNET performs significantly better than AntHocNet for high and low mobility. The algorithm has been compared to random way point model and random drunken model and the results show the efficiency and inefficiency of bordercasting. 
Finally, HOPNET is compared to ZRP and the strength of nature-inspired algorithm", "title": "" }, { "docid": "404a662b55baea9402d449fae6192424", "text": "Emotion is expressed in multiple modalities, yet most research has considered at most one or two. This stems in part from the lack of large, diverse, well-annotated, multimodal databases with which to develop and test algorithms. We present a well-annotated, multimodal, multidimensional spontaneous emotion corpus of 140 participants. Emotion inductions were highly varied. Data were acquired from a variety of sensors of the face that included high-resolution 3D dynamic imaging, high-resolution 2D video, and thermal (infrared) sensing, and contact physiological sensors that included electrical conductivity of the skin, respiration, blood pressure, and heart rate. Facial expression was annotated for both the occurrence and intensity of facial action units from 2D video by experts in the Facial Action Coding System (FACS). The corpus further includes derived features from 3D, 2D, and IR (infrared) sensors and baseline results for facial expression and action unit detection. The entire corpus will be made available to the research community.", "title": "" }, { "docid": "456fd41267a82663fee901b111ff7d47", "text": "The tagging of Named Entities, the names of particular things or classes, is regarded as an important component technology for many NLP applications. The first Named Entity set had 7 types, organization, location, person, date, time, money and percent expressions. Later, in the IREX project artifact was added and ACE added two, GPE and facility, to pursue the generalization of the technology. However, 7 or 8 kinds of NE are not broad enough to cover general applications. We proposed about 150 categories of NE (Sekine et al. 2002) and now we have extended it again to 200 categories. Also we have developed dictionaries and an automatic tagger for NEs in Japanese.", "title": "" } ]
scidocsrr
7e380c297fa3bd050b8775eb5853f45a
Addressing vital sign alarm fatigue using personalized alarm thresholds
[ { "docid": "913b3e09f6b12744a8044d95a67d8dc7", "text": "Research has demonstrated that 72% to 99% of clinical alarms are false. The high number of false alarms has led to alarm fatigue. Alarm fatigue is sensory overload when clinicians are exposed to an excessive number of alarms, which can result in desensitization to alarms and missed alarms. Patient deaths have been attributed to alarm fatigue. Patient safety and regulatory agencies have focused on the issue of alarm fatigue, and it is a 2014 Joint Commission National Patient Safety Goal. Quality improvement projects have demonstrated that strategies such as daily electrocardiogram electrode changes, proper skin preparation, education, and customization of alarm parameters have been able to decrease the number of false alarms. These and other strategies need to be tested in rigorous clinical trials to determine whether they reduce alarm burden without compromising patient safety.", "title": "" } ]
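To make the query's notion of personalized alarm thresholds concrete, here is a minimal sketch of one way per-patient limits could be derived from that patient's own recent measurements. The percentile choices, padding factor, and function names are illustrative assumptions and are not taken from the passage above.

```python
import numpy as np

def personalized_thresholds(history, low_q=1.0, high_q=99.0, pad=0.05):
    """Derive per-patient alarm limits from recent samples of one vital sign.

    Note: percentile cut-offs and padding are illustrative assumptions,
    not values from the cited study.
    """
    history = np.asarray(history, dtype=float)
    lo, hi = np.percentile(history, [low_q, high_q])
    width = hi - lo
    # Widen the band slightly to reduce nuisance alarms near the baseline edges.
    return lo - pad * width, hi + pad * width

def should_alarm(value, limits):
    lo, hi = limits
    return value < lo or value > hi

# Synthetic heart-rate history (one sample per minute over 24 hours).
hr_history = np.random.default_rng(0).normal(88, 6, size=1440)
limits = personalized_thresholds(hr_history)
print(limits, should_alarm(132.0, limits))
```

In practice each vital sign (heart rate, SpO2, respiration) would keep its own history and percentile settings, and fixed clinical limits would still bound the personalized band.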
[ { "docid": "f7d535f9a5eeae77defe41318d642403", "text": "On-line learning in domains where the target concept depends on some hidden context poses serious problems. A changing context can induce changes in the target concepts, producing what is known as concept drift. We describe a family of learning algorithms that flexibly react to concept drift and can take advantage of situations where contexts reappear. The general approach underlying all these algorithms consists of (1) keeping only a window of currently trusted examples and hypotheses; (2) storing concept descriptions and re-using them when a previous context re-appears; and (3) controlling both of these functions by a heuristic that constantly monitors the system's behavior. The paper reports on experiments that test the systems' performance under various conditions such as different levels of noise and different extent and rate of concept drift.", "title": "" }, { "docid": "596949afaabdbcc68cd8bda175400f30", "text": "We propose improved Deep Neural Network (DNN) training loss functions for more accurate single keyword spotting on resource-constrained embedded devices. The loss function modifications consist of a combination of multi-task training and weighted cross entropy. In the multi-task architecture, the keyword DNN acoustic model is trained with two tasks in parallel the main task of predicting the keyword-specific phone states, and an auxiliary task of predicting LVCSR senones. We show that multi-task learning leads to comparable accuracy over a previously proposed transfer learning approach where the keyword DNN training is initialized by an LVCSR DNN of the same input and hidden layer sizes. The combination of LVCSRinitialization and Multi-task training gives improved keyword detection accuracy compared to either technique alone. We also propose modifying the loss function to give a higher weight on input frames corresponding to keyword phone targets, with a motivation to balance the keyword and background training data. We show that weighted cross-entropy results in additional accuracy improvements. Finally, we show that the combination of 3 techniques LVCSR-initialization, multi-task training and weighted cross-entropy gives the best results, with significantly lower False Alarm Rate than the LVCSR-initialization technique alone, across a wide range of Miss Rates.", "title": "" }, { "docid": "78e8d8b0508e011f5dc0e63fa1f0a1ee", "text": "This paper proposes chordal surface transform for representation and discretization of thin section solids, such as automobile bodies, plastic injection mold components and sheet metal parts. A multiple-layered all-hex mesh with a high aspect ratio is a typical requirement for mold flow simulation of thin section objects. The chordal surface transform reduces the problem of 3D hex meshing to 2D quad meshing on the chordal surface. The chordal surface is generated by cutting a tet mesh of the input CAD model at its mid plane. Radius function and curvature of the chordal surface are used to provide sizing function for quad meshing. Two-way mapping between the chordal surface and the boundary is used to sweep the quad elements from the chordal surface onto the boundary, resulting in a layered all-hex mesh. The algorithm has been tested on industrial models, whose chordal surface is 2-manifold. The graphical results of the chordal surface and the multiple-layered all-hex mesh are presented along with the quality measures. 
The results show a geometrically adaptive, high-aspect-ratio all-hex mesh whose average scaled Jacobian is close to 1.0.", "title": "" }, { "docid": "ea048488791219be809072862a061444", "text": "Our object-oriented programming approach has a great ability to improve programming for modern system and software engineering, but it does not properly capture interactions in the real world. In the real world, programming requires powerful interlinking among the properties and characteristics of the various objects. Basically, this style of programming gives a better presentation of objects with respect to the real world and provides better relationships among the objects. I explain the new concept of my neuro object oriented approach. This approach contains many new features, such as originty, a new concept of inheritance, a new concept of encapsulation, object relations with dimensions, originty relations with dimensions and time, categories of NOOPA such as high-order thinking objects and low-order thinking objects, a differentiation model for capturing the various requirements from the user, and a rotational model.", "title": "" }, { "docid": "20fbb79c467e70dccf28f438e3c99efb", "text": "Surface water is a source of drinking water in most rural communities in Nigeria. This study evaluated the total heterotrophic bacteria (THB) counts and some physico-chemical characteristics of rivers surrounding Wilberforce Island, Nigeria. Samples were collected in July 2007 and analyzed using standard procedures. The THB counts ranged from 6.389 – 6.434 Log cfu/ml. The physico-chemical parameters ranged from 6.525 – 7.105 (pH), 56.075 – 64.950 μS/cm (conductivity), 0.010 – 0.050 ‰ (salinity), 103.752 – 117.252 NTU (turbidity), 27.250 – 27.325 °C (temperature), 10.200 – 14.225 mg/l (dissolved oxygen), 28.180 – 32.550 mg/l (total dissolved solids), 0.330 – 0.813 mg/l (nitrate), and 0.378 – 0.530 mg/l (ammonium). Analysis of variance showed significant variation (P<0.05) in the physico-chemical properties between the two rivers, except for salinity and temperature. No significant difference (P>0.05) existed in the THB density of the two rivers, upstream (Agudama-Ekpetiama) and downstream (Akaibiri) of the River Nun, with regard to ammonium and nitrate. Significant positive correlations (P<0.01) existed between dissolved oxygen and ammonium, conductivity and salinity, conductivity and total dissolved solids, salinity and total dissolved solids, turbidity and nitrate, and pH and nitrate. A positive correlation (P<0.05) also existed between pH and turbidity. The high turbidity and bacterial density in the water samples indicate pollution and contamination, respectively. Hence, consumption of these surface waters without treatment could cause health-related effects. Keywords: drinking water sources, microorganisms, physico-chemistry, surface water, Wilberforce Island", "title": "" }, { "docid": "a310039e0fd3f732805a6088ad3d1777", "text": "Unsupervised learning of visual similarities is of paramount importance to computer vision, particularly due to lacking training data for fine-grained similarities. Deep learning of similarities is often based on relationships between pairs or triplets of samples. Many of these relations are unreliable and mutually contradicting, implying inconsistencies when trained without supervision information that relates different tuples or triplets to each other.
To overcome this problem, we use local estimates of reliable (dis-)similarities to initially group samples into compact surrogate classes and use local partial orders of samples to classes to link classes to each other. Similarity learning is then formulated as a partial ordering task with soft correspondences of all samples to classes. Adopting a strategy of self-supervision, a CNN is trained to optimally represent samples in a mutually consistent manner while updating the classes. The similarity learning and grouping procedure are integrated in a single model and optimized jointly. The proposed unsupervised approach shows competitive performance on detailed pose estimation and object classification.", "title": "" }, { "docid": "d73b277bf829a3295dfa86b33ad19c4a", "text": "Biodiesel is a renewable and environmentally friendly liquid fuel. However, the feedstock, predominantly crop oil, is a limited and expensive food resource which prevents large scale application of biodiesel. Development of non-food feedstocks are therefore, needed to fully utilize biodiesel’s potential. In this study, the larvae of a high fat containing insect, black soldier fly (Hermetia illucens) (BSFL), was evaluated for biodiesel production. Specifically, the BSFL was grown on organic wastes for 10 days and used for crude fat extraction by petroleum ether. The extracted crude fat was then converted into biodiesel by acid-catalyzed (1% H2SO4) esterification and alkaline-catalyzed (0.8% NaOH) transesterification, resulting in 35.5 g, 57.8 g and 91.4 g of biodiesel being produced from 1000 BSFL growing on 1 kg of cattle manure, pig manure and chicken manure, respectively. The major ester components of the resulting biodiesel were lauric acid methyl ester (35.5%), oleinic acid methyl ester (23.6%) and palmitic acid methyl ester (14.8%). Fuel properties of the BSFL fat-based biodiesel, such as density (885 kg/m), viscosity (5.8 mm/s), ester content (97.2%), flash point (123 C), and cetane number (53) were comparable to those of rapeseed-oil-based biodiesel. These results demonstrated that the organic waste-grown BSFL could be a feasible non-food feedstock for biodiesel production. 2010 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "dfc26119288ee136d00c6306377b93f6", "text": "Part-of-speech tagging is a basic step in Natural Language Processing that is often essential. Labeling the word forms of a text with fine-grained word-class information adds new value to it and can be a prerequisite for downstream processes like a dependency parser. Corpus linguists and lexicographers also benefit greatly from the improved search options that are available with tagged data. The Albanian language has some properties that pose difficulties for the creation of a part-of-speech tagset. In this paper, we discuss those difficulties and present a proposal for a part-of-speech tagset that can adequately represent the underlying linguistic phenomena.", "title": "" }, { "docid": "62999806021ff2533ddf7f06117f7d1a", "text": "In response to the new challenges in the design and operation of communication networks, and taking inspiration from how living beings deal with complexity and scalability, in this paper we introduce an innovative system concept called COgnition-BAsed NETworkS (COBANETS). 
The proposed approach develops around the systematic application of advanced machine learning techniques and, in particular, unsupervised deep learning and probabilistic generative models for system-wide learning, modeling, optimization, and data representation. Moreover, in COBANETS, we propose to combine this learning architecture with the emerging network virtualization paradigms, which make it possible to actuate automatic optimization and reconfiguration strategies at the system level, thus fully unleashing the potential of the learning approach. Compared with the past and current research efforts in this area, the technical approach outlined in this paper is deeply interdisciplinary and more comprehensive, calling for the synergic combination of expertise of computer scientists, communications and networking engineers, and cognitive scientists, with the ultimate aim of breaking new ground through a profound rethinking of how the modern understanding of cognition can be used in the management and optimization of telecommunication networks.", "title": "" }, { "docid": "bd3cc8370fd8669768f62d465f2c5531", "text": "Cognitive radio technology has been proposed to improve spectrum efficiency by having the cognitive radios act as secondary users to opportunistically access under-utilized frequency bands. Spectrum sensing, as a key enabling functionality in cognitive radio networks, needs to reliably detect signals from licensed primary radios to avoid harmful interference. However, due to the effects of channel fading/shadowing, individual cognitive radios may not be able to reliably detect the existence of a primary radio. In this paper, we propose an optimal linear cooperation framework for spectrum sensing in order to accurately detect the weak primary signal. Within this framework, spectrum sensing is based on the linear combination of local statistics from individual cognitive radios. Our objective is to minimize the interference to the primary radio while meeting the requirement of opportunistic spectrum utilization. We formulate the sensing problem as a nonlinear optimization problem. By exploiting the inherent structures in the problem formulation, we develop efficient algorithms to solve for the optimal solutions. To further reduce the computational complexity and obtain solutions for more general cases, we finally propose a heuristic approach, where we instead optimize a modified deflection coefficient that characterizes the probability distribution function of the global test statistics at the fusion center. Simulation results illustrate significant cooperative gain achieved by the proposed strategies. The insights obtained in this paper are useful for the design of optimal spectrum sensing in cognitive radio networks.", "title": "" }, { "docid": "1e30732092d2bcdeff624364c27e4c9c", "text": "Beliefs that individuals hold about whether emotions are malleable or fixed, also referred to as emotion malleability beliefs, may play a crucial role in individuals' emotional experiences and their engagement in changing their emotions. The current review integrates affective science and clinical science perspectives to provide a comprehensive review of how emotion malleability beliefs relate to emotionality, emotion regulation, and specific clinical disorders and treatment. 
Specifically, we discuss how holding more malleable views of emotion could be associated with more active emotion regulation efforts, greater motivation to engage in active regulatory efforts, more effort expended regulating emotions, and lower levels of pathological distress. In addition, we explain how extending emotion malleability beliefs into the clinical domain can complement and extend current conceptualizations of major depressive disorder, social anxiety disorder, and generalized anxiety disorder. This may prove important given the increasingly central role emotion dysregulation has been given in conceptualization and intervention for these psychiatric conditions. Additionally, discussion focuses on how emotion beliefs could be more explicitly addressed in existing cognitive therapies. Promising future directions for research are identified throughout the review.", "title": "" }, { "docid": "0382ad43b6d31a347d9826194a7261ce", "text": "In this paper, we present a representation for three-dimensional geometric animation sequences. Different from standard key-frame techniques, this approach is based on the determination of principal animation components and decouples the animation from the underlying geometry. The new representation supports progressive animation compression with spatial, as well as temporal, level-of-detail and high compression ratios. The distinction of animation and geometry allows for mapping animations onto other objects.", "title": "" }, { "docid": "faf0e45405b3c31135a20d7bff6e7a5a", "text": "Law enforcement is in a perpetual race with criminals in the application of digital technologies, and requires the development of tools to systematically search digital devices for pertinent evidence. Another part of this race, and perhaps more crucial, is the development of a methodology in digital forensics that encompasses the forensic analysis of all genres of digital crime scene investigations. This paper explores the development of the digital forensics process, compares and contrasts four particular forensic methodologies, and finally proposes an abstract model of the digital forensic procedure. This model attempts to address some of the shortcomings of previous methodologies, and provides the following advantages: a consistent and standardized framework for digital forensic tool development; a mechanism for applying the framework to future digital technologies; a generalized methodology that judicial members can use to relate technology to non-technical observers; and, the potential for incorporating nondigital electronic technologies within the abstractionmodel of the digital forensic procedure. This model attempts to address some of the shortcomings of previous methodologies, and provides the following advantages: a consistent and standardized framework for digital forensic tool development; a mechanism for applying the framework to future digital technologies; a generalized methodology that judicial members can use to relate technology to non-technical observers; and, the potential for incorporating nondigital electronic technologies within the abstraction Introduction The digital age can be characterized as the application of computer technology as a tool that enhances traditional methodologies. 
The incorporation of computer systems as a tool into private, commercial, educational, governmental, and other facets of modern life has improved", "title": "" }, { "docid": "3ca2d95885303f1ab395bd31d32df0c2", "text": "The curiosity to predict personality and behavior, and the need to do so, is not as new as the advent of social media, and more accurate personality prediction could be very useful for society. Many papers and studies have examined the usefulness of such data for purposes such as marketing, dating suggestions, organization development, personalized recommendations, and health care, to name a few. With the introduction and extreme popularity of online social networking sites like Facebook, Twitter, and LinkedIn, numerous studies have been conducted that use publicly available data, social networking applications, and social behavior towards friends and followers to predict personality. Structured mining of social media content can provide the ability to predict some personality traits. This survey aims to provide researchers with an overview of the various strategies used in studies on predicting user personality and behavior from online social networking site content. Their strengths and limitations are summarized as reported in the literature. Finally, a brief discussion of open issues for further research in the area of social-networking-based personality prediction precedes the conclusion.", "title": "" }, { "docid": "0ca477c017da24940bb5af79b2c8826a", "text": "Code comprehension is critical in software maintenance. Towards providing tools and approaches to support maintenance tasks, researchers have investigated various research lines related to how software code can be described in an abstract form. So far, studies on change pattern mining, code clone detection, or semantic patch inference have mainly adopted text-, token- and tree-based representations as the basis for computing similarity among code fragments. Although, in general, existing techniques form clusters of “similar” code, our experience in patch mining has revealed that clusters of patches formed by such techniques do not usually carry explainable semantics that can be associated with bug-fixing patterns. In this paper, we propose a novel, automated approach for mining semantically relevant fix patterns based on an iterative, three-fold clustering strategy. Our technique, FixMiner, leverages different tree representations for each round of clustering: the abstract syntax tree, the edit actions tree, and the code context tree. We have evaluated FixMiner on thousands of software patches collected from open source projects. Preliminary results show that we are able to mine accurate patterns, efficiently exploiting change information in AST diff trees. Eventually, FixMiner yields patterns which can be associated with the semantics of the bugs that the associated patches address. We further leverage the mined patterns to implement an automated program repair pipeline with which we are able to correctly fix 25 bugs from the Defects4J benchmark. Beyond this quantitative performance, we show that the mined fix patterns are sufficiently relevant to produce patches with a high probability of correctness: 80% of FixMiner’s generated plausible patches are correct, while the closest related works, namely HDRepair and SimFix, achieve 26% and 70% correctness, respectively.", "title": "" }, { "docid": "79eafa032a3f0cb367a008e5a7345dd5", "text": "Data mining techniques are widely used in the educational field to find new hidden patterns in students’ data. The hidden patterns that are discovered can be used to understand problems arising in the educational field. This paper surveys the three elements needed to predict students’ academic performance: parameters, methods, and tools. This paper also proposes a framework for predicting the performance of first-year bachelor students in a computer science course. A Naïve Bayes classifier is used to extract patterns with the Weka data mining tool. The framework can be used as a basis for system implementation and the prediction of students’ academic performance in higher learning institutions.", "title": "" }, { "docid": "3e70a22831b064bff3ff784a932d068b", "text": "An ultrawideband (UWB) antenna that rejects extremely sharply the two narrow and closely-spaced U.S. WLAN 802.11a bands is presented. The antenna is designed on a single surface (it is uniplanar) and uses only linear sections for easy scaling and fine-tuning. Distributed-element and lumped-element equivalent circuit models of this dual band-reject UWB antenna are presented and used to thoroughly explain the physical principles of operation of the dual band-rejection mechanism. The circuits are evaluated by comparison with the response of the presented UWB antenna, which has very high selectivity and achieves dual-frequency rejection of the WLAN 5.25 GHz and 5.775 GHz bands while receiving signal from the intermediate band between 5.35 and 5.725 GHz. The rejection is achieved using double open-circuited stubs, which is uncommon; the stubs are chosen based on their narrowband performance. The antenna was fabricated on a single side of a thin, flexible, LCP substrate. The measured rejection is the best reported for a dual band-reject antenna with such closely spaced rejected bands. The measured group delay of the antenna validates its suitability for UWB links. Such antennas improve both UWB and WLAN communication links at practically zero cost.", "title": "" }, { "docid": "77ce917536f59d5489d0d6f7000c7023", "text": "In this supplementary document, we present additional results to complement the paper. First, we provide the detailed configurations and parameters of the generator and discriminator in the proposed Generative Adversarial Network. Second, we present qualitative comparisons with state-of-the-art CNN-based optical flow methods. The complete results and source code are publicly available on http://vllab.ucmerced.edu/wlai24/semiFlowGAN.", "title": "" }, { "docid": "cc4458a843a2a6ffa86b4efd1956ffca", "text": "There is a growing interest in the use of chronic deep brain stimulation (DBS) for the treatment of medically refractory movement disorders and other neurological and psychiatric conditions. Fundamental questions remain about the physiologic effects and safety of DBS. Previous basic research studies have focused on the direct polarization of neuronal membranes by electrical stimulation. The goal of this paper is to provide information on the thermal effects of DBS using finite element models to investigate the magnitude and spatial distribution of DBS-induced temperature changes.
The parameters investigated include: stimulation waveform, lead selection, brain tissue electrical and thermal conductivity, blood perfusion, and metabolic heat generation during stimulation. Our results show that clinical deep brain stimulation protocols will increase the temperature of surrounding tissue by up to 0.8 °C, depending on stimulation and tissue parameters.", "title": "" }, { "docid": "5d9112213e6828d5668ac4a33d4582f9", "text": "This paper describes four patients whose chief symptoms were steatorrhoea and loss of weight. Despite the absence of a history of abdominal pain, investigations showed that these patients had chronic pancreatitis, which responded to medical treatment. The pathological findings in two of these cases and in six which came to necropsy are reported.", "title": "" } ]
scidocsrr
7b59b1fa74b8dd6c7bc30b7716c2763f
Image Crowd Counting Using Convolutional Neural Network and Markov Random Field
[ { "docid": "1db45c5e93fc29a4d0969d38dad858bb", "text": "We propose to leverage multiple sources of information to compute an estimate of the number of individuals present in an extremely dense crowd visible in a single image. Due to problems including perspective, occlusion, clutter, and few pixels per person, counting by human detection in such images is almost impossible. Instead, our approach relies on multiple sources such as low confidence head detections, repetition of texture elements (using SIFT), and frequency-domain analysis to estimate counts, along with confidence associated with observing individuals, in an image region. Secondly, we employ a global consistency constraint on counts using Markov Random Field. This caters for disparity in counts in local neighborhoods and across scales. We tested our approach on a new dataset of fifty crowd images containing 64K annotated humans, with the head counts ranging from 94 to 4543. This is in stark contrast to datasets used for existing methods which contain not more than tens of individuals. We experimentally demonstrate the efficacy and reliability of the proposed approach by quantifying the counting performance.", "title": "" }, { "docid": "66e91cdcb987e6f9ee48360414c993d6", "text": "We propose a new supervised learning framework for visual object counting tasks, such as estimating the number of cells in a microscopic image or the number of humans in surveillance video frames. We focus on the practically-attractive case when the training images are annotated with dots (one dot per object). Our goal is to accurately estimate the count. However, we evade the hard task of learning to detect and localize individual object instances. Instead, we cast the problem as that of estimating an image density whose integral over any image region gives the count of objects within that region. Learning to infer such density can be formulated as a minimization of a regularized risk quadratic cost function. We introduce a new loss function, which is well-suited for such learning, and at the same time can be computed efficiently via a maximum subarray algorithm. The learning can then be posed as a convex quadratic program solvable with cutting-plane optimization. The proposed framework is very flexible as it can accept any domain-specific visual features. Once trained, our system provides accurate object counts and requires a very small time overhead over the feature extraction step, making it a good candidate for applications involving real-time processing or dealing with huge amount of visual data.", "title": "" } ]
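As a rough illustration of the two ideas in these passages (counting by integrating a predicted density map over image regions, and pushing neighboring patch counts toward agreement), here is a minimal sketch. The neighborhood averaging stands in for a full Markov Random Field, and all names, sizes, and parameters are assumptions rather than details of the cited methods.

```python
import numpy as np

def count_from_density(density, regions):
    """Integrate a per-pixel density map over regions; each sum is that region's count."""
    return [float(density[r0:r1, c0:c1].sum()) for (r0, r1, c0, c1) in regions]

def smooth_patch_counts(counts, iters=10, lam=0.5):
    """Pull each patch count toward the mean of its neighbors.

    A crude stand-in for the MRF consistency step; the update rule is an assumption.
    """
    counts = np.asarray(counts, dtype=float).copy()
    for _ in range(iters):
        padded = np.pad(counts, 1, mode="edge")
        neighbor_mean = 0.5 * (padded[:-2] + padded[2:])
        counts = (1 - lam) * counts + lam * neighbor_mean
    return counts

# Synthetic 200x200 density map split into four 200x50 patches.
rng = np.random.default_rng(1)
density = rng.random((200, 200)) * 0.01
patches = [(0, 200, i * 50, (i + 1) * 50) for i in range(4)]
raw = count_from_density(density, patches)
print(raw, smooth_patch_counts(raw))
```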
[ { "docid": "ae4874afc1b1358bb436018c5d5d8c09", "text": "In the past years, network theory has successfully characterized the interaction among the constituents of a variety of complex systems, ranging from biological to technological and social systems. However, up until recently, attention was almost exclusively given to networks in which all components were treated on an equivalent footing, while neglecting all the extra information about the temporal- or context-related properties of the interactions under study. Only in recent years, taking advantage of the enhanced resolution in real data sets, have network scientists directed their interest to the multiplex character of real-world systems and explicitly considered the time-varying and multilayer nature of networks. We offer here a comprehensive review of both the structural and dynamical organization of graphs made of diverse relationships (layers) between their constituents, and cover several relevant issues, from a full redefinition of the basic structural measures to understanding how the multilayer nature of the network affects processes and dynamics.", "title": "" }, { "docid": "f5a4d05c8b8c42cdca540794000afad5", "text": "Design thinking (DT) is regarded as a system of three overlapping spaces—viability, desirability, and feasibility—where innovation increases when all three perspectives are addressed. Understanding how innovation within teams can be supported by DT methods and tools captivates the interest of business communities. This paper aims to examine how DT methods and tools foster innovation in teams. A case study approach, based on two workshops, examined three DT methods with a software tool. The findings support the use of DT methods and tools as a way of incubating ideas and creating innovative solutions within teams when team collaboration and software limitations are balanced. The paper proposes guidelines for utilizing DT methods and tools in innovation.", "title": "" }, { "docid": "51da24a6bdd2b42c68c4465624d2c344", "text": "Hashing based Approximate Nearest Neighbor (ANN) search has attracted much attention due to its fast query time and drastically reduced storage. However, most of the hashing methods either use random projections or extract principal directions from the data to derive hash functions. The resulting embedding suffers from poor discrimination when compact codes are used. In this paper, we propose a novel data-dependent projection learning method such that each hash function is designed to correct the errors made by the previous one sequentially. The proposed method easily adapts to both unsupervised and semi-supervised scenarios and shows significant performance gains over the state-of-the-art methods on two large datasets containing up to 1 million points.", "title": "" }, { "docid": "94f1de78a229dc542a67ea564a0b259f", "text": "Voice-enabled personal assistants like Microsoft Cortana are becoming better every day. As a result, more users are relying on such software to accomplish more tasks.
While these applications are significantly improving due to great advancements in the underlying technologies, there are still shortcomings in their performance resulting in a class of user queries that such assistants cannot yet handle with satisfactory results. We analyze the data from millions of user queries, and build a machine learning system capable of classifying user queries into two classes; a class of queries that are addressable by Cortana with high user satisfaction, and a class of queries that are not. We then use unsupervised learning to cluster similar queries and assign them to human assistants who can complement Cortana functionality.", "title": "" }, { "docid": "4191648ada97ecc5a906468369c12bf4", "text": "Dermoscopy is a widely used technique whose role in the clinical (and preoperative) diagnosis of melanocytic and non-melanocytic skin lesions has been well established in recent years. The aim of this paper is to clarify the correlations between the \"local\" dermoscopic findings in melanoma and the underlying histology, in order to help clinicians in routine practice.", "title": "" }, { "docid": "a69c0322df088f7bd83c94f3363cf851", "text": "This paper presents new algorithms to trace objects represented by densities within a volume grid, e.g. clouds, fog, flames, dust, particle systems. We develop the light scattering equations, discuss previous methods of solution, and present a new approximate solution to the full three-dimensional radiative scattering problem suitable for use in computer graphics. Additionally we review dynamical models for clouds used to make an animated movie.", "title": "" }, { "docid": "b31f5af2510461479d653be1ddadaa22", "text": "Integrating smart temperature sensors into digital platforms facilitates information to be processed and transmitted, and open up new applications. Furthermore, temperature sensors are crucial components in computing platforms to manage power-efficiency trade-offs reliably under a thermal budget. This paper presents a holistic perspective about smart temperature sensor design from system- to device-level including manufacturing concerns. Through smart sensor design evolutions, we identify some scaling paths and circuit techniques to surmount analog/mixed-signal design challenges in 32-nm and beyond. We close with opportunities to design smarter temperature sensors.", "title": "" }, { "docid": "2f1862591d5f9ee80d7cdcb930f86d8d", "text": "In this research convolutional neural networks are used to recognize whether a car on a given image is damaged or not. Using transfer learning to take advantage of available models that are trained on a more general object recognition task, very satisfactory performances have been achieved, which indicate the great opportunities of this approach. In the end, also a promising attempt in classifying car damages into a few different classes is presented. Along the way, the main focus was on the influence of certain hyper-parameters and on seeking theoretically founded ways to adapt them, all with the objective of progressing to satisfactory results as fast as possible. This research open doors for future collaborations on image recognition projects in general and for the car insurance field in particular.", "title": "" }, { "docid": "717d1c31ac6766fcebb4ee04ca8aa40f", "text": "We present an incremental maintenance algorithm for leapfrog triejoin. 
The algorithm maintains rules in time proportional (modulo log factors) to the edit distance between leapfrog triejoin traces.", "title": "" }, { "docid": "9a3a73f35b27d751f237365cc34c8b28", "text": "The development of brain metastases in patients with advanced stage melanoma is common, but the molecular mechanisms responsible for their development are poorly understood. Melanoma brain metastases cause significant morbidity and mortality and confer a poor prognosis; traditional therapies including whole brain radiation, stereotactic radiotherapy, or chemotherapy yield only modest increases in overall survival (OS) for these patients. While recently approved therapies have significantly improved OS in melanoma patients, only a small number of studies have investigated their efficacy in patients with brain metastases. Preliminary data suggest that some responses have been observed in intracranial lesions, which has sparked new clinical trials designed to evaluate the efficacy in melanoma patients with brain metastases. Simultaneously, recent advances in our understanding of the mechanisms of melanoma cell dissemination to the brain have revealed novel and potentially therapeutic targets. In this review, we provide an overview of newly discovered mechanisms of melanoma spread to the brain, discuss preclinical models that are being used to further our understanding of this deadly disease and provide an update of the current clinical trials for melanoma patients with brain metastases.", "title": "" }, { "docid": "c2055f8366e983b45d8607c877126797", "text": "This paper proposes and investigates an offline finite-element-method (FEM)-assisted position and speed observer for brushless dc permanent-magnet (PM) (BLDC-PM) motor drive sensorless control based on the line-to-line PM flux linkage estimation. The zero crossing of the line-to-line PM flux linkage occurs right in the middle of two commutation points (CPs) and is used as a basis for the position and speed observer. The position between CPs is obtained by comparing the estimated line-to-line PM flux with the FEM-calculated line-to-line PM flux. Even if the proposed observer relies on the fundamental model of the machine, a safe starting strategy under heavy load torque, called I-f control, is used, with seamless transition to the proposed sensorless control. The I-f starting method allows low-speed sensorless control, without knowing the initial position and without machine parameter identification. Digital simulations and experimental results are shown, demonstrating the reliability of the FEM-assisted position and speed observer for BLDC-PM motor sensorless control operation.", "title": "" }, { "docid": "123b93071e0ae555734c0ab27d29b6bf", "text": "Computer-Assisted Pronunciation Training System (CAPT) has become an important learning aid in second language (L2) learning. Our approach to CAPT is based on the use of phonological rules to capture language transfer effects that may cause mispronunciations. This paper presents an approach for automatic derivation of phonological rules from L2 speech. The rules are used to generate an extended recognition network (ERN) that captures the canonical pronunciations of words, as well as the possible mispronunciations. The ERN is used with automatic speech recognition for mispronunciation detection. Experimentation with an L2 speech corpus that contains recordings from 100 speakers aims to compare the automatically derived rules with manually authored rules. 
Comparable performance is achieved in mispronunciation detection (i.e. telling which phone is wrong). The automatically derived rules also offer improved performance in diagnostic accuracy (i.e. identify how the phone is wrong).", "title": "" }, { "docid": "3293e4e0d7dd2e29505db0af6fbb13d1", "text": "A new heuristic approach for minimizing possibly nonlinear and non-differentiable continuous space functions is presented. By means of an extensive testbed it is demonstrated that the new method converges faster and with more certainty than many other acclaimed global optimization methods. The new method requires few control variables, is robust, easy to use, and lends itself very well to parallel computation.", "title": "" }, { "docid": "223b020f16692871644798958b80a25f", "text": "This paper is an extended description of SemEval-2014 Task 1, the task on the evaluation of Compositional Distributional Semantics Models on full sentences. Systems participating in the task were presented with pairs of sentences and were evaluated on their ability to predict human judgments on (i) semantic relatedness and (ii) entailment. Training and testing data were subsets of the SICK (Sentences Involving Compositional Knowledge) data set. SICK was developed with the aim of providing a proper benchmark to evaluate compositional semantic systems, though task participation was open to systems based on any approach. Taking advantage of the SemEval experience, in this paper we analyze the SICK data set, in order to evaluate the extent to which it meets its design goal and to shed light on the linguistic phenomena that are still challenging for state-of-the-art computational semantic systems. Qualitative and quantitative error analyses show that many systems are quite sensitive to changes in the proportion of sentence pair types, and degrade in the presence of additional lexico-syntactic complexities which do not affect human judgements. More compositional systems seem to perform better when the task proportions are changed, but the effect needs further confirmation.", "title": "" }, { "docid": "be5e1336187b80bc418b2eb83601fbd4", "text": "Pedestrian detection has been an important problem for decades, given its relevance to a number of applications in robotics, including driver assistance systems, road scene understanding and surveillance systems. The two main practical requirements for fielding such systems are very high accuracy and real-time speed: we need pedestrian detectors that are accurate enough to be relied on and are fast enough to run on systems with limited compute power. This paper addresses both of these requirements by combining very accurate deep-learning-based classifiers within very efficient cascade classifier frameworks. Deep neural networks (DNN) have been shown to excel at classification tasks [5], and their ability to operate on raw pixel input without the need to design special features is very appealing. However, deep nets are notoriously slow at inference time. In this paper, we propose an approach that cascades deep nets and fast features, that is both very fast and accurate. We apply it to the challenging task of pedestrian detection. Our algorithm runs in real-time at 15 frames per second (FPS). The resulting approach achieves a 26.2% average miss rate on the Caltech Pedestrian detection benchmark, which is the first work we are aware of that achieves high accuracy while running in real-time. 
To achieve this, we combine a fast cascade [2] with a cascade of classifiers, which we propose to be DNNs. Our approach is unique, as it is the only one to produce a pedestrian detector at real-time speeds (15 FPS) that is also very accurate. Figure 1 visualizes existing methods as plotted on the accuracy computational time axis, measured on the challenging Caltech pedestrian detection benchmark [4]. As can be seen in this figure, our approach is the only one to reside in the high accuracy, high speed region of space, which makes it particularly appealing for practical applications. Fast Deep Network Cascade. Our main architecture is a cascade structure in which we take advantage of the fast features for elimination, VeryFast [2] as an initial stage and combine it with small and large deep networks [1, 5] for high accuracy. The VeryFast algorithm is a cascade itself, but of boosting classifiers. It reduces recall with each stage, producing a high average miss rate in the end. Since the goal is to eliminate many non-pedestrian patches and at the same time keep the recall high, we used only 10% of the stages in that cascade. Namely, we use a cascade of only 200 stages, instead of the 2000 in the original work. The first stage of our deep cascade processes all image patches that have high confidence values and pass through the VeryFast classifier. We here utilize the idea of a tiny convolutional network proposed by our prior work [1]. The tiny deep network has three layers only and features a 5x5 convolution, a 1x1 convolution and a very shallow fully-connected layer of 512 units. It reduces the massive computational time that is needed to evaluate a full DNN at all candidate locations filtered by the previous stage. The speedup produced by the tiny network is a crucial component in achieving real-time performance in our fast cascade method. The baseline deep neural network is based on the original deep network of Krizhevsky et al [5]. As mentioned, this network in general is extremely slow to be applied alone. To achieve real-time speeds, we first apply it to only the remaining filtered patches from the previous two stages. Another key difference is that we reduced the depths of some of the convolutional layers and the sizes of the receptive fields, which is specifically done to gain a speed advantage. Runtime. Our deep cascade works at 67ms on a standard NVIDIA K20 Tesla GPU per 640x480 image, which is a runtime of 15 FPS. The time breakdown is as follows. The soft-cascade takes about 7 milliseconds (ms). About 1400 patches are passed through per image from the fast cascade. The tiny DNN runs at 0.67 ms per batch of 128, so it can process the patches in 7.3 ms. The final stage of the cascade (which is the baseline classifier) takes about 53ms. This is an overall runtime of 67ms. Experimental evaluation. We evaluate the performance of the Fast Deep Network Cascade using the training and test protocols established in the Caltech pedestrian benchmark [4]. We tested several scenarios by training on the Caltech data only, denoted as DeepCascade, on an inde... Figure 1: Performance of pedestrian detection methods on the accuracy vs speed axis. Our DeepCascade method achieves both smaller miss rates and real-time speeds. Methods for which the runtime is more than 5 seconds per image, or is unknown, are plotted on the left hand side. 
The SpatialPooling+/Katamari methods use additional motion information.", "title": "" }, { "docid": "71b5708fb9d078b370689cac22a66013", "text": "This paper presents a model, synthesized from the literature, of factors that explain how business analytics contributes to business value. It also reports results from a preliminary test of that model. The model consists of two parts: a process and a variance model. The process model depicts the analyze-insight-decision-action process through which use of an organization’s business-analytic capabilities create business value. The variance model proposes that the five factors in Davenport et al.’s (2010) DELTA model of BA success factors, six from Watson and Wixom (2007), and three from Seddon et al.’s (2010) model of organizational benefits from enterprise systems, assist a firm to gain business value from business analytics. A preliminary test of the model was conducted using data from 100 customer-success stories from vendors such as IBM, SAP, and Teradata. Our conclusion is that the model is likely to be a useful basis for future research.", "title": "" }, { "docid": "54fa080265b45a8a542bb47dce75ce11", "text": "The aims of this research were to investigate the applicability of the Systematic Literature Review (SLR) process within the constraints of a 13-week master’s level project and to aggregate evidence about the effectiveness of pair programming for teaching introductory programming. It was found that, with certain modifications to the process, it was possible to undertake an SLR within a limited time period and to produce valid results. Based on pre-defined inclusion and exclusion criteria, the student found 28 publications reporting empirical studies of pair programming, of which nine publications were used for data extraction and analysis. Results of the review indicates that whilst pair programming has little effect on the marks obtained for examinations and assignments, it can significantly improve the pass and retention rates and the students’ confidence and enjoyment of programming. Following the student study, experienced reviewers re-applied the inclusion and exclusion criteria to the 28 publications and carried out data extraction and synthesis using the resulting papers. A comparison of the student’s results and those of the experienced reviewers is presented.", "title": "" }, { "docid": "6682c9fd29c8e406844d24258fc2dd80", "text": "Fast service placement, finding a set of nodes with enough free capacity of computation, storage, and network connectivity, is a routine task in daily cloud administration. In this work, we formulate this as a subgraph matching problem. Different from the traditional setting, including approximate and probabilistic graphs, subgraph matching on data-center networks has two unique properties. (1) Node/edge labels representing vacant CPU cycles and network bandwidth change rapidly, while the network topology varies little. (2) There is a partial order on node/edge labels. Basically, one needs to place service in nodes with enough free capacity. Existing graph indexing techniques have not considered very frequent label updates, and none of them supports partial order on numeric labels. Therefore, we resort to a new graph index framework, Gradin, to address both challenges. Gradin encodes subgraphs into multi-dimensional vectors and organizes them with indices such that it can efficiently search the matches of a query's subgraphs and combine them to form a full match. 
In particular, we analyze how the index parameters affect update and search performance with theoretical results. Moreover, a revised pruning algorithm is introduced to reduce unnecessary search during the combination of partial matches. Using both real and synthetic datasets, we demonstrate that Gradin outperforms the baseline approaches up to 10 times.", "title": "" }, { "docid": "f78a01a4337e2f2e7c3a6341d273f3e8", "text": "We consider the problem of assigning stockkeeping units to distribution centers (DCs) belonging to different DC types of a retail network, e.g., central, regional, and local DCs. The problem is motivated by the real situation of a retail company and solved by an MIP solution approach. The MIP model reflects the interdependencies between inbound transportation, outbound transportation and instore logistics as well as capital tied up in inventories and differences in picking costs between the warehouses. A novel solution approach is developed and applied to a real-life case of a leading European grocery retail chain. The application of the new approach results in cost savings of 6% of total operational costs compared to the present assignment. These savings amount to several million euros per year. In-depth analyses of the results and sensitivity analyses provide insights into the solution structure and the major related issues.", "title": "" }, { "docid": "a92f788b44411691a8ad5372b2fa4b55", "text": "We study the problem of minimizing the average of a large number of smooth convex functions penalized with a strongly convex regularizer. We propose and analyze a novel primal-dual method (Quartz) which at every iteration samples and updates a random subset of the dual variables, chosen according to an arbitrary distribution. In contrast to typical analysis, we directly bound the decrease of the primal-dual error (in expectation), without the need to first analyze the dual error. Depending on the choice of the sampling, we obtain efficient serial and mini-batch variants of the method. In the serial case, our bounds match the best known bounds for SDCA (both with uniform and importance sampling). With standard mini-batching, our bounds predict initial data-independent speedup as well as additional data-driven speedup which depends on spectral and sparsity properties of the data.", "title": "" } ]
scidocsrr
45496e802019324e75a7495fe0651307
The Berlin brain-computer interface: EEG-based communication without subject training
[ { "docid": "5d247482bb06e837bf04c04582f4bfa2", "text": "This paper provides an introduction to support vector machines, kernel Fisher discriminant analysis, and kernel principal component analysis, as examples for successful kernel-based learning methods. We first give a short background about Vapnik-Chervonenkis theory and kernel feature spaces and then proceed to kernel based learning in supervised and unsupervised scenarios including practical and algorithmic considerations. We illustrate the usefulness of kernel algorithms by discussing applications such as optical character recognition and DNA analysis.", "title": "" } ]
[ { "docid": "06abf2a7c6d0c25cfe54422268300e58", "text": "The purpose of the present study is to provide useful data that could be applied to various types of periodontal plastic surgery by detailing the topography of the greater palatine artery (GPA), looking in particular at its depth from the palatal masticatory mucosa (PMM) and conducting a morphometric analysis of the palatal vault. Forty-three hemisectioned hard palates from embalmed Korean adult cadavers were used in this study. The morphometry of the palatal vault was analyzed, and then the specimens were decalcified and sectioned. Six parameters were measured using an image-analysis system after performing a standard calibration. In one specimen, the PMM was separated from the hard palate and subjected to a partial Sihler's staining technique, allowing the branching pattern of the GPA to be observed in a new method. The distances between the GPA and the gingival margin, and between the GPA and the cementoenamel junction were greatest at the maxillary second premolar. The shortest vertical distance between the GPA and the PMM decreased gradually as it proceeded anteriorly. The GPA was located deeper in the high-vault group than in the low-vault group. The premolar region should be recommended as the optimal donor site for tissue grafting, and in particular the second premolar region. The maximum size and thickness of tissue that can be harvested from the region were 9.3 mm and 4.0 mm, respectively.", "title": "" }, { "docid": "dfdf2581010777e51ff3e29c5b9aee7f", "text": "This paper proposes a parallel architecture with resistive crosspoint array. The design of its two essential operations, read and write, is inspired by the biophysical behavior of a neural system, such as integrate-and-fire and local synapse weight update. The proposed hardware consists of an array with resistive random access memory (RRAM) and CMOS peripheral circuits, which perform matrix-vector multiplication and dictionary update in a fully parallel fashion, at the speed that is independent of the matrix dimension. The read and write circuits are implemented in 65 nm CMOS technology and verified together with an array of RRAM device model built from experimental data. The overall system exploits array-level parallelism and is demonstrated for accelerated dictionary learning tasks. As compared to software implementation running on a 8-core CPU, the proposed hardware achieves more than 3000 × speedup, enabling high-speed feature extraction on a single chip.", "title": "" }, { "docid": "d9789c6dc7febc25732617f0d57a43a1", "text": "When a binary or ordinal regression model incorrectly assumes that error variances are the same for all cases, the standard errors are wrong and (unlike OLS regression) the parameter estimates are biased. Heterogeneous choice (also known as location-scale or heteroskedastic ordered) models explicitly specify the determinants of heteroskedasticity in an attempt to correct for it. Such models are also useful when the variance itself is of substantive interest. This paper illustrates how the author’s Stata program oglm (Ordinal Generalized Linear Models) can be used to estimate heterogeneous choice and related models. It shows that two other models that have appeared in the literature (Allison’s model for group comparisons and Hauser and Andrew’s logistic response model with proportionality constraints) are special cases of a heterogeneous choice model and alternative parameterizations of it. 
The paper further argues that heterogeneous choice models may sometimes be an attractive alternative to other ordinal regression models, such as the generalized ordered logit model estimated by gologit2. Finally, the paper offers guidelines on how to interpret, test and modify heterogeneous choice models.", "title": "" }, { "docid": "6c106d560d8894d941851386d96afe2b", "text": "Cooperative vehicular networks require the exchange of positioning and basic status information between neighboring nodes to support higher layer protocols and applications, including active safety applications. The information exchange is based on the periodic transmission/reception of 1-hop broadcast messages on the so called control channel. The dynamic adaptation of the transmission parameters of such messages will be key for the reliable and efficient operation of the system. On one hand, congestion control protocols need to be applied to control the channel load, typically through the adaptation of the transmission parameters based on certain channel load metrics. On the other hand, awareness control protocols are also required to adequately support cooperative vehicular applications. Such protocols typically adapt the transmission parameters of periodic broadcast messages to ensure each vehicle's capacity to detect, and possibly communicate, with the relevant vehicles and infrastructure nodes present in its local neighborhood. To date, congestion and awareness control protocols have been normally designed and evaluated separately, although both will be required for the reliable and efficient operation of the system. To this aim, this paper proposes and evaluates INTERN, a new control protocol that integrates two congestion and awareness control processes. The simulation results obtained demonstrate that INTERN is able to satisfy the application's requirements of all vehicles, while effectively controlling the channel load.", "title": "" }, { "docid": "645395d46f653358d942742711d50c0b", "text": "Feature descriptors play a crucial role in a wide range of geometry analysis and processing applications, including shape correspondence, retrieval, and segmentation. In this paper, we propose ShapeNet, a generalization of the popular convolutional neural networks (CNN) paradigm to non-Euclidean manifolds. Our construction is based on a local geodesic system of polar coordinates to extract “patches”, which are then passed through a cascade of filters and linear and non-linear operators. The coefficients of the filters and linear combination weights are optimization variables that are learned to minimize a task-specific cost function. We use ShapeNet to learn invariant shape feature descriptors that significantly outperform recent state-of-the-art methods, and show that previous approaches such as heat and wave kernel signatures, optimal spectral descriptors, and intrinsic shape contexts can be obtained as particular configurations of ShapeNet. CR Categories: I.2.6 [Artificial Intelligence]: Learning— Connectionism and neural nets", "title": "" }, { "docid": "24ac33300d3ea99441068c20761e8305", "text": "Purpose – The purpose of this research is to examine the critical success factors of mobile web site adoption. Design/methodology/approach – Based on the valid responses collected from a questionnaire survey, the structural equation modelling technique was employed to examine the research model. 
Findings – The results indicate that system quality is the main factor affecting perceived ease of use, whereas information quality is the main factor affecting perceived usefulness. Service quality has significant effects on trust and perceived ease of use. Perceived usefulness, perceived ease of use and trust determine user satisfaction. Practical implications – Mobile service providers need to improve the system quality, information quality and service quality of mobile web sites to enhance user satisfaction. Originality/value – Previous research has mainly focused on e-commerce web site success and seldom examined the factors affecting mobile web site success. This research fills the gap. The research draws on information systems success theory, the technology acceptance model and trust theory as the theoretical bases.", "title": "" }, { "docid": "b92d89fec6f0e1cfd869290b015a7be5", "text": "Vertex-centric graph processing is employed by many popular algorithms (e.g., PageRank) due to its simplicity and efficient use of asynchronous parallelism. The high compute power provided by SIMT architecture presents an opportunity for accelerating these algorithms using GPUs. Prior works of graph processing on a GPU employ Compressed Sparse Row (CSR) form for its space-efficiency; however, CSR suffers from irregular memory accesses and GPU underutilization that limit its performance. In this paper, we present CuSha, a CUDA-based graph processing framework that overcomes the above obstacle via use of two novel graph representations: G-Shards and Concatenated Windows (CW). G-Shards uses a concept recently introduced for non-GPU systems that organizes a graph into autonomous sets of ordered edges called shards. CuSha's mapping of GPU hardware resources on to shards allows fully coalesced memory accesses. CW is a novel representation that enhances the use of shards to achieve higher GPU utilization for processing sparse graphs. Finally, CuSha fully utilizes the GPU power by processing multiple shards in parallel on GPU's streaming multiprocessors. For ease of programming, CuSha allows the user to define the vertex-centric computation and plug it into its framework for parallel processing of large graphs. Our experiments show that CuSha provides significant speedups over the state-of-the-art CSR-based virtual warp-centric method for processing graphs on GPUs.", "title": "" }, { "docid": "8fe823702191b4a56defaceee7d19db6", "text": "We propose a method of stacking multiple long short-term memory (LSTM) layers for modeling sentences. In contrast to the conventional stacked LSTMs where only hidden states are fed as input to the next layer, our architecture accepts both hidden and memory cell states of the preceding layer and fuses information from the left and the lower context using the soft gating mechanism of LSTMs. Thus the proposed stacked LSTM architecture modulates the amount of information to be delivered not only in horizontal recurrence but also in vertical connections, from which useful features extracted from lower layers are effectively conveyed to upper layers. 
We dub this architecture Cell-aware Stacked LSTM (CAS-LSTM) and show from experiments that our models achieve state-of-the-art results on benchmark datasets for natural language inference, paraphrase detection, and sentiment classification.", "title": "" }, { "docid": "77ec15fd35f9bceee4537afc63c82079", "text": "Grapheme-to-phoneme conversion plays an important role in text-to-speech applications and other fields of computational linguistics. Although Korean uses a phonemic writing system, it must have a grapheme-to-phoneme conversion for speech synthesis because Korean writing system does not always reflect its actual pronunciations. This paper describes a grapheme-to-phoneme conversion method based on sound patterns to convert Korean text strings into phonemic representations. In the experiment with Korean news broadcasting evaluation set of 20 sentences, the accuracy of our system achieve as high as 98.70% on conversion. The performance of our rule-based system shows that the rule-based sound patterns are effective on Korean grapheme-to-phoneme conversion.", "title": "" }, { "docid": "617db9b325e211b45571db6fb8dc6c87", "text": "This paper gives a review of acoustic and ultrasonic optical fiber sensors (OFSs). The review covers optical fiber sensing methods for detecting dynamic strain signals, including general sound and acoustic signals, high-frequency signals, i.e., ultrasonic/ultrasound, and other signals such as acoustic emissions, and impact induced dynamic strain. Several optical fiber sensing methods are included, in an attempted to summarize the majority of optical fiber sensing methods used to date. The OFS include single fiber sensors and optical fiber devices, fiber-optic interferometers, and fiber Bragg gratings (FBGs). The single fiber and fiber device sensors include optical fiber couplers, microbend sensors, refraction-based sensors, and other extrinsic intensity sensors. The optical fiber interferometers include Michelson, Mach-Zehnder, Fabry-Perot, Sagnac interferometers, as well as polarization and model interference. The specific applications addressed in this review include optical fiber hydrophones, biomedical sensors, and sensors for nondestructive evaluation and structural health monitoring. Future directions are outlined and proposed for acousto-ultrasonic OFS.", "title": "" }, { "docid": "368e72277a5937cb8ee94cea3fa11758", "text": "Monoclinic Gd2O3:Eu(3+) nanoparticles (NPs) possess favorable magnetic and optical properties for biomedical application. However, how to obtain small enough NPs still remains a challenge. Here we combined the standard solid-state reaction with the laser ablation in liquids (LAL) technique to fabricate sub-10 nm monoclinic Gd2O3:Eu(3+) NPs and explained their formation mechanism. The obtained Gd2O3:Eu(3+) NPs exhibit bright red fluorescence emission and can be successfully used as fluorescence probe for cells imaging. In vitro and in vivo magnetic resonance imaging (MRI) studies show that the product can also serve as MRI good contrast agent. Then, we systematically investigated the nanotoxicity including cell viability, apoptosis in vitro, as well as the immunotoxicity and pharmacokinetics assays in vivo. 
This investigation provides a platform for the fabrication of ultrafine monoclinic Gd2O3:Eu(3+) NPs and evaluation of their efficiency and safety in preclinical application.", "title": "" }, { "docid": "3dc3e680c68aefb6968fbe120d203cdf", "text": "A procedure for reflection and discourse on the behavior of bots in the context of law, deception, and societal norms.", "title": "" }, { "docid": "49e5f9e36efb6b295868a307c1486c60", "text": "This paper reviews ultrasound segmentation methods, in a broad sense, focusing on techniques developed for medical B-mode ultrasound images. First, we present a review of articles by clinical application to highlight the approaches that have been investigated and degree of validation that has been done in different clinical domains. Then, we present a classification of methodology in terms of use of prior information. We conclude by selecting ten papers which have presented original ideas that have demonstrated particular clinical usefulness or potential specific to the ultrasound segmentation problem", "title": "" }, { "docid": "e7d5dd2926238db52cf406f20947f90e", "text": "The development of the capital markets is changing the relevance and empirical validity of the efficient market hypothesis. The dynamism of capital markets determines the need for efficiency research. The authors analyse the development and the current status of the efficient market hypothesis with an emphasis on the Baltic stock market. Investors often fail to earn an excess profit, but yet stock market anomalies are observed and market prices often deviate from their intrinsic value. The article presents an analysis of the concept of efficient market. Also, the market efficiency evolution is reviewed and its current status is analysed. This paper presents also an examination of stock market efficiency in the Baltic countries. Finally, the research methods are reviewed and the methodology of testing the weak-form efficiency in a developing market is suggested.", "title": "" }, { "docid": "059583d1d8a6f99bae3736d900008caa", "text": "Ultraviolet disinfection is a frequent option for eliminating viable organisms in ballast water to fulfill international and national regulations. The objective of this work is to evaluate the reduction of microalgae able to reproduce after UV irradiation, based on their growth features. A monoculture of microalgae Tisochrysis lutea was irradiated with different ultraviolet doses (UV-C 254 nm) by a flow-through reactor. A replicate of each treated sample was held in the dark for 5 days simulating a treatment during the ballasting; another replicate was incubated directly under the light, corresponding to the treatment application during de-ballasting. Periodic measurements of cell density were taken in order to obtain the corresponding growth curves. Irradiated samples depicted a regrowth following a logistic curve in concordance with the applied UV dose. By modeling these curves, it is possible to obtain the initial concentration of organisms able to reproduce for each applied UV dose, thus obtaining the dose-survival profiles, needed to determine the disinfection kinetics. These dose-survival profiles enable detection of a synergic effect between the ultraviolet irradiation and a subsequent dark period; in this sense, the UV dose applied during the ballasting operation and subsequent dark storage exerts a strong influence on microalgae survival. 
The proposed methodology, based on growth modeling, established a framework for comparing the UV disinfection by different devices and technologies on target organisms. This procedure may also assist the understanding of the evolution of treated organisms in more complex assemblages such as those that exist in natural ballast water.", "title": "" }, { "docid": "2c95ebadb6544904b791cdbbbd70dc1c", "text": "This report describes a small heartbeat monitoring system using capacitively coupled ECG sensors. Capacitively coupled sensors using an insulated electrode have been proposed to obtain ECG signals without pasting electrodes directly onto the skin. Although the sensors have better usability than conventional ECG sensors, it is difficult to remove noise contamination. Power-line noise can be a severe noise source that increases when only a single electrode is used. However, a multiple electrode system degrades usability. To address this problem, we propose a noise cancellation technique using an adaptive noise feedback approach, which can improve the availability of the capacitive ECG sensor using a single electrode. An instrumental amplifier is used in the proposed method for the first stage amplifier instead of voltage follower circuits. A microcontroller predicts the noise waveform from an ADC output. To avoid saturation caused by power-line noise, the predicted noise waveform is fed back to an amplifier input through a DAC. We implemented the prototype sensor system to evaluate the noise reduction performance. Measurement results using a prototype board show that the proposed method can suppress 28-dB power-line noise.", "title": "" }, { "docid": "1dd4bed5dd52b18f39c0e96c0a14c153", "text": "Understanding the generalization of deep learning has raised lots of concerns recently, where the learning algorithms play an important role in generalization performance, such as stochastic gradient descent (SGD). Along this line, we particularly study the anisotropic noise introduced by SGD, and investigate its importance for the generalization in deep neural networks. Through a thorough empirical analysis, it is shown that the anisotropic diffusion of SGD tends to follow the curvature information of the loss landscape, and thus is beneficial for escaping from sharp and poor minima effectively, towards more stable and flat minima. We verify our understanding through comparing this anisotropic diffusion with full gradient descent plus isotropic diffusion (i.e. Langevin dynamics) and other types of positiondependent noise.", "title": "" }, { "docid": "6f242ee8418eebdd9fdce50ca1e7cfa2", "text": "HAL is a multidisciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers. L'archive ouverte pluridisciplinaire HAL, est destinée au dépôt età la diffusion de documents scientifiques de niveau recherche, publiés ou non, ´ emanant desétablissements d'enseignement et de recherche français oú etrangers, des laboratoires publics ou privés. Summary. This paper describes the construction and functionality of an Autonomous Fruit Picking Machine (AFPM) for robotic apple harvesting. The key element for the success of the AFPM is the integrated approach which combines state of the art industrial components with the newly designed flexible gripper. The gripper consist of a silicone funnel with a camera mounted inside. 
The proposed concepts guarantee adequate control of the autonomous fruit harvesting operation globally and of the fruit picking cycle particularly. Extensive experiments in the field validate the functionality of the AFPM.", "title": "" }, { "docid": "aa4b36c95058177167c58d4e192c8c1d", "text": "Face detection is a prominent research domain in the field of digital image processing. Out of various algorithms developed so far, Viola–Jones face detection has been highly successful. However, because of its complex nature, there is need to do more exploration in its various phases including training as well as actual face detection to find the scope of further improvement in terms of efficiency as well as accuracy under various constraints so as to detect and process the faces in real time. Its training phase for the screening of large amount of Haar features and generation of cascade classifiers is quite tedious and computationally intensive task. Any modification for improvement in its features or cascade classifiers requires re-training of all the features through example images, which are very large in number. Therefore, there is need to enhance the computational efficiency of training process of Viola–Jones face detection algorithm so that further enhancement in this framework is made easy. There are three main contributions in this research work. Firstly, we have achieved a considerable speedup by parallelizing the training as well as detection of rectangular Haar features based upon Viola–Jones framework on GPU. Secondly, the analysis of features selected through AdaBoost has been done, which can give intuitiveness in developing more innovative and efficient techniques for selecting competitive classifiers for the task of face detection, which can further be generalized for any type of object detection. Thirdly, implementation of parallelization techniques of modified version of Viola–Jones face detection algorithm in combination with skin color filtering to reduce the search space has been done. We have been able to achieve considerable reduction in the search space and time cost by using the skin color filtering in conjunction with the Viola–Jones algorithm. Time cost reduction of the order of 54.31% at the image resolution of 640*480 of GPU time versus CPU time has been achieved by the proposed parallelized algorithm.", "title": "" }, { "docid": "45ec93ccf4b2f6a6b579a4537ca73e9c", "text": "Concurrent collections provide thread-safe, highly-scalable operations, and are widely used in practice. However, programmers can misuse these concurrent collections when composing two operations where a check on the collection (such as non-emptiness) precedes an action (such as removing an entry). Unless the whole composition is atomic, the program contains an atomicity violation bug. In this paper we present the first empirical study of CHECK-THEN-ACT idioms of Java concurrent collections in a large corpus of open-source applications. We catalog nine commonly misused CHECK-THEN-ACT idioms and show the correct usage. We quantitatively and qualitatively analyze 28 widely-used open source Java projects that use Java concurrency collections - comprising 6.4M lines of code. We classify the commonly used idioms, the ones that are the most error-prone, and the evolution of the programs with respect to misused idioms. We implemented a tool, CTADetector, to detect and correct misused CHECK-THEN-ACT idioms. Using CTADetector we found 282 buggy instances. 
We reported 155 to the developers, who examined 90 of them. The developers confirmed 60 as new bugs and accepted our patch. This shows that CHECK-THEN-ACT idioms are commonly misused in practice, and correcting them is important.", "title": "" } ]
scidocsrr
91d3aaa0c760b2f9d43f6f7e15235d23
Can a mind have two time lines? Exploring space-time mapping in Mandarin and English speakers.
[ { "docid": "d159042f8f88d86ffe8e8e186953ba86", "text": "How do we construct abstract ideas like justice, mathematics, or time-travel? In this paper we investigate whether mental representations that result from physical experience underlie people's more abstract mental representations, using the domains of space and time as a testbed. People often talk about time using spatial language (e.g., a long vacation, a short concert). Do people also think about time using spatial representations, even when they are not using language? Results of six psychophysical experiments revealed that people are unable to ignore irrelevant spatial information when making judgments about duration, but not the converse. This pattern, which is predicted by the asymmetry between space and time in linguistic metaphors, was demonstrated here in tasks that do not involve any linguistic stimuli or responses. These findings provide evidence that the metaphorical relationship between space and time observed in language also exists in our more basic representations of distance and duration. Results suggest that our mental representations of things we can never see or touch may be built, in part, out of representations of physical experiences in perception and motor action.", "title": "" }, { "docid": "5b55b1c913aa9ec461c6c51c3d00b11b", "text": "Grounded cognition rejects traditional views that cognition is computation on amodal symbols in a modular system, independent of the brain's modal systems for perception, action, and introspection. Instead, grounded cognition proposes that modal simulations, bodily states, and situated action underlie cognition. Accumulating behavioral and neural evidence supporting this view is reviewed from research on perception, memory, knowledge, language, thought, social cognition, and development. Theories of grounded cognition are also reviewed, as are origins of the area and common misperceptions of it. Theoretical, empirical, and methodological issues are raised whose future treatment is likely to affect the growth and impact of grounded cognition.", "title": "" } ]
[ { "docid": "79c7bf1036877ca867da7595e8cef6e2", "text": "A two-process theory of human information processing is proposed and applied to detection, search, and attention phenomena. Automatic processing is activation of a learned sequence of elements in long-term memory that is initiated by appropriate inputs and then proceeds automatically—without subject control, without stressing the capacity limitations of the system, and without necessarily demanding attention. Controlled processing is a temporary activation of a sequence of elements that can be set up quickly and easily but requires attention, is capacity-limited (usually serial in nature), and is controlled by the subject. A series of studies using both reaction time and accuracy measures is presented, which traces these concepts in the form of automatic detection and controlled, search through the areas of detection, search, and attention. Results in these areas are shown to arise from common mechanisms. Automatic detection is shown to develop following consistent mapping of stimuli to responses over trials. Controlled search is utilized in varied-mapping paradigms, and in our studies, it takes the form of serial, terminating search. The approach resolves a number of apparent conflicts in the literature.", "title": "" }, { "docid": "e591165d8e141970b8263007b076dee1", "text": "Treating a human mind like a machine is an essential component of dehumanization, whereas attributing a humanlike mind to a machine is an essential component of anthropomorphism. Here we tested how a cue closely connected to a person's actual mental experience-a humanlike voice-affects the likelihood of mistaking a person for a machine, or a machine for a person. We predicted that paralinguistic cues in speech are particularly likely to convey the presence of a humanlike mind, such that removing voice from communication (leaving only text) would increase the likelihood of mistaking the text's creator for a machine. Conversely, adding voice to a computer-generated script (resulting in speech) would increase the likelihood of mistaking the text's creator for a human. Four experiments confirmed these hypotheses, demonstrating that people are more likely to infer a human (vs. computer) creator when they hear a voice expressing thoughts than when they read the same thoughts in text. Adding human visual cues to text (i.e., seeing a person perform a script in a subtitled video clip), did not increase the likelihood of inferring a human creator compared with only reading text, suggesting that defining features of personhood may be conveyed more clearly in speech (Experiments 1 and 2). Removing the naturalistic paralinguistic cues that convey humanlike capacity for thinking and feeling, such as varied pace and intonation, eliminates the humanizing effect of speech (Experiment 4). We discuss implications for dehumanizing others through text-based media, and for anthropomorphizing machines through speech-based media. (PsycINFO Database Record", "title": "" }, { "docid": "eee51fc5cd3bee512b01193fa396e19a", "text": "Croston’s method is a widely used to predict inventory demand when it is inter­ mittent. However, it is an ad hoc method with no properly formulated underlying stochastic model. In this paper, we explore possible models underlying Croston’s method and three related methods, and we show that any underlying model will be inconsistent with the prop­ erties of intermittent demand data. 
However, we find that the point forecasts and prediction intervals based on such underlying models may still be useful. [JEL: C53, C22, C51]", "title": "" }, { "docid": "bbcd26c47892476092a779869be7040c", "text": "This article reviews the thyroid system, mainly from a mammalian standpoint. However, the thyroid system is highly conserved among vertebrate species, so the general information on thyroid hormone production and feedback through the hypothalamic-pituitary-thyroid (HPT) axis should be considered for all vertebrates, while species-specific differences are highlighted in the individual articles. This background article begins by outlining the HPT axis with its components and functions. For example, it describes the thyroid gland, its structure and development, how thyroid hormones are synthesized and regulated, the role of iodine in thyroid hormone synthesis, and finally how the thyroid hormones are released from the thyroid gland. It then progresses to detail areas within the thyroid system where disruption could occur or is already known to occur. It describes how thyroid hormone is transported in the serum and into the tissues on a cellular level, and how thyroid hormone is metabolized. There is an in-depth description of the alpha and beta thyroid hormone receptors and their functions, including how they are regulated, and what has been learned from the receptor knockout mouse models. The nongenomic actions of thyroid hormone are also described, such as in glucose uptake, mitochondrial effects, and its role in actin polymerization and vesicular recycling. The article discusses the concept of compensation within the HPT axis and how this fits into the paradigms that exist in thyroid toxicology/endocrinology. There is a section on thyroid hormone and its role in mammalian development: specifically, how it affects brain development when there is disruption to the maternal, the fetal, the newborn (congenital), or the infant thyroid system. Thyroid function during pregnancy is critical to normal development of the fetus, and several spontaneous mutant mouse lines are described that provide research tools to understand the mechanisms of thyroid hormone during mammalian brain development. Overall this article provides a basic understanding of the thyroid system and its components. The complexity of the thyroid system is clearly demonstrated, as are new areas of research on thyroid hormone physiology and thyroid hormone action developing within the field of thyroid endocrinology. This review provides the background necessary to review the current assays and endpoints described in the following articles for rodents, fishes, amphibians, and birds.", "title": "" }, { "docid": "16c6e41746c451d66b43c5736f622cda", "text": "In this study, we report a multimodal energy harvesting device that combines electromagnetic and piezoelectric energy harvesting mechanism. The device consists of piezoelectric crystals bonded to a cantilever beam. The tip of the cantilever beam has an attached permanent magnet which, oscillates within a stationary coil fixed to the top of the package. The permanent magnet serves two purpose (i) acts as a tip mass for the cantilever beam and lowers the resonance frequency, and (ii) acts as a core which oscillates between the inductive coils resulting in electric current generation through Faraday’s effect. Thus, this design combines the energy harvesting from two different mechanisms, piezoelectric and electromagnetic, on the same platform. 
The prototype system was optimized using the finite element software, ANSYS, to find the resonance frequency and stress distribution. The power generated from the fabricated prototype was found to be 0.25W using the electromagnetic mechanism and 0.25mW using the piezoelectric mechanism at 35 g acceleration and 20Hz frequency.", "title": "" }, { "docid": "18738a644f88af299d9e94157f804812", "text": "Twitter is among the fastest-growing microblogging and online social networking services. Messages posted on Twitter (tweets) have been reporting everything from daily life stories to the latest local and global news and events. Monitoring and analyzing this rich and continuous user-generated content can yield unprecedentedly valuable information, enabling users and organizations to acquire actionable knowledge. This article provides a survey of techniques for event detection from Twitter streams. These techniques aim at finding real-world occurrences that unfold over space and time. In contrast to conventional media, event detection from Twitter streams poses new challenges. Twitter streams contain large amounts of meaningless messages and polluted content, which negatively affect the detection performance. In addition, traditional text mining techniques are not suitable, because of the short length of tweets, the large number of spelling and grammatical errors, and the frequent use of informal and mixed language. Event detection techniques presented in literature address these issues by adapting techniques from various fields to the uniqueness of Twitter. This article classifies these techniques according to the event type, detection task, and detection method and discusses commonly used features. Finally, it highlights the need for public benchmarks to evaluate the performance of different detection approaches and various features.", "title": "" }, { "docid": "bd963a55c28304493118028fe5f47bab", "text": "Tables are a common structuring element in many documents, such as PDF files. To reuse such tables, appropriate methods need to be developed, which capture the structure and the content information. We have developed several heuristics which together recognize and decompose tables in PDF files and store the extracted data in a structured data format (XML) for easier reuse. Additionally, we implemented a prototype, which gives the user the ability of making adjustments on the extracted data. Our work shows that purely heuristic-based approaches can achieve good results, especially for lucid tables.", "title": "" }, { "docid": "cb4966a838bbefccbb1b74e5f541ce76", "text": "Theories of human behavior are an important but largely untapped resource for software engineering research. They facilitate understanding of human developers’ needs and activities, and thus can serve as a valuable resource to researchers designing software engineering tools. Furthermore, theories abstract beyond specific methods and tools to fundamental principles that can be applied to new situations. Toward filling this gap, we investigate the applicability and utility of Information Foraging Theory (IFT) for understanding information-intensive software engineering tasks, drawing upon literature in three areas: debugging, refactoring, and reuse. In particular, we focus on software engineering tools that aim to support information-intensive activities, that is, activities in which developers spend time seeking information. 
Regarding applicability, we consider whether and how the mathematical equations within IFT can be used to explain why certain existing tools have proven empirically successful at helping software engineers. Regarding utility, we applied an IFT perspective to identify recurring design patterns in these successful tools, and consider what opportunities for future research are revealed by our IFT perspective.", "title": "" }, { "docid": "a92772d3d3b6bf34ddf750f8d111f511", "text": "More than 20 years ago, researchers proposed that individual differences in performance in such domains as music, sports, and games largely reflect individual differences in amount of deliberate practice, which was defined as engagement in structured activities created specifically to improve performance in a domain. This view is a frequent topic of popular-science writing-but is it supported by empirical evidence? To answer this question, we conducted a meta-analysis covering all major domains in which deliberate practice has been investigated. We found that deliberate practice explained 26% of the variance in performance for games, 21% for music, 18% for sports, 4% for education, and less than 1% for professions. We conclude that deliberate practice is important, but not as important as has been argued.", "title": "" }, { "docid": "424239765383edd8079d90f63b3fde1d", "text": "The availability of huge amounts of medical data leads to the need for powerful data analysis tools to extract useful knowledge. Researchers have long been concerned with applying statistical and data mining tools to improve data analysis on large data sets. Disease diagnosis is one of the applications where data mining tools are proving successful results. Heart disease is the leading cause of death all over the world in the past ten years. Several researchers are using statistical and data mining tools to help health care professionals in the diagnosis of heart disease. Using single data mining technique in the diagnosis of heart disease has been comprehensively investigated showing acceptable levels of accuracy. Recently, researchers have been investigating the effect of hybridizing more than one technique showing enhanced results in the diagnosis of heart disease. However, using data mining techniques to identify a suitable treatment for heart disease patients has received less attention. This paper identifies gaps in the research on heart disease diagnosis and treatment and proposes a model to systematically close those gaps to discover if applying data mining techniques to heart disease treatment data can provide as reliable performance as that achieved in diagnosing heart disease.", "title": "" }, { "docid": "5fefeace0e6b5db92fa26e5201429c4b", "text": "For a real-time visualization of one of the Dutch harbors we needed a realistic looking water surface. The old shader showed the same waves everywhere, but inside a harbor waves have many different directions and sizes. To solve this problem we needed a shader capable of visualizing flow. We developed a new algorithm called Tiled Directional Flow which has several advantages over other implementations.", "title": "" }, { "docid": "33df4246544a1847b09018cc65ffc995", "text": "In this paper, we propose a method for computing partial functional correspondence between non-rigid shapes. We use perturbation analysis to show how removal of shape parts changes the Laplace-Beltrami eigenfunctions, and exploit it as a prior on the spectral representation of the correspondence. 
Corresponding parts are optimization variables in our problem and are used to weight the functional correspondence; we are looking for the largest and most regular (in the Mumford-Shah sense) parts that minimize correspondence distortion. We show that our approach can cope with very challenging correspondence settings.", "title": "" }, { "docid": "255de21131ccf74c3269cc5e7c21820b", "text": "This paper discusses the effect of driving current on frequency response of the two types of light emitting diodes (LEDs), namely, phosphor-based LED and single color LED. The experiments show that the influence of the change of driving current on frequency response of phosphor-based LED is not obvious compared with the single color LED (blue, red and green). The experiments also find that the bandwidth of the white LED was expanded from 1MHz to 32MHz by the pre-equalization strategy and 26Mbit/s transmission speed was taken under Bit Error Ratio of 7.55×10^-6 within 3m by non-return-to-zero on-off-keying modulation. Especially, the frequency response intensity of the phosphor-based LED is little influenced by the fluctuation of the driving current, which meets the requirements that the indoor light source needs to be adjusted in real-time by driving current. As the bandwidth of the single color LED is changed by the driving current obviously, the LED modulation bandwidth should be calculated according to the minimum driving current while we consider the requirement of the VLC transmission speed.", "title": "" }, { "docid": "aed7f6b54aeaf11ec6596d1f04b9db48", "text": "Discourse modes play an important role in writing composition and evaluation. This paper presents a study on the manual and automatic identification of narration, exposition, description, argument and emotion expressing sentences in narrative essays. We annotate a corpus to study the characteristics of discourse modes and describe a neural sequence labeling model for identification. Evaluation results show that discourse modes can be identified automatically with an average F1-score of 0.7. We further demonstrate that discourse modes can be used as features that improve automatic essay scoring (AES). The impacts of discourse modes for AES are also discussed.", "title": "" }, { "docid": "c30d53cd8c350615f20d5baef55de6d0", "text": "The Internet of Things (IoT) is everywhere around us. Smart communicating objects offer the digitalization of lives. Thus, IoT opens new opportunities in criminal investigations such as a protagonist or a witness to the event. Any investigation process involves four phases: firstly the identification of an incident and its evidence, secondly device collection and preservation, thirdly data examination and extraction and then finally data analysis and formalization.\n In recent years, the scientific community sought to develop a common digital framework and methodology adapted to IoT-based infrastructure. However, the difficulty of IoT lies in the heterogeneous nature of the device, lack of standards and the complex architecture. Although digital forensics are considered and adopted in IoT investigations, this work only focuses on collection. Indeed the identification phase is relatively unexplored. It addresses challenges of finding the best evidence and locating hidden devices. So, the traditional method of digital forensics does not fully fit the IoT environment.\n In this paperwork, we investigate the mobility in the context of IoT at the crime scene. 
This paper discusses the data identification and the classification methodology from IoT to looking for the best evidences. We propose tools and techniques to identify and locate IoT devices. We develop the recent concept of \"digital footprint\" in the crime area based on frequencies and interactions mapping between devices. We propose technical and data criteria to efficiently select IoT devices. Finally, the paper introduces a generalist classification table as well as the limits of such an approach.", "title": "" }, { "docid": "f87a4ddb602d9218a0175a9e804c87c6", "text": "We present a novel online audio-score alignment approach for multi-instrument polyphonic music. This approach uses a 2-dimensional state vector to model the underlying score position and tempo of each time frame of the audio performance. The process model is defined by dynamic equations to transition between states. Two representations of the observed audio frame are proposed, resulting in two observation models: a multi-pitch-based and a chroma-based. Particle filtering is used to infer the hidden states from observations. Experiments on 150 music pieces with polyphony from one to four show the proposed approach outperforms an existing offline global string alignment-based score alignment approach. Results also show that the multi-pitch-based observation model works better than the chroma-based one.", "title": "" }, { "docid": "1d1caa539215e7051c25a9f28da48651", "text": "Physiological changes occur in pregnancy to nurture the developing foetus and prepare the mother for labour and delivery. Some of these changes influence normal biochemical values while others may mimic symptoms of medical disease. It is important to differentiate between normal physiological changes and disease pathology. This review highlights the important changes that take place during normal pregnancy.", "title": "" }, { "docid": "cb8fa49be63150e1b85f98a44df691a5", "text": "SQL tuning---the attempt to improve a poorly-performing execution plan produced by the database query optimizer---is a critical aspect of database performance tuning. Ironically, as commercial databases strive to improve on the manageability front, SQL tuning is becoming more of a black art. It requires a high level of expertise in areas like (i) query optimization, run-time execution of query plan operators, configuration parameter settings, and other database internals; (ii) identification of missing indexes and other access structures; (iii) statistics maintained about the data; and (iv) characteristics of the underlying storage system. Since database systems, their workloads, and the data that they manage are not getting any simpler, database users and administrators often rely on trial and error for SQL tuning.\n In this paper, we take the position that the trial-and-error (or, experiment-driven) process of SQL tuning can be automated by the database system in an efficient manner; freeing the user or administrator from this burden in most cases. A number of current approaches to SQL tuning indeed take an experiment-driven approach. We are prototyping a tool, called zTuned, that automates experiment-driven SQL tuning. This paper describes the design choices in zTuned to address three nontrivial issues: (i) how is the SQL tuning logic integrated with the regular query optimizer, (ii) how to plan the experiments to conduct so that a satisfactory (new) plan can be found quickly, and (iii) how to conduct experiments with minimal impact on the user-facing production workload. 
We conclude with a preliminary empirical evaluation and outline promising new directions in automated SQL tuning.", "title": "" }, { "docid": "2f1acb3378e5281efac7db5b3371b131", "text": "Model-based reinforcement learning (RL) is considered to be a promising approach to reduce the sample complexity that hinders model-free RL. However, the theoretical understanding of such methods has been rather limited. This paper introduces a novel algorithmic framework for designing and analyzing model-based RL algorithms with theoretical guarantees. We design a meta-algorithm with a theoretical guarantee of monotone improvement to a local maximum of the expected reward. The meta-algorithm iteratively builds a lower bound of the expected reward based on the estimated dynamical model and sample trajectories, and then maximizes the lower bound jointly over the policy and the model. The framework extends the optimism-in-face-of-uncertainty principle to non-linear dynamical models in a way that requires no explicit uncertainty quantification. Instantiating our framework with simplification gives a variant of model-based RL algorithms Stochastic Lower Bounds Optimization (SLBO). Experiments demonstrate that SLBO achieves stateof-the-art performance when only one million or fewer samples are permitted on a range of continuous control benchmark tasks.1", "title": "" }, { "docid": "6de71e8106d991d2c3d2b845a9e0a67e", "text": "XML repositories are now a widespread means for storing and exchanging information on the Web. As these repositories become increasingly used in dynamic applications such as e-commerce, there is a rapidly growing need for a mechanism to incorporate reactive functionality in an XML setting. Event-condition-action (ECA) rules are a technology from active databases and are a natural method for supporting suchfunctionality. ECA rules can be used for activities such as automatically enforcing document constraints, maintaining repository statistics, and facilitating publish/subscribe applications. An important question associated with the use of a ECA rules is how to statically predict their run-time behaviour. In this paper, we define a language for ECA rules on XML repositories. We then investigate methods for analysing the behaviour of a set of ECA rules, a task which has added complexity in this XML setting compared with conventional active databases.", "title": "" } ]
scidocsrr
c7ad58e8d6ae5583edaeddf08591f9c8
The multi-objective next release problem
[ { "docid": "56d2bd8c5d498d619a5e280afdb93d8f", "text": "The high cost of software maintenance could potentially be greatly reduced by the automatic refactoring of object-oriented programs to increase their understandability, adaptability and extensibility. This paper describes a novel approach in providing automated refactoring support for software maintenance; the formulation of the task as a search problem in the space of alternative designs. Such a search is guided by a quality evaluation function that must accurately reflect refactoring goals. We have constructed a search-based software maintenance tool and report here the results of experimental refactoring of two Java programs, which yielded improvements in terms of the quality functions used. We also discuss the comparative merits of the three quality functions employed and the actual effect on program design that resulted from their use", "title": "" } ]
[ { "docid": "f65ceb6e61ffa4d54555d157226ef784", "text": "In-home healthcare services based on the Internet-of-Things (IoT) have great business potential; however, a comprehensive platform is still missing. In this paper, an intelligent home-based platform, the iHome Health-IoT, is proposed and implemented. In particular, the platform involves an open-platform-based intelligent medicine box (iMedBox) with enhanced connectivity and interchangeability for the integration of devices and services; intelligent pharmaceutical packaging (iMedPack) with communication capability enabled by passive radio-frequency identification (RFID) and actuation capability enabled by functional materials; and a flexible and wearable bio-medical sensor device (Bio-Patch) enabled by the state-of-the-art inkjet printing technology and system-on-chip. The proposed platform seamlessly fuses IoT devices (e.g., wearable sensors and intelligent medicine packages) with in-home healthcare services (e.g., telemedicine) for an improved user experience and service efficiency. The feasibility of the implemented iHome Health-IoT platform has been proven in field trials.", "title": "" }, { "docid": "1d7035cc5b85e13be6ff932d39740904", "text": "This paper investigates an application of mobile sensing: detection of potholes on roads. We describe a system and an associated algorithm to monitor the pothole conditions on the road. This system, that we call the Pothole Detection System, uses Accelerometer Sensor of Android smartphone for detection of potholes and GPS for plotting the location of potholes on Google Maps. Using a simple machine-learning approach, we show that we are able to identify the potholes from accelerometer data. The pothole detection algorithm detects the potholes in real-time. A runtime graph has been shown with the help of a charting software library ‘AChartEngine’. Accelerometer data and pothole data can be mailed to any email address in the form of a ‘.csv’ file. While designing the pothole detection algorithm we have assumed some threshold values on x-axis and z-axis. These threshold values are justified using a neural network technique which confirms an accuracy of 90%-95%. The neural network has been implemented using a machine learning framework available for Android called ‘Encog’. We evaluate our system on the outputs obtained using two, three and four wheelers. Keywords— Machine Learning, Context, Android, Neural Networks, Pothole, Sensor", "title": "" }, { "docid": "303be39b438b8f50eef76ab17f099748", "text": "Incentive-driven advanced attacks have become a major concern to cyber-security. Traditional defense techniques that adopt a passive and static approach by assuming a fixed attack type are insufficient in the face of highly adaptive and stealthy attacks. In particular, a passive defense approach often creates information asymmetry where the attacker knows more about the defender. To this end, moving target defense (MTD) has emerged as a promising way to reverse this information asymmetry. The main idea of MTD is to (continuously) change certain aspects of the system under control to increase the attacker's uncertainty, which in turn increases attack cost/complexity and reduces the chance of a successful exploit in a given amount of time. In this paper, we go one step beyond and show that MTD can be further improved when combined with information disclosure. 
In particular, we consider that the defender adopts a MTD strategy to protect a critical resource across a network of nodes, and propose a Bayesian Stackelberg game model with the defender as the leader and the attacker as the follower. After fully characterizing the defender's optimal migration strategies, we show that the defender can design a signaling scheme to exploit the uncertainty created by MTD to further affect the attacker's behavior for its own advantage. We obtain conditions under which signaling is useful, and show that strategic information disclosure can be a promising way to further reverse the information asymmetry and achieve more efficient active defense.", "title": "" }, { "docid": "ecdb07716fa81f01b5bdcb4f05e988f1", "text": "With the advent of blockchain-enabled IoT applications, there is an increased need for related software patterns, middleware concepts, and testing practices to ensure adequate quality and productivity. IoT and blockchain each provide different design goals, concepts, and practices that must be integrated, including the distributed actor model and fault tolerance from IoT and transactive information integrity over untrustworthy sources from blockchain. Both IoT and blockchain are emerging technologies and both lack codified patterns and practices for development of applications when combined. This paper describes PlaTIBART, which is a platform for transactive IoT blockchain applications with repeatable testing that combines the Actor pattern (which is a commonly used model of computation in IoT) together with a custom Domain Specific Language (DSL) and test network management tools. We show how PlaTIBART has been applied to develop, test, and analyze fault-tolerant IoT blockchain applications.", "title": "" }, { "docid": "5238ae08b15854af54274e1c2b118d54", "text": "One-dimensional fractional anomalous sub-diffusion equations on an unbounded domain are considered in our work. Beginning with the derivation of the exact artificial boundary conditions, the original problem on an unbounded domain is converted into mainly solving an initial-boundary value problem on a finite computational domain. The main contribution of our work, as compared with the previous work, lies in the reduction of fractional differential equations on an unbounded domain by using artificial boundary conditions and construction of the corresponding finite difference scheme with the help of method of order reduction. The difficulty is the treatment of Neumann condition on the artificial boundary, which involves the time-fractional derivative operator. The stability and convergence of the scheme are proven using the discrete energy method. Two numerical examples clarify the effectiveness and accuracy of the proposed method. 2011 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "11c7faadd17458c726c3373d22feb51a", "text": "Where do partisans get their election news and does this influence their candidate assessments? We track web browsing behavior among a national sample during the 2016 presidential campaign and merge these data with a panel survey. We find that election news exposure is polarized; partisans gravitate to \"echo chambers,\" sources disproportionately read by co-partisans. We document levels of partisan selective exposure two to three times higher than prior studies. However, one-sided news consumption did not exacerbate polarization in candidate evaluation. 
We speculate this exposure failed to move attitudes either because partisans’ ill will toward their political opponents had already reached high levels at the outset of the study, or because of modest differences in the partisan slant of the content offered by the majority of news sources. Audience segregation appears attributable less to diverging perspectives, and more to the perceptions of partisans—particularly Republicans—that non-partisan news outlets are biased against them. *The authors thank the Bill Lane Center for the American West and the Hoover Institution for their generous financial support without which this study would not have been possible. They also thank Matthew Gentzkow, Jens Hainmueller, and Jesse Shapiro for their comments on an earlier draft. Fifty years ago, Americans’ held generally centrist political views and their feelings toward party opponents, while lukewarm, were not especially harsh (Iyengar, Sood, and Lelkes, 2012; Haidt and Hetherington, 2012). Party politics did not intrude into interpersonal relations; marriage across party lines occurred frequently (Jennings and Niemi, 1974; Jennings and Niemi, 1981; Jennings, Stoker, and Bowers, 2009). During this era of weak polarization, there was a captive audience for news. Three major news outlets— the evening newscasts broadcast by ABC, CBS, and NBC—attracted a combined audience that exceeded eighty million daily viewers (see Iyengar, 2015). The television networks provided a non-partisan, point-counterpoint perspective on the news. Since their newscasts were nearly identical in content, exposure to the world of public affairs was a uniform—and unifying—experience for voters of all political stripes. That was the state of affairs in 1970. Forty years later, things had changed dramatically. The parties diverged ideologically, although the centrifugal movement was more apparent at the elite rather than mass level (for evidence of elite polarization, see McCarty, Poole, and Rosenthal, 2006; Stonecash, Brewer, and Mariani, 2003; the ongoing debate over ideological polarization within the mass public is summarized in Abramowitz and Saunders, 2008; Fiorina and Abrams, 2009). The rhetoric of candidates and elected officials turned more acrimonious, with attacks on the opposition becoming the dominant form of political speech (Geer, 2010; Grimmer and King, 2011; Fowler and Ridout, 2013). Legislative gridlock and policy stalemate occurred on a regular basis (Mann and Ornstein, 2015). At the level of the electorate, beginning in the mid-1980s, Democrats and Republicans increasingly offered harsh evaluations of opposing party candidates and crude stereotypes of opposing party supporters (Iyengar, Lelkes, and Sood, 2012). Party affiliation had become a sufficiently intense form of social identity to serve as a litmus test for personal values and world view (Mason, 2014; Levendusky, 2009). By 2015, marriage and close personal relations across party lines was a rarity (Huber and Malhotra, 2017; Iyengar, Konitzer, and Tedin, 2017). Partisans increasingly distrusted and disassociated themselves from supporters of the opposing party (Iyengar and Westwood, 2015; Westwood", "title": "" }, { "docid": "b7171ab55a7539d54a4781dacebbfd49", "text": "This paper proposes an image processing technique for the detection of glaucoma which mainly affects the optic disc by increasing the cup size. During early stages it was difficult to detect Glaucoma, which is in fact second leading cause of blindness. 
In this paper glaucoma is categorized through extraction of features from retinal fundus images. The features include (i) Cup to Disc Ratio (CDR), which is one of the primary physiological parameter for the diagnosis of glaucoma and (ii) Ratio of Neuroretinal Rim in inferior, superior, temporal and nasal quadrants i.e. (ISNT quadrants) for verification of the ISNT rule. The novel technique is implemented on 80 retinal images and an accuracy of 97.5% is achieved taking an average computational time of 0.8141 seconds.", "title": "" }, { "docid": "ffbebb5d8f4d269353f95596c156ba5c", "text": "Decision trees and random forests are common classifiers with widespread use. In this paper, we develop two protocols for privately evaluating decision trees and random forests. We operate in the standard two-party setting where the server holds a model (either a tree or a forest), and the client holds an input (a feature vector). At the conclusion of the protocol, the client learns only the model’s output on its input and a few generic parameters concerning the model; the server learns nothing. The first protocol we develop provides security against semi-honest adversaries. Next, we show an extension of the semi-honest protocol that obtains one-sided security against malicious adversaries. We implement both protocols and show that both variants are able to process trees with several hundred decision nodes in just a few seconds and a modest amount of bandwidth. Compared to previous semi-honest protocols for private decision tree evaluation, we demonstrate tenfold improvements in computation and bandwidth.", "title": "" }, { "docid": "8a399beb6c89bbd2e8de9a4fd135c74f", "text": "This paper presents findings emerging from the EU-Funded Games and Learning Alliance (GALA) Network of Excellence on serious games (SGs) that has a focus upon pedagogy-driven design of SGs. The overall framework presented in this paper concerns some elements that we consider key for the design of a new generation of more pedagogically-effective and easily author-able SGs: pedagogical perspectives, learning goals mapping to game mechanics and cognition-based SGs models. The paper also includes two corresponding illustrative case studies, developed by the network, on (1) the analysis of the Re-mission game based on the developed analytical framework of learning and game mechanics and (2) the Sandbox SGs cognition-based model in the Travel in Europe project. Practitioner notes: What is already known about this topic:  Early studies have demonstrated the valuable contributions of game-based approaches in education and training.  Early work has revealed particular strengths in some entertainment game mechanics, some of which look suited for serious and educational games.  There are however no established practices, framework, models in serious games design and development (mechanics, development framework, etc.).  Games and learning have separate mechanics and Serious Games Mechanics have yet to be defined. What this paper adds:  An overall and integrated view of pedagogy-driven considerations related to SG design.  The introduction to an analytical model that maps learning mechanics to game mechanics (LMGM model) towards defining Serious Games mechanics essential for the design and development of games for learning.  The discussion on a cognition-based SG design model, the Sandbox SG model (SBSG), that seems to have the potential to contextualise learning in a meaningful virtual environment. 
Implications for practice and/or policy:  The two models will support the design of a game-based learning environment that encapsulates various learning theories and approaches (in particular considering education at schools and universities) be they objectivist, associative, cognitive or situative, and combine contents with mechanics that sustain interest and engagement.  The definition of Serious Games Mechanics will help bridge the gap between the expected learning outcomes and the desired positive engagement with the game-based learning environment.  Formal deployment of game-based intervention for training and education can be further encouraged if pedagogy plays a key role in the design, development and deployment of serious games. The role of educators/practitioners in the development process is also key to encouraging the uptake of game-based intervention. Introduction The increasing use of pervasive and ubiquitous digital gaming technologies gives a significant opportunity for enhancing education methods, in particular thanks to the games’ ability to appeal a wide population. Yet, despite the digital games’ potential in terms of interactivity, immersion and engagement, much work must still be done to understand how to better design, administrate and evaluate digital games across different learning contexts and targets (Alvarez and Michaud, 2008; Ulicsac, 2010; de Freitas and Liarokapis, 2011). Also, games have now evolved exploiting a variety of modules ranging from social networking and multiplayer online facilities to advanced natural interaction modalities (ISFE, 2010). A new typology of digital games has emerged, namely serious games (SG) for education and training, that are designed with a primary pedagogical goal. Samples such as America’s Army or Code of Everand have become increasingly popular, reaching large numbers of players and engaging them for long periods of time. Early studies undertaken in the US and Europe attest to the valuable contributions of game-based approaches in education (e.g. Kato et al., 2008; Knight et al., 2010). And improvements can be further achieved by better understanding the target audience, pedagogic perspectives, so to make learning experiences more immersive, relevant and engaging. A recent survey by the International Software Federation of Europe (ISFE, 2010) revealed that 74% of those aged 16-19 considered themselves as gamers (n=3000), while 60% of those 20-24, 56% 25-29 and 38% 30-44 considered themselves regular players of games. And the projected growth figures for SGs currently stand at 47% per year until 2015 (Alvarez and Michaud, 2008). The importance of pedagogy at the heart of game development is alien to digital game development for entertainment. The absence of game mechanics and dynamics specifically designed and dedicated for learning purposes is an issue, which makes such intervention unsuited for educational purposes. Certainly, the SGs’ educational potential and actual effectiveness may vary appreciably as a consequence of the pedagogical choices made a priori by the game designer (Squire, 2005). Thus, a more thought-out design is key to meet the end-user and stakeholder requirements that are twofold, on the entertainment and education sides. On the one hand, it is undeniable that a fine-tuned pedagogy plays a major role in sustaining learning effectiveness (Bellotti et al., 2010). 
On the other hand, one of the biggest problems of educational games to date is the inadequate integration of educational and game design principles (e.g. Kiili, 2005; 2008; Lim et al., 2011) and this is also due to the fact that digital game designers and educational experts do not usually share a common vocabulary (Bellotti et al., 2011). In this paper we report the working experience of the Games and Learning Alliance (GALA, www.galanoe.eu) Network of Excellence, funded by the European 7 th Research Framework Programme, which brings together both the research and industry SG development communities, especially from the context of Technology Enhanced Learning in order to give the nascent industry opportunities to develop, find new markets and business models and utilise IP and scientific and pedagogic outcomes. This paper presents the GALA reflections on these topics that rely on a systematic review methodology (eg. Connolly et al., 2012) and the study of models and frameworks that have been developed by the GALA partners. The paper’s main added value consists in providing an overall and integrated view of pedagogically driven design of SGs. The paper begins with an examination of the pedagogical perspectives of SGs and highlights an analytical view of the importance of mapping game mechanics to pedagogical goals. A promising cognition-based model for SG development is discussed, demonstrating some specific development strategies, which are opening up new possibilities for efficient SG production. This paper highlights illustrative case studies on the Remission and Travel in Europe games, developed by the network and associate partners Pedagogical perspective of SGs Pedagogy lies at the heart of the distinction of what is considered as games for learning compared to other entertainment games. From a pedagogical perspective, SGs are not designed mainly for entertainment purposes (Michael and Chen, 2006), but to exploit the game appeal and the consequent player motivation to support development of knowledge and skills (Doughty et al., 2009). SGs offer an effective and engaging experience (Westera et al, 2008) and careful balancing to achieve symbiosis between pedagogy and game-play is needed (Dror, 2008a). Naively transcribing existing material and instructional methods into a SG domain is detrimental (Bruckman, 1999; Dror, 2008b). SGs should have knowledge transference as a core part of their game mechanics (Shute et al, 2009; Baek, 2010). Thus, understanding how game mechanics can relate to relevant educational strategies is needed. Pedagogy is the practice of learning theory, and applying learning theory in practice is a craft that has been developed in traditional education and training contexts for many hundreds of years. In SGs, however, the standard approach has been to take established theories of learning such as associative, cognitive or situative (de Freitas and Jameson, 2012), and to seek to extend these theories within virtual and game environments. Given the many theories of learning available as candidates for application, this approach is arbitrary and possibly ineffectual. However, it is fair to say that, in general, games have to date largely implemented task-centred and cognitive theories; in particular, experiential learning and scaffolded learning approaches have been tested in game environments. 
In a few cases game use has led to the development of new learning theories such as the exploratory learning model (de Freitas and Neumann, 2009); however well established theories mainly prevail. A key issue for SG design is to match the desired learning outcomes with the typical game characteristics. Games are quite varied in terms of features and can potentially offer different kinds of learning experience. So, it is urgent to understand how different game elements can contribute to an effective facilitation of learning and appropriate measures supporting effectiveness assessment are needed. Measures should include both learning outcomes (knowledge transfer including cognitive and skill-based abilities) and engagement (affective learning experience). Schiphorst (2007) stated that technology should be designed “as” experience and not only “for” experience. The reason why games are good learning environments is because they allow the learner to live through experiences, interact with learning objects and have social interactions with others including teachers and peers. Real value exists in designing learning experiences to support an exploratory and open-ended model of learning to encourage learners to make their own reflections and summations and to come to an understanding in their own way (de Freitas and Neumann, 2009). These two aspects of SGs (engagement", "title": "" }, { "docid": "8a523668c8549db8aeb5a412f979a7de", "text": "The avalanche effect is an important performance that any block cipher must have. With the AES algorithm program and experiments, we fully test and research the avalanche effect performance of the AES algorithm, and give the changed cipher-bit numbers when respectively changing every bit of the plaintext and key in turn. The test results show that the AES algorithm has very good avalanche effect Performance indeed.", "title": "" }, { "docid": "46674077de97f82bc543f4e8c0a8243a", "text": "Recently, multiple formulations of vision problems as probabilistic inversions of generative models based on computer graphics have been proposed. However, applications to 3D perception from natural images have focused on low-dimensional latent scenes, due to challenges in both modeling and inference. Accounting for the enormous variability in 3D object shape and 2D appearance via realistic generative models seems intractable, as does inverting even simple versions of the many-tomany computations that link 3D scenes to 2D images. This paper proposes and evaluates an approach that addresses key aspects of both these challenges. We show that it is possible to solve challenging, real-world 3D vision problems by approximate inference in generative models for images based on rendering the outputs of probabilistic CAD (PCAD) programs. Our PCAD object geometry priors generate deformable 3D meshes corresponding to plausible objects and apply affine transformations to place them in a scene. Image likelihoods are based on similarity in a feature space based on standard mid-level image representations from the vision literature. Our inference algorithm integrates single-site and locally blocked Metropolis-Hastings proposals, Hamiltonian Monte Carlo and discriminative datadriven proposals learned from training data generated from our models. 
We apply this approach to 3D human pose estimation and object shape reconstruction from single images, achieving quantitative and qualitative performance improvements over state-of-the-art baselines.", "title": "" }, { "docid": "c5d5dfaa7af58dcd7c0ddc412e08bec2", "text": "Telecommunications fraud is a problem that affects operators all around the world. Operators know that fraud cannot be completely eradicated. The solution to deal with this problem is to minimize the damages and cut down losses by detecting fraud situations as early as possible. Computer systems were developed or acquired, and experts were trained to detect these situations. Still, the operators have the need to evolve this process, in order to detect fraud earlier and also get a better understanding of the fraud attacks they suffer. In this paper the fraud problem is analyzed and a new approach to the problem is designed. This new approach, based on the profiling and KDD (Knowledge Discovery in Data) techniques, supported in a MAS (Multiagent System), does not replace the existing fraud detection systems; it uses them and their results to provide operators new fraud detection methods and new knowledge.", "title": "" }, { "docid": "3767702e22ac34493bb1c6c2513da9f7", "text": "The majority of the online reviews are written in free-text format. It is often useful to have a measure which summarizes the content of the review. One such measure can be sentiment which expresses the polarity (positive/negative) of the review. However, a more granular classification of sentiment, such as rating stars, would be more advantageous and would help the user form a better opinion. In this project, we propose an approach which involves a combination of topic modeling and sentiment analysis to achieve this objective and thereby help predict the rating stars.", "title": "" }, { "docid": "d9428abb0948a96688dc112523d22e20", "text": "A high-performance differential global positioning system (GPS)  receiver with real time kinematics provides absolute localization for driverless cars. However, it is not only susceptible to multipath effect but also unable to effectively fulfill precise error correction in a wide range of driving areas. This paper proposes an accurate GPS-inertial measurement unit (IMU)/dead reckoning (DR) data fusion method based on a set of predictive models and occupancy grid constraints. First, we employ a set of autoregressive and moving average (ARMA) equations that have different structural parameters to build maximum likelihood models of raw navigation. Second, both grid constraints and spatial consensus checks on all predictive results and current measurements are required to have removal of outliers. Navigation data that satisfy stationary stochastic process are further fused to achieve accurate localization results. Third, the standard deviation of multimodal data fusion can be pre-specified by grid size. Finally, we perform a lot of field tests on a diversity of real urban scenarios. The experimental results demonstrate that the method can significantly smooth small jumps in bias and considerably reduce accumulated position errors due to DR. 
With low computational complexity, the position accuracy of our method surpasses existing state-of-the-art methods on the same dataset, and the new data fusion method is practically applied in our driverless car.", "title": "" }, { "docid": "198a0ecb1a1bd4f0a7e4dc757c49ea3d", "text": "There have been a number of studies that have examined the factor structure of the Wechsler Adult Intelligence Scale IV (WAIS-IV) using the standardization sample. In this study, we investigate its factor structure on a clinical neuropsychology sample of mixed aetiology. Correlated factor, higher-order and bi-factor models are all tested. Overall, the results suggest that the WAIS-IV will be suitable for use with this population.", "title": "" }, { "docid": "85b72dedb0c874fcfbb71c1d6f9fce42", "text": "In this paper, we present an optimization of the Odlyzko and Schönhage algorithm that efficiently computes the Zeta function at large height on the critical line, together with computation of zeros of the Riemann Zeta function thanks to an implementation of this technique. The first family of computations consists in the verification of the Riemann Hypothesis on all the first 10 non-trivial zeros. The second family of computations consists in verifying the Riemann Hypothesis at very large height for different heights, while collecting statistics in these zones. For example, we were able to compute two billion zeros from the 10-th zero of the Riemann Zeta function.", "title": "" }, { "docid": "5aead46411e6adc442509f2ce11167e9", "text": "We present an outline of our newly created multimodal dialogue corpus that is constructed from public domain movies. Dialogues in movies are useful sources for analyzing human communication patterns. In addition, they can be used to train machine-learning-based dialogue processing systems. However, the movie files are processing intensive and they contain large portions of non-dialogue segments. Therefore, we created a corpus that contains only dialogue segments from movies. The corpus contains 165,368 dialogue segments taken from 1,722 movies. These dialogues are automatically segmented by using deep neural network-based voice activity detection with filtering rules. Our corpus can reduce the human workload and machine-processing effort required to analyze human dialogue behavior by using movies.", "title": "" }, { "docid": "9737feb4befdaf995b1f9e88535577ec", "text": "This paper addresses the problem of detecting the presence of malware that leave periodic traces in network traffic. This characteristic behavior of malware was found to be surprisingly prevalent in a parallel study. To this end, we propose a visual analytics solution that supports both automatic detection and manual inspection of periodic signals hidden in network traffic. The detected periodic signals are visually verified in an overview using a circular graph and two stacked histograms as well as in detail using deep packet inspection. Our approach offers the capability to detect complex periodic patterns, but avoids the unverifiability issue often encountered in related work. The periodicity assumption imposed on malware behavior is a relatively weak assumption, but initial evaluations with a simulated scenario as well as a publicly available network capture demonstrate its applicability.", "title": "" }, { "docid": "68388b2f67030d85030d5813df2e147d", "text": "Radio signal propagation modeling plays an important role in designing wireless communication systems. 
The propagation models are used to calculate the number and position of base stations and predict the radio coverage. Different models have been developed to predict radio propagation behavior for wireless communication systems in different operating environments. In this paper we shall limit our discussion to the latest achievements in radio propagation modeling related to tunnels. The main modeling approaches used for propagation in tunnels are reviewed, namely, numerical methods for solving Maxwell equations, waveguide or modal approach, ray tracing based methods and two-slope path loss modeling. They are discussed in terms of modeling complexity and required information on the environment including tunnel geometry and electric as well as magnetic properties of walls.", "title": "" }, { "docid": "c699ede2caeb5953decc55d8e42c2741", "text": "Traditionally, two distinct approaches have been employed for exploratory factor analysis: maximum likelihood factor analysis and principal component analysis. A third alternative, called regularized exploratory factor analysis, was introduced recently in the psychometric literature. Small sample size is an important issue that has received considerable discussion in the factor analysis literature. However, little is known about the differential performance of these three approaches to exploratory factor analysis in a small sample size scenario. A simulation study and an empirical example demonstrate that regularized exploratory factor analysis may be recommended over the two traditional approaches, particularly when sample sizes are small (below 50) and the sample covariance matrix is near singular.", "title": "" } ]
scidocsrr
04a656662cbf463f1f546af6f4726840
A Survey on Neural Network-Based Summarization Methods
[ { "docid": "56826bfc5f48105387fd86cc26b402f1", "text": "It is difficult to identify sentence importance from a single point of view. In this paper, we propose a learning-based approach to combine various sentence features. They are categorized as surface, content, relevance and event features. Surface features are related to extrinsic aspects of a sentence. Content features measure a sentence based on contentconveying words. Event features represent sentences by events they contained. Relevance features evaluate a sentence from its relatedness with other sentences. Experiments show that the combined features improved summarization performance significantly. Although the evaluation results are encouraging, supervised learning approach requires much labeled data. Therefore we investigate co-training by combining labeled and unlabeled data. Experiments show that this semisupervised learning approach achieves comparable performance to its supervised counterpart and saves about half of the labeling time cost.", "title": "" }, { "docid": "64fc1433249bb7aba59e0a9092aeee5e", "text": "In this paper, we propose two generic text summarization methods that create text summaries by ranking and extracting sentences from the original documents. The first method uses standard IR methods to rank sentence relevances, while the second method uses the latent semantic analysis technique to identify semantically important sentences, for summary creations. Both methods strive to select sentences that are highly ranked and different from each other. This is an attempt to create a summary with a wider coverage of the document's main content and less redundancy. Performance evaluations on the two summarization methods are conducted by comparing their summarization outputs with the manual summaries generated by three independent human evaluators. The evaluations also study the influence of different VSM weighting schemes on the text summarization performances. Finally, the causes of the large disparities in the evaluators' manual summarization results are investigated, and discussions on human text summarization patterns are presented.", "title": "" }, { "docid": "0ac0f9965376f5547a2dabd3d06b6b96", "text": "A sentence extract summary of a document is a subset of the document's sentences that contains the main ideas in the document. We present an approach to generating such summaries, a hidden Markov model that judges the likelihood that each sentence should be contained in the summary. We compare the results of this method with summaries generated by humans, showing that we obtain significantly higher agreement than do earlier methods.", "title": "" }, { "docid": "c0a67a4d169590fa40dfa9d80768ef09", "text": "Excerpts of technical papers and magazine articles that serve the purposes of conventional abstracts have been created entirely by automatic means. In the exploratory research described, the complete text of an article in machine-readable form i s scanned by a n IBM 704 data-processing machine and analyzed in accordance with a standard program. Statistical information derived from word frequency and distribution is used by the machine to compute a relative measure of significance, first for individual words and then for sentences. Sentences scoring highest in significance are extracted and printed out to become the \" auto-abstract. \" Introduction", "title": "" } ]
[ { "docid": "1384bc0c18a47630707dfebc036d8ac0", "text": "Recent research has demonstrated the important of ontology and its applications. For example, while designing adaptive learning materials, designers need to refer to the ontology of a subject domain. Moreover, ontology can show the whole picture and core knowledge about a subject domain. Research from literature also suggested that graphical representation of ontology can reduce the problems of information overload and learning disorientation for learners. However, ontology constructions used to rely on domain experts in the past; it is a time consuming and high cost task. Ontology creation for emerging new domains like e-learning is even more challenging. The aim of this paper is to construct e-learning domain concept maps, an alternative form of ontology, from academic articles. We adopt some relevant journal articles and conferences papers in e-learning domain as data sources, and apply text-mining techniques to automatically construct concept maps for e-learning domain. The constructed concept maps can provide a useful reference for researchers, who are new to e-leaning field, to study related issues, for teachers to design adaptive courses, and for learners to understand the whole picture of e-learning domain knowledge", "title": "" }, { "docid": "2630e22fb604a0657aef4c7d8e56a89f", "text": "Social media has recently gained tremendous fame as a highly impactful channel of communication in these modern times of digitized living. It has been put on a pedestal across varied streams for facilitating participatory interaction amongst businesses, groups, societies, organizations, consumers, communities, forums, and the like. This subject has received increased attention in the literature with many of its practical applications including social media marketing (SMM) being elaborated, analysed, and recorded by many studies. This study is aimed at collating the existing research on SMM to present a review of seventy one articles that will bring together the many facets of this rapidly blooming media marketing form. 
The surfacing limitations in the literature on social media have also been identified and potential research directions have been offered.", "title": "" }, { "docid": "ea47210a071a275d9fdd204d0213d3d8", "text": "In this paper the role of logic as a formal basis to exploit the query evaluation process of the boolean model and of weighted boolean models is analysed. The proposed approach is based on the expression of the constraint imposed by a query term on a document representation by means of the implication connective (by a fuzzy implication in the case of weighted terms). 
A logical formula corresponds to a query evaluation structure, and the degree of relevance of a document to a user query is obtained as the truth value of the formula expressing the evaluation structure of the considered query under the interpretation corresponding with a document and the query itself.", "title": "" }, { "docid": "a979b0a02f2ade809c825b256b3c69d8", "text": "The objective of this review is to analyze in detail the microscopic structure and relations among muscular fibers, endomysium, perimysium, epimysium and deep fasciae. In particular, the multilayer organization and the collagen fiber orientation of these elements are reported. The endomysium, perimysium, epimysium and deep fasciae have not just a role of containment, limiting the expansion of the muscle with the disposition in concentric layers of the collagen tissue, but are fundamental elements for the transmission of muscular force, each one with a specific role. From this review it appears that the muscular fibers should not be studied as isolated elements, but as a complex inseparable from their fibrous components. The force expressed by a muscle depends not only on its anatomical structure, but also the angle at which its fibers are attached to the intramuscular connective tissue and the relation with the epimysium and deep fasciae.", "title": "" }, { "docid": "331f0702515e1705a5ac02375f1979ac", "text": "Pavement management systems require detailed information of the current state of the roads to take appropriate actions to optimize expenditure on maintenance and rehabilitation. In particular, the presence of cracks is a cardinal aspect to be considered. This article presents a solution based on an instrumented vehicle equipped with an imaging system, two Inertial Profilers, a Differential Global Positioning System, and a webcam. Information about the state of the road is acquired at normal road speed. A method based on the use of Gabor filters is used to detect the longitudinal and transverse cracks. The methodologies used to create Gabor filter banks and the use of the filtered images as descriptors for subsequent classifiers are discussed in detail. Three different methodologies for setting the threshold of the classifiers are also evaluated. Finally, an AdaBoost algorithm is used for selecting and combining the classifiers, thus improving the results provided by a single classifier. A large database has been acquired and used to train and test the proposed system and methods, and suitable results have been obtained in comparison with other refer-", "title": "" }, { "docid": "8dd6a3cbe9ddb4c50beb83355db5aa5a", "text": "Fuzzy logic controllers have gained popularity in the past few decades with highly successful implementation in many fields. Fuzzy logic enables designers to control complex systems more effectively than traditional methods. Teaching students fuzzy logic in a laboratory can be a time-consuming and an expensive task. This paper presents a low-cost educational microcontroller-based tool for fuzzy logic controlled line following mobile robot. The robot is used in the second year of undergraduate teaching in an elective course in the department of computer engineering of the Near East University. Hardware details of the robot and the software implementing the fuzzy logic control algorithm are given in the paper. 2009 Wiley Periodicals, Inc. 
Comput Appl Eng Educ; Published online in Wiley InterScience (www.interscience.wiley.com); DOI 10.1002/cae.20347", "title": "" }, { "docid": "920c977ce3ed5f310c97b6fcd0f5bef4", "text": "In this paper, different automatic registration schemes based on different optimization techniques in conjunction with different similarity measures are compared in terms of accuracy and efficiency. Results from every optimization procedure are quantitatively evaluated with respect to the manual registration, which is the standard registration method used in clinical practice. The comparison has shown automatic registration schemes based on CD consist of an accurate and reliable method that can be used in clinical ophthalmology, as a satisfactory alternative to the manual method. Key-Words: multimodal image registration, optimization algorithms, similarity metrics, retinal images", "title": "" }, { "docid": "ae3897a20c5dc1479b1746287382e677", "text": "We present a novel approach to parameterize a mesh with disk topology to the plane in a shape-preserving manner. Our key contribution is a local/global algorithm, which combines a local mapping of each 3D triangle to the plane, using transformations taken from a restricted set, with a global \"stitch\" operation of all triangles, involving a sparse linear system. The local transformations can be taken from a variety of families, e.g. similarities or rotations, generating different types of parameterizations. In the first case, the parameterization tries to force each 2D triangle to be an as-similar-as-possible version of its 3D counterpart. This is shown to yield results identical to those of the LSCM algorithm. In the second case, the parameterization tries to force each 2D triangle to be an as-rigid-as-possible version of its 3D counterpart. This approach preserves shape as much as possible. It is simple, effective, and fast, due to pre-factoring of the linear system involved in the global phase. Experimental results show that our approach provides almost isometric parameterizations and obtains more shape-preserving results than other state-of-the-art approaches. We present also a more general \"hybrid\" parameterization model which provides a continuous spectrum of possibilities, controlled by a single parameter. The two cases described above lie at the two ends of the spectrum. We generalize our local/global algorithm to compute these parameterizations. The local phase may also be accelerated by parallelizing the independent computations per triangle.", "title": "" }, { "docid": "c2bd875199c6da6ce0f7c46349c7c937", "text": "This chapter presents a survey of contemporary NLP research on Multiword Expressions (MWEs). MWEs pose a huge problem to precise language processing due to their idiosyncratic nature and diversity of their semantic, lexical, and syntactical properties. The chapter begins by considering MWEs definitions, describes some MWEs classes, indicates problems MWEs generate in language applications and their possible solutions, presents methods of MWE encoding in dictionaries and their automatic detection in corpora. The chapter goes into more detail on a particular MWE class called Verb-Noun Constructions (VNCs). Due to their frequency in corpus and unique characteristics, VNCs present a research problem in their own right. Having outlined several approaches to VNC representation in lexicons, the chapter explains the formalism of Lexical Function as a possible VNC representation. 
Such representation may serve as a tool for VNCs automatic detection in a corpus. The latter is illustrated on Spanish material applying some supervised learning methods commonly used for NLP tasks.", "title": "" }, { "docid": "7e840aa656c74c98ec943d1632cb1332", "text": "Pixel-based methods offer unique potential for modifying existing interfaces independent of their underlying implementation. Prior work has demonstrated a variety of modifications to existing interfaces, including accessibility enhancements, interface language translation, testing frameworks, and interaction techniques. But pixel-based methods have also been limited in their understanding of the interface and therefore the complexity of modifications they can support. This work examines deeper pixel-level understanding of widgets and the resulting capabilities of pixel-based runtime enhancements. Specifically, we present three new sets of methods: methods for pixel-based modeling of widgets in multiple states, methods for managing the combinatorial complexity that arises in creating a multitude of runtime enhancements, and methods for styling runtime enhancements to preserve consistency with the design of an existing interface. We validate our methods through an implementation of Moscovich et al.'s Sliding Widgets, a novel runtime enhancement that could not have been implemented with prior pixel-based methods.", "title": "" }, { "docid": "25b5775c7f45fac087ff8fed1005f061", "text": "A vast amount of text data is recorded in the forms of repair verbatim in railway maintenance sectors. Efficient text mining of such maintenance data plays an important role in detecting anomalies and improving fault diagnosis efficiency. However, unstructured verbatim, high-dimensional data, and imbalanced fault class distribution pose challenges for feature selections and fault diagnosis. We propose a bilevel feature extraction-based text mining that integrates features extracted at both syntax and semantic levels with the aim to improve the fault classification performance. We first perform an improved X2 statistics-based feature selection at the syntax level to overcome the learning difficulty caused by an imbalanced data set. Then, we perform a prior latent Dirichlet allocation-based feature selection at the semantic level to reduce the data set into a low-dimensional topic space. Finally, we fuse fault features derived from both syntax and semantic levels via serial fusion. The proposed method uses fault features at different levels and enhances the precision of fault diagnosis for all fault classes, particularly minority ones. Its performance has been validated by using a railway maintenance data set collected from 2008 to 2014 by a railway corporation. It outperforms traditional approaches.", "title": "" }, { "docid": "a0ebe19188abab323122a5effc3c4173", "text": "In this paper, we present LOADED, an algorithm for outlier detection in evolving data sets containing both continuous and categorical attributes. LOADED is a tunable algorithm, wherein one can trade off computation for accuracy so that domain-specific response times are achieved. Experimental results show that LOADED provides very good detection and false positive rates, which are several times better than those of existing distance-based schemes.", "title": "" }, { "docid": "f3864982e2e03ce4876a6685d74fb84c", "text": "The central nervous system (CNS) operates by a fine-tuned balance between excitatory and inhibitory signalling. 
In this context, the inhibitory neurotransmission may be of particular interest as it has been suggested that such neuronal pathways may constitute 'command pathways' and the principle of 'dis-inhibition' leading ultimately to excitation may play a fundamental role (Roberts, E. (1974). Adv. Neurol., 5: 127-143). The neurotransmitter responsible for this signalling is gamma-aminobutyrate (GABA) which was first discovered in the CNS as a curious amino acid (Roberts, E., Frankel, S. (1950). J. Biol. Chem., 187: 55-63) and later proposed as an inhibitory neurotransmitter (Curtis, D.R., Watkins, J.C. (1960). J. Neurochem., 6: 117-141; Krnjevic, K., Schwartz, S. (1967). Exp. Brain Res., 3: 320-336). The present review will describe aspects of GABAergic neurotransmission related to homeostatic mechanisms such as biosynthesis, metabolism, release and inactivation. Additionally, pharmacological and therapeutic aspects of this will be discussed.", "title": "" }, { "docid": "fa2c86d4c0716580415fce8db324fd04", "text": "One of the key elements in describing a software development method is the roles that are assigned to the members of the software team. This article describes our experience in assigning roles to students who are involved in the development of software projects, working in Extreme Programming teams. This experience, which is based on 25 such projects, teaches us that a personal role for each teammate increases personal responsibility while maintaining the essence of the software development method. In this paper we discuss ways in which different software development methods address the place of roles in a software development team. We also share our experience in refining role specifications and suggest a way to achieve and measure progress by using the perspective of the different roles.", "title": "" }, { "docid": "5b0eef5eed1645ae3d88bed9b20901b9", "text": "We present a radically new approach to fully homomorphic encryption (FHE) that dramatically improves performance and bases security on weaker assumptions. A central conceptual contribution in our work is a new way of constructing leveled fully homomorphic encryption schemes (capable of evaluating arbitrary polynomial-size circuits), without Gentry’s bootstrapping procedure. Specifically, we offer a choice of FHE schemes based on the learning with error (LWE) or ring-LWE (RLWE) problems that have 2 security against known attacks. For RLWE, we have: • A leveled FHE scheme that can evaluate L-level arithmetic circuits with Õ(λ · L) per-gate computation – i.e., computation quasi-linear in the security parameter. Security is based on RLWE for an approximation factor exponential in L. This construction does not use the bootstrapping procedure. • A leveled FHE scheme that uses bootstrapping as an optimization, where the per-gate computation (which includes the bootstrapping procedure) is Õ(λ), independent of L. Security is based on the hardness of RLWE for quasi-polynomial factors (as opposed to the sub-exponential factors needed in previous schemes). We obtain similar results for LWE, but with worse performance. We introduce a number of further optimizations to our schemes. As an example, for circuits of large width – e.g., where a constant fraction of levels have width at least λ – we can reduce the per-gate computation of the bootstrapped version to Õ(λ), independent of L, by batching the bootstrapping operation. Previous FHE schemes all required Ω̃(λ) computation per gate. 
At the core of our construction is a much more effective approach for managing the noise level of lattice-based ciphertexts as homomorphic operations are performed, using some new techniques recently introduced by Brakerski and Vaikuntanathan (FOCS 2011). ∗Sponsored by the Air Force Research Laboratory (AFRL). Disclaimer: This material is based on research sponsored by DARPA under agreement number FA8750-11-C-0096 and FA8750-11-2-0225. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of DARPA or the U.S. Government. Approved for Public Release, Distribution Unlimited. †This material is based on research sponsored by DARPA under Agreement number FA8750-11-2-0225. All disclaimers as above apply.", "title": "" }, { "docid": "74beab63358ece0a7b4568dd40a4aea3", "text": "We consider the problem of learning the canonical parameters specifying an undirected graphical model (Markov random field) from the mean parameters. For graphical models representing a minimal exponential family, the canonical parameters are uniquely determined by the mean parameters, so the problem is feasible in principle. The goal of this paper is to investigate the computational feasibility of this statistical task. Our main result shows that parameter estimation is in general intractable: no algorithm can learn the canonical parameters of a generic pair-wise binary graphical model from the mean parameters in time bounded by a polynomial in the number of variables (unless RP = NP). Indeed, such a result has been believed to be true (see [1]) but no proof was known. Our proof gives a polynomial time reduction from approximating the partition function of the hard-core model, known to be hard, to learning approximate parameters. Our reduction entails showing that the marginal polytope boundary has an inherent repulsive property, which validates an optimization procedure over the polytope that does not use any knowledge of its structure (as required by the ellipsoid method and others).", "title": "" }, { "docid": "c9878a454c91fec094fce02e1ac49348", "text": "Autonomous walking bipedal machines, possibly useful for rehabilitation and entertainment purposes, need a high energy efficiency, offered by the concept of ‘Passive Dynamic Walking’ (exploitation of the natural dynamics of the robot). 2D passive dynamic bipeds have been shown to be inherently stable, but in the third dimension two problematic degrees of freedom are introduced: yaw and roll. We propose a design for a 3D biped with a pelvic body as a passive dynamic compensator, which will compensate for the undesired yaw and roll motion, and allow the rest of the robot to move as if it were a 2D machine. To test our design, we perform numerical simulations on a multibody model of the robot. With limit cycle analysis we calculate the stability of the robot when walking at its natural speed. The simulation shows that the compensator, indeed, effectively compensates for both the yaw and the roll motion, and that the walker is stable.", "title": "" }, { "docid": "38c32734ecc5d0e1c3bb30f97f9c9798", "text": "Dengue has emerged as an international public health problem. 
Reasons for the resurgence of dengue in the tropics and subtropics are complex and include unprecedented urbanization with substandard living conditions, lack of vector control, virus evolution, and international travel. Of all these factors, urbanization has probably had the most impact on the amplification of dengue within a given country, and travel has had the most impact for the spread of dengue from country to country and continent to continent. Epidemics of dengue, their seasonality, and oscillations over time are reflected by the epidemiology of dengue in travelers. Sentinel surveillance of travelers could augment existing national public health surveillance systems.", "title": "" } ]
scidocsrr
8eaf4f6e40e4a0c9585c8d572cd77814
A Horizontal Fragmentation Algorithm for the Fact Relation in a Distributed Data Warehouse
[ { "docid": "cd892dec53069137c1c2cfe565375c62", "text": "Optimal application performance on a Distributed Object Based System (DOBS) requires class fragmentation and the development of allocation schemes to place fragments at distributed sites so data transfer is minimized. Fragmentation enhances application performance by reducing the amount of irrelevant data accessed and the amount of data transferred unnecessarily between distributed sites. Algorithms for effecting horizontal and vertical fragmentation ofrelations exist, but fragmentation techniques for class objects in a distributed object based system are yet to appear in the literature. This paper first reviews a taxonomy of the fragmentation problem in a distributed object base. The paper then contributes by presenting a comprehensive set of algorithms for horizontally fragmenting the four realizable class models on the taxonomy. The fundamental approach is top-down, where the entity of fragmentation is the class object. Our approach consists of first generating primary horizontal fragments of a class based on only applications accessing this class, and secondly generating derived horizontal fragments of the class arising from primary fragments of its subclasses, its complex attributes (contained classes), and/or its complex methods classes. Finally, we combine the sets of primary and derived fragments of each class to produce the best possible fragments. Thus, these algorithms account for inheritance and class composition hierarchies as well as method nesting among objects, and are shown to be polynomial time.", "title": "" } ]
[ { "docid": "d1114f1ced731a700d40dd97fe62b82b", "text": "Agricultural sector is playing vital role in Indian economy, in which irrigation mechanism is of key concern. Our paper aims to find the exact field condition and to control the wastage of water in the field and to provide exact controlling of field by using the drip irrigation, atomizing the agricultural environment by using the components and building the necessary hardware. For the precisely monitoring and controlling of the agriculture filed, different types of sensors were used. To implement the proposed system ARM LPC2148 Microcontroller is used. The irrigation mechanism is monitored and controlled more efficiently by the proposed system, which is a real time feedback control system. GSM technology is used to inform the end user about the exact field condition. Actually this method of irrigation system has been proposed primarily to save resources, yield of crops and farm profitability.", "title": "" }, { "docid": "80c21770ada160225e17cb9673fff3b3", "text": "This paper describes a model to address the task of named-entity recognition on Indonesian microblog messages due to its usefulness for higher-level tasks or text mining applications on Indonesian microblogs. We view our task as a sequence labeling problem using machine learning approach. We also propose various word-level and orthographic features, including the ones that are specific to the Indonesian language. Finally, in our experiment, we compared our model with a baseline model previously proposed for Indonesian formal documents, instead of microblog messages. Our contribution is two-fold: (1) we developed NER tool for Indonesian microblog messages, which was never addressed before, (2) we developed NER corpus containing around 600 Indonesian microblog messages available for future development.", "title": "" }, { "docid": "aed80386c32e16f70fff3cbc44b07d97", "text": "The vision for the \"Web of Things\" (WoT) aims at bringing physical objects of the world into the World Wide Web. The Web is constantly evolving and has changed over the last couple of decades and the changes have spurted new areas of growth. The primary focus of the WoT is to bridge the gap between physical and digital worlds over a common and widely used platform, which is the Web. Everyday physical \"things\", which are not Web-enabled, and have limited or zero computing capability, can be accommodated within the Web. As a step towards this direction, this work focuses on the specification of a thing, its descriptors and functions that could participate in the process of its discovery and operations. Besides, in this model for the WoT, we also propose a semantic Web-based architecture to integrate these things as Web resources to further demystify the realization of the WoT vision.", "title": "" }, { "docid": "c3c5931200ff752d8285cc1068e779ee", "text": "Speech-driven facial animation is the process which uses speech signals to automatically synthesize a talking character. The majority of work in this domain creates a mapping from audio features to visual features. This often requires post-processing using computer graphics techniques to produce realistic albeit subject dependent results. We present a system for generating videos of a talking head, using a still image of a person and an audio clip containing speech, that doesn’t rely on any handcrafted intermediate features. 
To the best of our knowledge, this is the first method capable of generating subject independent realistic videos directly from raw audio. Our method can generate videos which have (a) lip movements that are in sync with the audio and (b) natural facial expressions such as blinks and eyebrow movements 1. We achieve this by using a temporal GAN with 2 discriminators, which are capable of capturing different aspects of the video. The effect of each component in our system is quantified through an ablation study. The generated videos are evaluated based on their sharpness, reconstruction quality, and lip-reading accuracy. Finally, a user study is conducted, confirming that temporal GANs lead to more natural sequences than a static GAN-based approach.", "title": "" }, { "docid": "812c41737bb2a311d45c5566f773a282", "text": "Acceleration, sprint and agility performance are crucial in sports like soccer. There are few studies regarding the effect of training on youth soccer players in agility performance and in sprint distances shorter than 30 meter. Therefore, the aim of the recent study was to examine the effect of a high-intensity sprint and plyometric training program on 13-year-old male soccer players. A training group of 14 adolescent male soccer players, mean age (±SD) 13.5 years (±0.24) followed an eight week intervention program for one hour per week, and a group of 12 adolescent male soccer players of corresponding age, mean age 13.5 years (±0.23) served as control a group. Preand post-tests assessed 10-m linear sprint, 20-m linear sprint and agility performance. Results showed a significant improvement in agility performance, pre 8.23 s (±0.34) to post 7.69 s (± 0.34) (p<0.01), and a significant improvement in 0-20m linear sprint, pre 3.54s (±0.17) to post 3.42s (±0.18) (p<0.05). In 0-10m sprint the participants also showed an improvement, pre 2.02s (±0.11) to post 1.96s (± 0.11), however this was not significant. The correlation between 10-m sprint and agility was r = 0.53 (p<0.01), and between 20-m linear sprint and agility performance, r = 0.67 (p<0.01). The major finding in the study is the significant improvement in agility performance and in 0-20 m linear sprint in the intervention group. These findings suggest that organizing the training sessions with short-burst high-intensity sprint and plyometric exercises interspersed with adequate recovery time, may result in improvements in both agility and in linear sprint performance in adolescent male soccer players. Another finding is the correlation between linear sprint and agility performance, indicating a difference when compared to adults. 4 | Mathisen: EFFECT OF HIGH-SPEED...", "title": "" }, { "docid": "ccff1c7fa149a033b49c3a6330d4e0f3", "text": "Stroke is the leading cause of permanent adult disability in the U.S., frequently resulting in chronic motor impairments. Rehabilitation of the upper limb, particularly the hand, is especially important as arm and hand deficits post-stroke limit the performance of activities of daily living and, subsequently, functional independence. Hand rehabilitation is challenging due to the complexity of motor control of the hand. New instrumentation is needed to facilitate examination of the hand. Thus, a novel actuated exoskeleton for the index finger, the FingerBot, was developed to permit the study of finger kinetics and kinematics under a variety of conditions. 
Two such novel environments, one applying a spring-like extension torque proportional to angular displacement at each finger joint and another applying a constant extension torque at each joint, were compared in 10 stroke survivors with the FingerBot. Subjects attempted to reach targets located throughout the finger workspace. The constant extension torque assistance resulted in a greater workspace area (p < 0.02) and a larger active range of motion for the metacarpophalangeal joint (p < 0.01) than the spring-like assistance. Additionally, accuracy in terms of reaching the target was greater with the constant extension assistance as compared to no assistance. The FingerBot can be a valuable tool in assessing various hand rehabilitation paradigms following stroke.", "title": "" }, { "docid": "177c5969917e04ea94773d1c545fae82", "text": "Attitudes toward global warming are influenced by various heuristics, which may distort policy away from what is optimal for the well-being of people. These possible distortions, or biases, include: a focus on harms that we cause, as opposed to those that we can remedy more easily; a feeling that those who cause a problem should fix it; a desire to undo a problem rather than compensate for its presence; parochial concern with one’s own group (nation); and neglect of risks that are not available. Although most of these biases tend to make us attend relatively too much to global warming, other biases, such as wishful thinking, cause us to attend too little. I discuss these possible effects and illustrate some of them with an experiment conducted on the World Wide Web.", "title": "" }, { "docid": "34382f9716058d727f467716350788a7", "text": "The structure of the brain and the nature of evolution suggest that, despite its uniqueness, language likely depends on brain systems that also subserve other functions. The declarative/procedural (DP) model claims that the mental lexicon of memorized word-specific knowledge depends on the largely temporal-lobe substrates of declarative memory, which underlies the storage and use of knowledge of facts and events. The mental grammar, which subserves the rule-governed combination of lexical items into complex representations, depends on a distinct neural system. This system, which is composed of a network of specific frontal, basal-ganglia, parietal and cerebellar structures, underlies procedural memory, which supports the learning and execution of motor and cognitive skills, especially those involving sequences. The functions of the two brain systems, together with their anatomical, physiological and biochemical substrates, lead to specific claims and predictions regarding their roles in language. These predictions are compared with those of other neurocognitive models of language. Empirical evidence is presented from neuroimaging studies of normal language processing, and from developmental and adult-onset disorders. It is argued that this evidence supports the DP model. It is additionally proposed that \"language\" disorders, such as specific language impairment and non-fluent and fluent aphasia, may be profitably viewed as impairments primarily affecting one or the other brain system. 
Overall, the data suggest a new neurocognitive framework for the study of lexicon and grammar.", "title": "" }, { "docid": "b741698d7e4d15cb7f4e203f2ddbce1d", "text": "This study examined the process of how socioeconomic status, specifically parents' education and income, indirectly relates to children's academic achievement through parents' beliefs and behaviors. Data from a national, cross-sectional study of children were used for this study. The subjects were 868 8-12-year-olds, divided approximately equally across gender (436 females, 433 males). This sample was 49% non-Hispanic European American and 47% African American. Using structural equation modeling techniques, the author found that the socioeconomic factors were related indirectly to children's academic achievement through parents' beliefs and behaviors but that the process of these relations was different by racial group. Parents' years of schooling also was found to be an important socioeconomic factor to take into consideration in both policy and research when looking at school-age children.", "title": "" }, { "docid": "8ba9439094fae89d6ff14d03476878b9", "text": "In this paper we present a framework for the real-time control of lightweight autonomous vehicles which comprehends a proposed hardand software design. The system can be used for many kinds of vehicles and offers high computing power and flexibility in respect of the control algorithms and additional application dependent tasks. It was originally developed to control a small quad-rotor UAV where stringent restrictions in weight and size of the hardware components exist, but has been transfered to a fixed-wing UAV and a ground vehicle for inand outdoor search and rescue missions. The modular structure and the use of a standard PC architecture at an early stage simplifies reuse of components and fast integration of new features. Figure 1: Quadrotor UAV controlled by the proposed system", "title": "" }, { "docid": "5f96b65c7facf35cd0b2e629a2e98662", "text": "Effectively evaluating visualization techniques is a difficult task often assessed through feedback from user studies and expert evaluations. This work presents an alternative approach to visualization evaluation in which brain activity is passively recorded using electroencephalography (EEG). These measurements are used to compare different visualization techniques in terms of the burden they place on a viewer’s cognitive resources. In this paper, EEG signals and response times are recorded while users interpret different representations of data distributions. This information is processed to provide insight into the cognitive load imposed on the viewer. This paper describes the design of the user study performed, the extraction of cognitive load measures from EEG data, and how those measures are used to quantitatively evaluate the effectiveness of visualizations.", "title": "" }, { "docid": "9ae370847ec965a3ce9c7636f8d6a726", "text": "In this paper we present a wearable device for control of home automation systems via hand gestures. This solution has many advantages over traditional home automation interfaces in that it can be used by those with loss of vision, motor skills, and mobility. By combining other sources of context with the pendant we can reduce the number and complexity of gestures while maintaining functionality. As users input gestures, the system can also analyze their movements for pathological tremors. 
This information can then be used for medical diagnosis, therapy, and emergency services. Currently, the Gesture Pendant can recognize control gestures with an accuracy of 95% and user-defined gestures with an accuracy of 97%. It can detect tremors above 2 Hz within 0.1 Hz.", "title": "" }, { "docid": "3d9e279afe4ba8beb1effd4f26550f67", "text": "We propose and demonstrate a scheme for boosting the efficiency of entanglement distribution based on a decoherence-free subspace over lossy quantum channels. By using backward propagation of a coherent light, our scheme achieves an entanglement-sharing rate that is proportional to the transmittance T of the quantum channel in spite of encoding qubits in multipartite systems for the decoherence-free subspace. We experimentally show that highly entangled states, which can violate the Clauser-Horne-Shimony-Holt inequality, are distributed at a rate proportional to T.", "title": "" }, { "docid": "97561632e9d87093a5de4f1e4b096df7", "text": "Recommender systems are now popular both commercially and in the research community, where many approaches have been suggested for providing recommendations. In many cases a system designer who wishes to employ a recommendation system must choose between a set of candidate approaches. A first step towards selecting an appropriate algorithm is to decide which properties of the application to focus upon when making this choice. Indeed, recommendation systems have a variety of properties that may affect user experience, such as accuracy, robustness, scalability, and so forth. In this paper we discuss how to compare recommenders based on a set of properties that are relevant for the application. We focus on comparative studies, where a few algorithms are compared using some evaluation metric, rather than absolute benchmarking of algorithms. We describe experimental settings appropriate for making choices between algorithms. We review three types of experiments, starting with an offline setting, where recommendation approaches are compared without user interaction, then reviewing user studies, where a small group of subjects experiment with the system and report on the experience, and finally describe large scale online experiments, where real user populations interact with the system. In each of these cases we describe types of questions that can be answered, and suggest protocols for experimentation. We also discuss how to draw trustworthy conclusions from the conducted experiments. We then review a large set of properties, and explain how to evaluate systems given relevant properties. We also survey a large set of evaluation metrics in the context of the property that they evaluate. Guy Shani Microsoft Research, One Microsoft Way, Redmond, WA, e-mail: guyshani@microsoft.com Asela Gunawardana Microsoft Research, One Microsoft Way, Redmond, WA, e-mail: aselag@microsoft.com", "title": "" }, { "docid": "5c469bbeb053c187c2d14fd9f27c4426", "text": "Fatigue damage increases with applied load cycles in a cumulative manner. Cumulative fatigue damage analysis plays a key role in life prediction of components and structures subjected to field load histories. Since the introduction of the damage accumulation concept by Palmgren about 70 years ago and the ‘linear damage rule’ by Miner about 50 years ago, the treatment of cumulative fatigue damage has received increasingly more attention. As a result, many damage models have been developed.
Even though early theories on cumulative fatigue damage have been reviewed by several researchers, no comprehensive report has appeared recently to review the considerable efforts made since the late 1970s. This article provides a comprehensive review of cumulative fatigue damage theories for metals and their alloys, emphasizing the approaches developed from the early 1970s to the early 1990s. These theories are grouped into six categories: linear damage rules; nonlinear damage curve and two-stage linearization approaches; life curve modification methods; approaches based on crack growth concepts; continuum damage mechanics models; and energy-based theories.", "title": "" }, { "docid": "b0bcd65de1841474dba09e9b1b5c2763", "text": "Modern web clickstream data consists of long, high-dimensional sequences of multivariate events, making it difficult to analyze. Following the overarching principle that the visual interface should provide information about the dataset at multiple levels of granularity and allow users to easily navigate across these levels, we identify four levels of granularity in clickstream analysis: patterns, segments, sequences and events. We present an analytic pipeline consisting of three stages: pattern mining, pattern pruning and coordinated exploration between patterns and sequences. Based on this approach, we discuss properties of maximal sequential patterns, propose methods to reduce the number of patterns and describe design considerations for visualizing the extracted sequential patterns and the corresponding raw sequences. We demonstrate the viability of our approach through an analysis scenario and discuss the strengths and limitations of the methods based on user feedback.", "title": "" }, { "docid": "c31ffcb1514f437313c2f3f0abaf3a88", "text": "Identifying temporal relations between events is an essential step towards natural language understanding. However, the temporal relation between two events in a story depends on, and is often dictated by, relations among other events. Consequently, effectively identifying temporal relations between events is a challenging problem even for human annotators. This paper suggests that it is important to take these dependencies into account while learning to identify these relations and proposes a structured learning approach to address this challenge. As a byproduct, this provides a new perspective on handling missing relations, a known issue that hurts existing methods. As we show, the proposed approach results in significant improvements on the two commonly used data sets for this problem.", "title": "" }, { "docid": "2a68d57f8d59205122dd11461accecab", "text": "A resistive methanol sensor based on ZnO hexagonal nanorods having average diameter (60–70 nm) and average length of ~500 nm is reported in this paper. A low temperature chemical bath deposition technique is employed to deposit vertically aligned ZnO hexagonal nanorods using zinc acetate dihydrate and hexamethylenetetramine (HMT) precursors at 100 °C on a SiO2 substrate having a Sol-Gel grown ZnO seed layer.
After structural (XRD, FESEM) and electrical (Hall effect) characterizations, four types of sensor structures, incorporating the effect of a catalytic metal electrode (Pd-Ag) and Pd nanoparticle sensitization, are fabricated and tested for sensing methanol vapor in the temperature range of 27 °C–300 °C. The as-deposited ZnO nanorods with the Pd-Ag catalytic contact offered an appreciably high dynamic range (190–3040 ppm) at a moderately lower temperature (200 °C) compared to the sensors with the noncatalytic electrode (Au). Surface modification of the nanorods by Pd nanoparticles offered faster response and recovery with increased response magnitude for both types of electrodes, but at the cost of a lower dynamic range (190–950 ppm). The possible sensing mechanism has also been discussed briefly.", "title": "" }, { "docid": "ef1f34e7bc08b78bfbf7317cd102c89e", "text": "Most modern trackers typically employ a bounding box given in the first frame to track visual objects, where their tracking results are often sensitive to the initialization. In this paper, we propose a new tracking method, Reliable Patch Trackers (RPT), which attempts to identify and exploit the reliable patches that can be tracked effectively through the whole tracking process. Specifically, we present a tracking reliability metric to measure how reliably a patch can be tracked, where a probability model is proposed to estimate the distribution of reliable patches under a sequential Monte Carlo framework. As the reliable patches are distributed over the image, we exploit the motion trajectories to distinguish them from the background. Therefore, the visual object can be defined as the clustering of homo-trajectory patches, where a Hough voting-like scheme is employed to estimate the target state. Encouraging experimental results on a large set of sequences showed that the proposed approach is very effective in comparison to the state-of-the-art trackers. The full source code of our implementation will be publicly available.", "title": "" }, { "docid": "90084e7b31e89f5eb169a0824dde993b", "text": "In this work, we present a novel way of using neural networks for graph-based dependency parsing, which fits the neural network into a simple probabilistic model and can be furthermore generalized to high-order parsing. Instead of the sparse features used in traditional methods, we utilize distributed dense feature representations for the neural network, which give better feature representations. The proposed parsers are evaluated on English and Chinese Penn Treebanks. Compared to existing work, our parsers give competitive performance with much more efficient inference.", "title": "" } ]
scidocsrr
819228ce15a345fef6a17a6088918767
Text-Enhanced Representation Learning for Knowledge Graph
[ { "docid": "9d918a69a2be2b66da6ecf1e2d991258", "text": "We designed and implemented TAGME, a system that is able to efficiently and judiciously augment a plain-text with pertinent hyperlinks to Wikipedia pages. The specialty of TAGME with respect to known systems [5,8] is that it may annotate texts which are short and poorly composed, such as snippets of search-engine results, tweets, news, etc.. This annotation is extremely informative, so any task that is currently addressed using the bag-of-words paradigm could benefit from using this annotation to draw upon (the millions of) Wikipedia pages and their inter-relations.", "title": "" }, { "docid": "f29d0ea5ff5c96dadc440f4d4aa229c6", "text": "Wikipedia infoboxes are a valuable source of structured knowledge for global knowledge sharing. However, infobox information is very incomplete and imbalanced among the Wikipedias in different languages. It is a promising but challenging problem to utilize the rich structured knowledge from a source language Wikipedia to help complete the missing infoboxes for a target language. In this paper, we formulate the problem of cross-lingual knowledge extraction from multilingual Wikipedia sources, and present a novel framework, called WikiCiKE, to solve this problem. An instancebased transfer learning method is utilized to overcome the problems of topic drift and translation errors. Our experimental results demonstrate that WikiCiKE outperforms the monolingual knowledge extraction method and the translation-based method.", "title": "" }, { "docid": "95e2a8e2d1e3a1bbfbf44d20f9956cf0", "text": "Knowledge graph completion aims to perform link prediction between entities. In this paper, we consider the approach of knowledge graph embeddings. Recently, models such as TransE and TransH build entity and relation embeddings by regarding a relation as translation from head entity to tail entity. We note that these models simply put both entities and relations within the same semantic space. In fact, an entity may have multiple aspects and various relations may focus on different aspects of entities, which makes a common space insufficient for modeling. In this paper, we propose TransR to build entity and relation embeddings in separate entity space and relation spaces. Afterwards, we learn embeddings by first projecting entities from entity space to corresponding relation space and then building translations between projected entities. In experiments, we evaluate our models on three tasks including link prediction, triple classification and relational fact extraction. Experimental results show significant and consistent improvements compared to stateof-the-art baselines including TransE and TransH. The source code of this paper can be obtained from https: //github.com/mrlyk423/relation extraction.", "title": "" } ]
[ { "docid": "95dbebf3ed125e2a4f0d901f42f09be3", "text": "Visual feature extraction with scale invariant feature transform (SIFT) is widely used for object recognition. However, its real-time implementation suffers from long latency, heavy computation, and high memory storage because of its frame level computation with iterated Gaussian blur operations. Thus, this paper proposes a layer parallel SIFT (LPSIFT) with integral image, and its parallel hardware design with an on-the-fly feature extraction flow for real-time application needs. Compared with the original SIFT algorithm, the proposed approach reduces the computational amount by 90% and memory usage by 95%. The final implementation uses 580-K gate count with 90-nm CMOS technology, and offers 6000 feature points/frame for VGA images at 30 frames/s and ~ 2000 feature points/frame for 1920 × 1080 images at 30 frames/s at the clock rate of 100 MHz.", "title": "" }, { "docid": "8c867af4a6dd4125e90ba7642e9e7852", "text": "Parallel corpora are the necessary resources in many multilingual natural language processing applications, including machine translation and cross-lingual information retrieval. Manual preparation of a large scale parallel corpus is a very time consuming and costly procedure. In this paper, the work towards building a sentence-level aligned EnglishPersian corpus in a semi-automated manner is presented. The design of the corpus, collection, and alignment process of the sentences is described. Two statistical similarity measures were used to find the similarities of sentence pairs. To verify the alignment process automatically, Google Translator was used. The corpus is based on news resources available online and consists of about 30,000 formal sentence pairs.", "title": "" }, { "docid": "470ecc2bc4299d913125d307c20dd48d", "text": "The task of end-to-end relation extraction consists of two sub-tasks: i) identifying entity mentions along with their types and ii) recognizing semantic relations among the entity mention pairs. It has been shown that for better performance, it is necessary to address these two sub-tasks jointly [22,13]. We propose an approach for simultaneous extraction of entity mentions and relations in a sentence, by using inference in Markov Logic Networks (MLN) [21]. We learn three different classifiers : i) local entity classifier, ii) local relation classifier and iii) “pipeline” relation classifier which uses predictions of the local entity classifier. Predictions of these classifiers may be inconsistent with each other. We represent these predictions along with some domain knowledge using weighted first-order logic rules in an MLN and perform joint inference over the MLN to obtain a global output with minimum inconsistencies. Experiments on the ACE (Automatic Content Extraction) 2004 dataset demonstrate that our approach of joint extraction using MLNs outperforms the baselines of individual classifiers. Our end-to-end relation extraction performance is better than 2 out of 3 previous results reported on the ACE 2004 dataset.", "title": "" }, { "docid": "31da7b5b403ca92dde4d4c590a900aa1", "text": "In this paper, a new approach for moving an inpipe robot inside underground urban gas pipelines is proposed. Since the urban gas supply system is composed of complicated configurations of pipelines, the inpipe inspection requires a robot with outstanding mobility and corresponding control algorithms to apply for. 
To this end, this paper introduces a new miniature inpipe robot, called MRINSPECT (Multifunctional Robotic crawler for INpipe inSPECTion) IV, which has been developed for the inspection of urban gas pipelines with a nominal 4-inch inside diameter. Its steering mechanism, with differential-drive wheels arranged three-dimensionally, allows it to adjust easily to most pipeline configurations and provides excellent mobility in navigation. Also, analysis for pipelines with fittings is given in detail and the geometries of the fittings are mathematically described. It is a prerequisite to estimate the moving pattern of the robot while passing through the fittings, and based on the analysis, a method for modulating the speed of each drive wheel is proposed. Though modulation of speed is very important while proceeding through the fittings, it is not easy to control the speeds because each wheel of the robot has contact with walls having different curvatures. A new and simple way of controlling the speed is developed based on the analysis of the geometrical features of the fittings. This algorithm has the advantage of being applicable without using complicated sensor information. To confirm the effectiveness of the proposed method, experiments are performed and additional considerations for the design of an inpipe robot are discussed.", "title": "" }, { "docid": "66c49b0dbdbdf29ace0f60839b867e43", "text": "The job shop scheduling problem with the makespan criterion is a certain NP-hard case from OR theory having excellent practical applications. This problem, having been examined for years, is also regarded as an indicator of the quality of advanced scheduling algorithms. In this paper we provide a new approximate algorithm that is based on the big valley phenomenon, and uses some elements of the so-called path relinking technique as well as new theoretical properties of neighbourhoods. The proposed algorithm provides accuracy unprecedented up to now, obtainable in a quick time on a PC, which has been confirmed after wide computer tests.", "title": "" }, { "docid": "cb1048d4bffb141074a4011279054724", "text": "Question Generation (QG) is the task of generating reasonable questions from a text. It is a relatively new research topic and has its potential usage in intelligent tutoring systems and closed-domain question answering systems. Current approaches include template or syntax based methods. This thesis proposes a novel approach based entirely on semantics. Minimal Recursion Semantics (MRS) is a meta-level semantic representation with emphasis on scope underspecification. With the English Resource Grammar and various tools from the DELPH-IN community, a natural language sentence can be interpreted as an MRS structure by parsing, and an MRS structure can be realized as a natural language sentence through generation. There are three issues emerging from semantics-based QG: (1) sentence simplification for complex sentences, (2) question transformation for declarative sentences, and (3) generation ranking. Three solutions are also proposed: (1) MRS decomposition through a Connected Dependency MRS Graph, (2) MRS transformation from declarative sentences to interrogative sentences, and (3) question ranking by simple language models atop a MaxEnt-based model. The evaluation is conducted in the context of the Question Generation Shared Task and Generation Challenge 2010. The performance of the proposed method is compared against other syntax- and rule-based systems.
The result also reveals the challenges of current research on question generation and indicates direction for future work.", "title": "" }, { "docid": "ef7996942968a720211aedbca2db9315", "text": "A wafer map is a graphical illustration of the locations of defective chips on a wafer. Defective chips are likely to exhibit a spatial dependence across the wafer map, which contains useful information on the process of integrated circuit (IC) fabrication. An analysis of wafer map data helps to better understand ongoing process problems. This paper proposes a new methodology in which spatial correlogram is used for the detection of the presence of spatial autocorrelations and for the classification of defect patterns on the wafer map. After the detection of spatial autocorrelation based on our proposed spatial randomness test using spatial correlogram, the dynamic time warping algorithm which provides nonlinear alignments between two sequences to find optimal warping path is adopted for the automatic classification of spatial patterns based on spatial correlogram. We also develop generalized join-count (JC)-based statistics and then propose a procedure to determine the optimal weights of JC-based statistics. The proposed method is illustrated using real-life examples and simulated data sets. The experimental results show that our method is robust to random noise and has a robust performance regardless of defect location and size.", "title": "" }, { "docid": "84ad9c8ae3e1ed3d25650a29af0673c6", "text": "As data mining evolves and matures more and more businesses are incorporating this technology into their business practices. However, currently data mining and decision support software is expensive and selection of the wrong tools can be costly in many ways. This paper provides direction and decision-making information to the practicing professional. A framework for evaluating data mining tools is presented and a methodology for applying this framework is described. Finally a case study to demonstrate the method’s effectiveness is presented. This methodology represents the first-hand experience using many of the leading data mining tools against real business data at the Center for Data Insight (CDI) at Northern Arizona University (NAU). This is not a comprehensive review of commercial tools but instead provides a method and a point-of-reference for selecting the best software tool for a particular problem. Experience has shown that there is not one best data-mining tool for all purposes. This instrument is designed to accommodate differences in environments and problem domains. It is expected that this methodology will be used to publish tool comparisons and benchmarking results.", "title": "" }, { "docid": "ac8cef535e5038231cdad324325eaa37", "text": "There are mainly two types of Emergent Self-Organizing Maps (ESOM) grid structures in use: hexgrid (honeycomb like) and quadgrid (trellis like) maps. In addition to that, the shape of the maps may be square or rectangular. This work investigates the effects of these different map layouts. Hexgrids were found to have no convincing advantage over quadgrids. Rectangular maps, however, are distinctively superior to square maps. Most surprisingly, rectangular maps outperform square maps for isotropic data, i.e. 
data sets with no particular primary direction.", "title": "" }, { "docid": "316ead33d0313804b7aa95570427e375", "text": "We consider an optimal control problem with a deterministic finite horizon and state variable dynamics given by a Markovswitching jump-diffusion stochastic differential equation. Our main results extend the dynamic programming technique to this larger family of stochastic optimal control problems. More specifically, we provide a detailed proof of Bellman’s optimality principle (or dynamic programming principle) and obtain the corresponding Hamilton-Jacobi-Belman equation, which turns out to be a partial integro-differential equation due to the extra terms arising from the Lévy process and the Markov process. As an application of our results, we study a finite horizon consumptioninvestment problem for a jump-diffusion financial market consisting of one risk-free asset and one risky asset whose coefficients are assumed to depend on the state of a continuous time finite state Markov process. We provide a detailed study of the optimal strategies for this problem, for the economically relevant families of power utilities and logarithmic utilities.", "title": "" }, { "docid": "7c06010200faa47511896228fcb36097", "text": "Polysaccharide immunomodulators were first discovered over 40 years ago. Although very few have been rigorously studied, recent reports have revealed the mechanism of action and structure-function attributes of some of these molecules. Certain polysaccharide immunomodulators have been identified that have profound effects in the regulation of immune responses during the progression of infectious diseases, and studies have begun to define structural aspects of these molecules that govern their function and interaction with cells of the host immune system. These polymers can influence innate and cell-mediated immunity through interactions with T cells, monocytes, macrophages, and polymorphonuclear lymphocytes. The ability to modulate the immune response in an appropriate way can enhance the host's immune response to certain infections. In addition, this strategy can be utilized to augment current treatment regimens such as antimicrobial therapy that are becoming less efficacious with the advent of antibiotic resistance. This review focuses on recent studies that illustrate the structural and biologic activities of specific polysaccharide immunomodulators and outlines their potential for clinical use.", "title": "" }, { "docid": "a302b0a5f20daf162b6d10f5b0f8aaab", "text": "In this work we present a novel end-to-end framework for tracking and classifying a robot’s surroundings in complex, dynamic and only partially observable real-world environments. The approach deploys a recurrent neural network to filter an input stream of raw laser measurements in order to directly infer object locations, along with their identity in both visible and occluded areas. To achieve this we first train the network using unsupervised Deep Tracking, a recently proposed theoretical framework for end-to-end space occupancy prediction. We show that by learning to track on a large amount of unsupervised data, the network creates a rich internal representation of its environment which we in turn exploit through the principle of inductive transfer of knowledge to perform the task of it’s semantic classification. As a result, we show that only a small amount of labelled data suffices to steer the network towards mastering this additional task. 
Furthermore, we propose a novel recurrent neural network architecture specifically tailored to tracking and semantic classification in real-world robotics applications. We demonstrate the tracking and classification performance of the method on real-world data collected at a busy road junction. Our evaluation shows that the proposed end-to-end framework compares favourably to a state-of-the-art, model-free tracking solution and that it outperforms a conventional one-shot training scheme for semantic classification.", "title": "" }, { "docid": "48622252f5f8b19d24b4aca1e2bedb10", "text": "Executive Summary 879; 18.1. Introduction: Adaptation and Adaptive Capacity 881; 18.2. Adaptation Characteristics and Processes 882; 18.2.1. Components and Forms of Adaptation 882; 18.2.2. Climate Stimuli for Adaptation 884; 18.3. Future Adaptations", "title": "" }, { "docid": "360bb962f0be4e23b2fa83e4cb67db3c", "text": "Multi-label classification is a practical yet challenging task in machine learning related fields, since it requires the prediction of more than one label category for each input instance. We propose a novel deep neural network (DNN) based model, Canonical Correlated AutoEncoder (C2AE), for solving this task. Aiming at better relating feature and label domain data for improved classification, we uniquely perform joint feature and label embedding by deriving a deep latent space, followed by the introduction of a label-correlation sensitive loss function for recovering the predicted label outputs. Our C2AE is achieved by integrating the DNN architectures of canonical correlation analysis and autoencoder, which allows end-to-end learning and prediction with the ability to exploit label dependency. Moreover, our C2AE can be easily extended to address the learning problem with missing labels.
Our experiments on multiple datasets with different scales confirm the effectiveness and robustness of our proposed method, which is shown to perform favorably against state-of-the-art methods for multi-label classification.", "title": "" }, { "docid": "d1c0b58fa78ecda169d3972eae870590", "text": "Power system stability is defined as an ability of the power system to reestablish the initial steady state or come into the new steady state after any variation of the system's operation value or after system´s breakdown. The stability and reliability of the electric power system is highly actual topic nowadays, especially in the light of recent accidents like splitting of UCTE system and north-American blackouts. This paper deals with the potential of the evaluation in term of transient stability of the electric power system within the defense plan and the definition of the basic criterion for the transient stability – Critical Clearing Time (CCT).", "title": "" }, { "docid": "c784bfbd522bb4c9908c3f90a31199fe", "text": "Vedolizumab (VDZ) inhibits α4β7 integrins and is used to target intestinal immune responses in patients with inflammatory bowel disease, which is considered to be relatively safe. Here we report on a fatal complication following VDZ administration. A 64-year-old female patient with ulcerative colitis (UC) refractory to tumor necrosis factor inhibitors was treated with VDZ. One week after the second VDZ infusion, she was admitted to hospital with severe diarrhea and systemic inflammatory response syndrome (SIRS). Blood stream infections were ruled out, and endoscopy revealed extensive ulcerations of the small intestine covered with pseudomembranes, reminiscent of invasive candidiasis or mesenteric ischemia. Histology confirmed subtotal destruction of small intestinal epithelia and colonization with Candida. Moreover, small mesenteric vessels were occluded by hyaline thrombi, likely as a result of SIRS, while perfusion of large mesenteric vessels was not compromised. Beta-D-glucan concentrations were highly elevated, and antimycotic therapy was initiated for suspected invasive candidiasis but did not result in any clinical benefit. Given the non-responsiveness to anti-infective therapies, an autoimmune phenomenon was suspected and immunosuppressive therapy was escalated. However, the patient eventually died from multi-organ failure. This case should raise the awareness for rare but severe complications related to immunosuppressive therapy, particularly in high risk patients.", "title": "" }, { "docid": "6934b06f35dc7855a8410329b099ca2f", "text": "Privacy protection in publishing transaction data is an important problem. A key feature of transaction data is the extreme sparsity, which renders any single technique ineffective in anonymizing such data. Among recent works, some incur high information loss, some result in data hard to interpret, and some suffer from performance drawbacks. This paper proposes to integrate generalization and suppression to reduce information loss. However, the integration is non-trivial. We propose novel techniques to address the efficiency and scalability challenges. Extensive experiments on real world databases show that this approach outperforms the state-of-the-art methods, including global generalization, local generalization, and total suppression. 
In addition, transaction data anonymized by this approach can be analyzed by standard data mining tools, a property that local generalization fails to provide.", "title": "" }, { "docid": "5a0fe40414f7881cc262800a43dfe4d0", "text": "In this work, a passive rectifier circuit is presented, which is operating at 868 MHz. It allows energy harvesting from low power RF waves with a high efficiency. It consists of a novel multiplier circuit design and high quality components to reduce parasitic effects, losses and reaches a low startup voltage. Using lower capacitor rises up the switching speed of the whole circuit. An inductor L serves to store energy in a magnetic field during the negative cycle wave and returns it during the positive one. A low pass filter is arranged in cascade with the rectifier circuit to reduce ripple at high frequencies and to get a stable DC signal. A 50 kΩ load is added at the output to measure the output power and to visualize the behavior of the whole circuit. Simulation results show an outstanding potential of this RF-DC converter witch has a relative high sensitivity beginning with -40 dBm.", "title": "" }, { "docid": "bf333ff6237d875c34a5c62b0216d5d9", "text": "The design of tall buildings essentially involves a conceptual design, approximate analysis, preliminary design and optimization, to safely carry gravity and lateral loads. The design criteria are, strength, serviceability, stability and human comfort. The strength is satisfied by limit stresses, while serviceability is satisfied by drift limits in the range of H/500 to H/1000. Stability is satisfied by sufficient factor of safety against buckling and P-Delta effects. The factor of safety is around 1.67 to 1.92. The human comfort aspects are satisfied by accelerations in the range of 10 to 25 milli-g, where g=acceleration due to gravity, about 981cms/sec^2. The aim of the structural engineer is to arrive at suitable structural schemes, to satisfy these criteria, and assess their structural weights in weight/unit area in square feet or square meters. This initiates structural drawings and specifications to enable construction engineers to proceed with fabrication and erection operations. The weight of steel in lbs/sqft or in kg/sqm is often a parameter the architects and construction managers are looking for from the structural engineer. This includes the weights of floor system, girders, braces and columns. The premium for wind, is optimized to yield drifts in the range of H/500, where H is the height of the tall building. Herein, some aspects of the design of gravity system, and the lateral system, are explored. Preliminary design and optimization steps are illustrated with examples of actual tall buildings designed by CBM Engineers, Houston, Texas, with whom the author has been associated with during the past 3 decades. Dr.Joseph P.Colaco, its President, has been responsible for the tallest buildings in Los Angeles, Houston, St. Louis, Dallas, New Orleans, and Washington, D.C, and with the author in its design staff as a Senior Structural Engineer. Research in the development of approximate methods of analysis, and preliminary design and optimization, has been conducted at WPI, with several of the author’s graduate students. These are also illustrated. Software systems to do approximate analysis of shear-wall frame, framed-tube, out rigger braced tall buildings are illustrated. Advanced Design courses in reinforced and pre-stressed concrete, as well as structural steel design at WPI, use these systems. 
Research herein was supported by grants from NSF, Bethlehem Steel, and the Army.", "title": "" } ]
scidocsrr
c93eb746fd3537a1ea9f7f5374b87d00
Cytoscape Web: an interactive web-based network browser
[ { "docid": "6f77e74cd8667b270fae0ccc673b49a5", "text": "GeneMANIA (http://www.genemania.org) is a flexible, user-friendly web interface for generating hypotheses about gene function, analyzing gene lists and prioritizing genes for functional assays. Given a query list, GeneMANIA extends the list with functionally similar genes that it identifies using available genomics and proteomics data. GeneMANIA also reports weights that indicate the predictive value of each selected data set for the query. Six organisms are currently supported (Arabidopsis thaliana, Caenorhabditis elegans, Drosophila melanogaster, Mus musculus, Homo sapiens and Saccharomyces cerevisiae) and hundreds of data sets have been collected from GEO, BioGRID, Pathway Commons and I2D, as well as organism-specific functional genomics data sets. Users can select arbitrary subsets of the data sets associated with an organism to perform their analyses and can upload their own data sets to analyze. The GeneMANIA algorithm performs as well or better than other gene function prediction methods on yeast and mouse benchmarks. The high accuracy of the GeneMANIA prediction algorithm, an intuitive user interface and large database make GeneMANIA a useful tool for any biologist.", "title": "" } ]
[ { "docid": "6e30387a3706dea2b7d18668c08bb31b", "text": "The semantic web vision is one in which rich, ontology-based semantic markup will become widely available. The availability of semantic markup on the web opens the way to novel, sophisticated forms of question answering. AquaLog is a portable question-answering system which takes queries expressed in natural language and an ontology as input, and returns answers drawn from one or more knowledge bases (KBs). We say that AquaLog is portable because the configuration time required to customize the system for a particular ontology is negligible. AquaLog presents an elegant solution in which different strategies are combined together in a novel way. It makes use of the GATE NLP platform, string metric algorithms, WordNet and a novel ontology-based relation similarity service to make sense of user queries with respect to the target KB. Moreover it also includes a learning component, which ensures that the performance of the system improves over the time, in response to the particular community jargon used by end users. 2007 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "673d6aea6c9a3ebde1d8bf30be9a8804", "text": "FDTD numerical study compared to the results of measurement is reported for double-ridged horn antenna with sinusoidal profile of the ridge. Different transitions from coaxial to double-ridged waveguide were considered on the preliminary step of the study. Next, a suitable configuration for feeding the ridges of antenna was chosen. The sinusoidal ridge taper is described in the next part of the paper. Finally, the simulations results of complete antenna are presented. Theoretical characteristics of reflection and antenna patterns are compared to the results of measurements showing acceptable accordance.", "title": "" }, { "docid": "7e7651261be84e2e05cde0ac9df69e6d", "text": "Searching a large database to find a sequence that is most similar to a query can be prohibitively expensive, particularly if individual sequence comparisons involve complex operations such as warping. To achieve scalability, \"pruning\" heuristics are typically employed to minimize the portion of the database that must be searched with more complex matching. We present an approximate pruning technique which involves embedding sequences in a Euclidean space. Sequences are embedded using a convolutional network with a form of attention that integrates over time, trained on matching and non-matching pairs of sequences. By using fixed-length embeddings, our pruning method effectively runs in constant time, making it many orders of magnitude faster than full dynamic time warping-based matching for large datasets. We demonstrate our approach on a large-scale musical score-to-audio recording retrieval task.", "title": "" }, { "docid": "de1c4c92e95320f5526c8af06acfadc0", "text": "Provides a method for automatic translation UML diagrams to Petri nets, which is to convert formats like structure .xmi and .cpn. We consider the transformation of the most frequently used items on activity diagram - state action, condition, fork and join based on rules transformation. These elements are shown in activity diagram and its corresponding Petri net. It is noted that in active diagram presence four types elements - state action, a pseudostate, final state and transition, in Petri nets involved three types elements - place, transition and arc. 
Tissue maps were reconstructed with the use of accurate predictions of plaque composition from the autoregressive classification scheme.\n\n\nCONCLUSIONS\nCoronary plaque composition can be predicted through the use of IVUS radiofrequency data analysis. Autoregressive classification schemes performed better than classic Fourier methods. These techniques allow real-time analysis of IVUS data, enabling in vivo plaque characterization.", "title": "" }, { "docid": "14682892d663cb1d351f54f3534c44b2", "text": "Feel lonely? What about reading books? Book is one of the greatest friends to accompany while in your lonely time. When you have no friends and activities somewhere and sometimes, reading book can be a great choice. This is not only for spending the time, it will increase the knowledge. Of course the b=benefits to take will relate to what kind of book that you are reading. And now, we will concern you to try reading data quality concepts methodologies and techniques as one of the reading material to finish quickly.", "title": "" }, { "docid": "86dd65bddeb01d4395b81cef0bc4f00e", "text": "Many people may see the development of software and hardware like different disciplines. However, there are great similarities between them that have been shown due to the appearance of extensions for general purpose programming languages for its use as hardware description languages. In this contribution, the approach proposed by the MyHDL package to use Python as an HDL is analyzed by making a comparative study. This study is based on the independent application of Verilog and Python based flows to the development of a real peripheral. The use of MyHDL has revealed to be a powerful and promising tool, not only because of the surprising results, but also because it opens new horizons towards the development of new techniques for modeling and verification, using the full power of one of the most versatile programming languages nowadays.", "title": "" }, { "docid": "2b972c01c0cac24cbbf15f8f2a3d4fa7", "text": "We present techniques for gathering data that expose errors of automatic predictive models. In certain common settings, traditional methods for evaluating predictive models tend to miss rare-but-important errors—most importantly, rare cases for which the model is confident of its prediction (but wrong). In this paper we present a system that, in a game-like setting, asks humans to identify cases that will cause the predictivemodel-based system to fail. Such techniques are valuable in discovering problematic cases that do not reveal themselves during the normal operation of the system, and may include cases that are rare but catastrophic. We describe the design of the system, including design iterations that did not quite work. In particular, the system incentivizes humans to provide examples that are difficult for the model to handle, by providing a reward proportional to the magnitude of the predictive model’s error. The humans are asked to “Beat the Machine” and find cases where the automatic model (“the Machine”) is wrong. Experiments show that the humans using Beat the Machine identify more errors than traditional techniques for discovering errors in from predictive models, and indeed, they identify many more errors where the machine is confident it is correct. Further, the cases the humans identify seem to be not simply outliers, but coherent areas missed completely by the model. 
Beat the machine identifies the “unknown unknowns.”", "title": "" }, { "docid": "8310851d5115ec570953a8c4a1757332", "text": "We present a global optimization approach for mapping color images onto geometric reconstructions. Range and color videos produced by consumer-grade RGB-D cameras suffer from noise and optical distortions, which impede accurate mapping of the acquired color data to the reconstructed geometry. Our approach addresses these sources of error by optimizing camera poses in tandem with non-rigid correction functions for all images. All parameters are optimized jointly to maximize the photometric consistency of the reconstructed mapping. We show that this optimization can be performed efficiently by an alternating optimization algorithm that interleaves analytical updates of the color map with decoupled parameter updates for all images. Experimental results demonstrate that our approach substantially improves color mapping fidelity.", "title": "" }, { "docid": "1ec52bc459957064fba3bb0feecf264d", "text": "Non-orthogonal transmission, although not entirely new to the wireless industry, is gaining more attention due to its promised throughput gain and unique capability to support a large number of simultaneous transmissions within limited resources. In this article, several key techniques for non-orthogonal transmission are discussed. The downlink technique is featured by MUST, which is being specified in 3GPP for mobile broadband services. In the uplink, grantfree schemes such as multi-user shared access and sparse code multiple access, are promising in supporting massive machine-type communication services. The multi-antenna aspect is also addressed in the context of MUST, showing that MIMO technology and non-orthogonal transmission can be used jointly to provide combined gain.", "title": "" }, { "docid": "4cb7a6a3dee9f5398e779f353d2f542c", "text": "Data mining approach was used in this paper to predict labor market needs, by implementing Naïve Bayes Classifiers, Decision Trees, and Decision Rules techniques. Naïve Bayes technique implemented by creating tables of training; the sets of these tables were generated by using four factors that affect continuity in their jobs. The training tables used to predict the classification of other (unclassified) instances, and tabulate the results of conditional and prior probabilities to test unknown instance for classification. The information obtained can classify unknown instances for employment in the labor market. In Decision Tree technique, a model was constructed from a dataset in the form of a tree, created by a process known as splitting on the value of attributes. The Decision Rules, which was constructed from Decision Trees of rules gave the best results, therefore we recommended using this method in predicting labor market. © 2013 The Authors. Published by Elsevier B.V. Selection and/or peer-review under responsibility of the organizers of the 2013 International Conference on Computational Science", "title": "" }, { "docid": "83688690678b474cd9efe0accfdb93f9", "text": "Feature selection, as a preprocessing step to machine learning, is effective in reducing dimensionality, removing irrelevant data, increasing learning accuracy, and improving result comprehensibility. However, the recent increase of dimensionality of data poses a severe challenge to many existing feature selection methods with respect to efficiency and effectiveness. 
In this work, we introduce a novel concept, predominant correlation, and propose a fast filter method which can identify relevant features as well as redundancy among relevant features without pairwise correlation analysis. The efficiency and effectiveness of our method is demonstrated through extensive comparisons with other methods using real-world data of high dimensionality.", "title": "" }, { "docid": "865c1ee7044cbb23d858706aa1af1a63", "text": "Fault analysis in solar photovoltaic (PV) arrays is a fundamental task to protect PV modules from damages and to eliminate the risks of safety hazards. This paper examines two types of unique faults found in photovoltaic (PV) array installations that have not been studied in the literature. One is a fault that occurs under low irradiance conditions. In some circumstances, fault current protection devices are unable to detect certain types of faults so that the fault may remain hidden in the PV system, even after irradiance increases. The other type of fault occurs when a string of PV modules is reversely connected, caused by inappropriate installation. This fault type brings new challenges for overcurrent protection devices because of the high rating voltage requirement. In both cases, these unique PV faults may subsequently lead to unexpected safety hazards, reduced system efficiency and reduced reliability.", "title": "" }, { "docid": "b4e5153f7592394e8743bc0fdee40dcc", "text": "This paper is focussed on the modelling and control of a hydraulically-driven biologically-inspired robotic leg. The study is part of a larger project aiming at the development of an autonomous quadruped robot (hyQ) for outdoor operations. The leg has two hydraulically-actuated degrees of freedom (DOF), the hip and knee joints. The actuation system is composed of proportional valves and asymmetric cylinders. After a brief description of the prototype leg, the paper shows the development of a comprehensive model of the leg where critical parameters have been experimentally identified. Subsequently the leg control design is presented. The core of this work is the experimental assessment of the pros and cons of single-input single-output (SISO) vs. multiple-input multiple-output (MIMO) and linear vs. nonlinear control algorithms in this application (the leg is a coupled multivariable system driven by nonlinear actuators). The control schemes developed are a conventional PID (linear SISO), a Linear Quadratic Regulator (LQR) controller (linear MIMO) and a Feedback Linearisation (FL) controller (nonlinear MIMO). LQR performs well at low frequency but its behaviour worsens at higher frequencies. FL produces the fastest response in simulation, but when implemented is sensitive to parameters uncertainty and needs to be properly modified to achieve equally good performance also in the practical implementation.", "title": "" }, { "docid": "7d25c646a8ce7aa862fba7088b8ea915", "text": "Neuro-dynamic programming (NDP for short) is a relatively new class of dynamic programming methods for control and sequential decision making under uncertainty. These methods have the potential of dealing with problems that for a long time were thought to be intractable due to either a large state space or the lack of an accurate model. They combine ideas from the fields of neural networks, artificial intelligence, cognitive science, simulation, and approximation theory. 
We will delineate the major conceptual issues, survey a number of recent developments, describe some computational experience, and address a number of open questions. We consider systems where decisions are made in stages. The outcome of each decision is not fully predictable but can be anticipated to some extent before the next decision is made. Each decision results in some immediate cost but also affects the context in which future decisions are to be made and therefore affects the cost incurred in future stages. Dynamic programming (DP for short) provides a mathematical formalization of the tradeoff between immediate and future costs. Generally, in DP formulations there is a discrete-time dynamic system whose state evolves according to given transition probabilities that depend on a decision/control u. In particular, if we are in state i and we choose decision u, we move to state j with given probability pij(u). Simultaneously with this transition, we incur a cost g(i, u, j). In comparing, however, the available decisions u, it is not enough to look at the magnitude of the cost g(i, u, j); we must also take into account how desirable the next state j is. We thus need a way to rank or rate states j. This is done by using the optimal cost (over all remaining stages) starting from state j, which is denoted by J∗(j). These costs can be shown to", "title": "" }, { "docid": "bca883795052e1c14553600f40a0046b", "text": "The SEIR model with nonlinear incidence rates in epidemiology is studied. Global stability of the endemic equilibrium is proved using a general criterion for the orbital stability of periodic orbits associated with higher-dimensional nonlinear autonomous systems as well as the theory of competitive systems of differential equations.", "title": "" }, { "docid": "f64c4946a26f401822539bdd020f4ac5", "text": "This paper reviews the concept of presence in immersive virtual environments, the sense of being there signalled by people acting and responding realistically to virtual situations and events. We argue that presence is a unique phenomenon that must be distinguished from the degree of engagement, involvement in the portrayed environment. We argue that there are three necessary conditions for presence: the (a) consistent low latency sensorimotor loop between sensory data and proprioception; (b) statistical plausibility: images must be statistically plausible in relation to the probability distribution of images over natural scenes. A constraint on this plausibility is the level of immersion; (c) behaviour-response correlations: Presence may be enhanced and maintained over time by appropriate correlations between the state and behaviour of participants and responses within the environment, correlations that show appropriate responses to the activity of the participants. We conclude with a discussion of methods for assessing whether presence occurs, and in particular recommend the approach of comparison with ground truth and give some examples of this.", "title": "" }, { "docid": "10c7b7a19197c8562ebee4ae66c1f5e8", "text": "Generative Adversarial Networks (GANs) have recently achieved impressive results for many real-world applications, and many GAN variants have emerged with improvements in sample quality and training stability. However, visualization and understanding of GANs is largely missing. How does a GAN represent our visual world internally? What causes the artifacts in GAN results? How do architectural choices affect GAN learning? 
Answering such questions could enable us to develop new insights and better models. In this work, we present an analytic framework to visualize and understand GANs at the unit-, object-, and scene-level. We first identify a group of interpretable units that are closely related to object concepts with a segmentation-based network dissection method. Then, we quantify the causal effect of interpretable units by measuring the ability of interventions to control objects in the output. Finally, we examine the contextual relationship between these units and their surrounding by inserting the discovered object concepts into new images. We show several practical applications enabled by our framework, from comparing internal representations across different layers, models, and datasets, to improving GANs by locating and removing artifact-causing units, to interactively manipulating objects in the scene. We provide open source interpretation tools to help peer researchers and practitioners better understand their GAN models∗.", "title": "" }, { "docid": "d3c3e9877695a8abb2783e685f254eef", "text": "Software systems are constantly evolving, with new versions and patches being released on a continuous basis. Unfortunately, software updates present a high risk, with many releases introducing new bugs and security vulnerabilities. \n We tackle this problem using a simple but effective multi-version based approach. Whenever a new update becomes available, instead of upgrading the software to the new version, we run the new version in parallel with the old one; by carefully coordinating their executions and selecting the behaviour of the more reliable version when they diverge, we create a more secure and dependable multi-version application. \n We implemented this technique in Mx, a system targeting Linux applications running on multi-core processors, and show that it can be applied successfully to several real applications such as Coreutils, a set of user-level UNIX applications; Lighttpd, a popular web server used by several high-traffic websites such as Wikipedia and YouTube; and Redis, an advanced key-value data structure server used by many well-known services such as GitHub and Flickr.", "title": "" }, { "docid": "c5d06fe50c16278943fe1df7ad8be888", "text": "Current main memory organizations in embedded and mobile application systems are DRAM dominated. The ever-increasing gap between today's processor and memory speeds makes the DRAM subsystem design a major aspect of computer system design. However, the limitations to DRAM scaling and other challenges like refresh provide undesired trade-offs between performance, energy and area to be made by architecture designers. Several emerging NVM options are being explored to at least partly remedy this but today it is very hard to assess the viability of these proposals because the simulations are not fully based on realistic assumptions on the NVM memory technologies and on the system architecture level. In this paper, we propose to use realistic, calibrated STT-MRAM models and a well calibrated cross-layer simulation and exploration framework, named SEAT, to better consider technologies aspects and architecture constraints. We will focus on general purpose/mobile SoC multi-core architectures. We will highlight results for a number of relevant benchmarks, representatives of numerous applications based on actual system architecture. 
The most energy efficient STT-MRAM based main memory proposal provides an average energy consumption reduction of 27% at the cost of 2x the area and the least energy efficient STT-MRAM based main memory proposal provides an average energy consumption reduction of 8% at the around the same area or lesser when compared to DRAM.", "title": "" }, { "docid": "eba545eb04c950ecd9462558c9d3da85", "text": "The ability to recognize facial expressions automatically enables novel applications in human-computer interaction and other areas. Consequently, there has been active research in this field, with several recent works utilizing Convolutional Neural Networks (CNNs) for feature extraction and inference. These works differ significantly in terms of CNN architectures and other factors. Based on the reported results alone, the performance impact of these factors is unclear. In this paper, we review the state of the art in image-based facial expression recognition using CNNs and highlight algorithmic differences and their performance impact. On this basis, we identify existing bottlenecks and consequently directions for advancing this research field. Furthermore, we demonstrate that overcoming one of these bottlenecks – the comparatively basic architectures of the CNNs utilized in this field – leads to a substantial performance increase. By forming an ensemble of modern deep CNNs, we obtain a FER2013 test accuracy of 75.2%, outperforming previous works without requiring auxiliary training data or face registration.", "title": "" }, { "docid": "4d36b2d77713a762040fd4ebc68e0d54", "text": "Diversification and fragmentation of scientific exploration brings an increasing need for integration, for example through interdisciplinary research. The field of nanoscience and nanotechnology appears to exhibit strong interdisciplinary characteristics. Our objective was to explore the structure of the field and ascertain how different research areas within this field reflect interdisciplinarity through citation patterns. The complex relations between the citing and cited articles were examined through schematic visualization. Examination of WOS categories assigned to journals shows the scatter of nano studies across a wide range of research topics. We identified four distinctive groups of categories each showing some detectable shared characteristics. Three alternative measures of similarity were employed to delineate these groups. These distinct groups enabled us to assess interdisciplinarity within the groups and relationships between the groups. Some measurable levels of interdisciplinarity exist in all groups. However, one of the groups indicated that certain categories of both citing as well as cited articles aggregate mostly in the framework of physics, chemistry, and materials. This may suggest that the nanosciences show characteristics of a distinct discipline. The similarity in citing articles is most evident inside the respective groups, though, some subgroups within larger groups are also related to each other through the similarity of cited articles.", "title": "" } ]
scidocsrr
0ce169d13f1650ed08cab1fe6935545e
Advancing the state of mobile cloud computing
[ { "docid": "a08aa88aa3b4249baddbd8843e5c9be3", "text": "We present the design, implementation, evaluation, and user experiences of the CenceMe application, which represents the first system that combines the inference of the presence of individuals using off-the-shelf, sensor-enabled mobile phones with sharing of this information through social networking applications such as Facebook and MySpace. We discuss the system challenges for the development of software on the Nokia N95 mobile phone. We present the design and tradeoffs of split-level classification, whereby personal sensing presence (e.g., walking, in conversation, at the gym) is derived from classifiers which execute in part on the phones and in part on the backend servers to achieve scalable inference. We report performance measurements that characterize the computational requirements of the software and the energy consumption of the CenceMe phone client. We validate the system through a user study where twenty two people, including undergraduates, graduates and faculty, used CenceMe continuously over a three week period in a campus town. From this user study we learn how the system performs in a production environment and what uses people find for a personal sensing system.", "title": "" } ]
[ { "docid": "6e30387a3706dea2b7d18668c08bb31b", "text": "The semantic web vision is one in which rich, ontology-based semantic markup will become widely available. The availability of semantic arkup on the web opens the way to novel, sophisticated forms of question answering. AquaLog is a portable question-answering system which akes queries expressed in natural language and an ontology as input, and returns answers drawn from one or more knowledge bases (KBs). We ay that AquaLog is portable because the configuration time required to customize the system for a particular ontology is negligible. AquaLog resents an elegant solution in which different strategies are combined together in a novel way. It makes use of the GATE NLP platform, string etric algorithms, WordNet and a novel ontology-based relation similarity service to make sense of user queries with respect to the target KB. oreover it also includes a learning component, which ensures that the performance of the system improves over the time, in response to the articular community jargon used by end users. 2007 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "673d6aea6c9a3ebde1d8bf30be9a8804", "text": "FDTD numerical study compared to the results of measurement is reported for double-ridged horn antenna with sinusoidal profile of the ridge. Different transitions from coaxial to double-ridged waveguide were considered on the preliminary step of the study. Next, a suitable configuration for feeding the ridges of antenna was chosen. The sinusoidal ridge taper is described in the next part of the paper. Finally, the simulations results of complete antenna are presented. Theoretical characteristics of reflection and antenna patterns are compared to the results of measurements showing acceptable accordance.", "title": "" }, { "docid": "7e7651261be84e2e05cde0ac9df69e6d", "text": "Searching a large database to find a sequence that is most similar to a query can be prohibitively expensive, particularly if individual sequence comparisons involve complex operations such as warping. To achieve scalability, \"pruning\" heuristics are typically employed to minimize the portion of the database that must be searched with more complex matching. We present an approximate pruning technique which involves embedding sequences in a Euclidean space. Sequences are embedded using a convolutional network with a form of attention that integrates over time, trained on matching and non-matching pairs of sequences. By using fixed-length embeddings, our pruning method effectively runs in constant time, making it many orders of magnitude faster than full dynamic time warping-based matching for large datasets. We demonstrate our approach on a large-scale musical score-to-audio recording retrieval task.", "title": "" }, { "docid": "de1c4c92e95320f5526c8af06acfadc0", "text": "Provides a method for automatic translation UML diagrams to Petri nets, which is to convert formats like structure .xmi and .cpn. We consider the transformation of the most frequently used items on activity diagram - state action, condition, fork and join based on rules transformation. These elements are shown in activity diagram and its corresponding Petri net. It is noted that in active diagram presence four types elements - state action, a pseudostate, final state and transition, in Petri nets involved three types elements - place, transition and arc. 
Discussed in detail the comparison of initial state, state action and final state of activity diagram and places Petri nets - element name and its properties.", "title": "" }, { "docid": "bbf764205f770481b787e76db5a3b614", "text": "A∗ is a popular path-finding algorithm, but it can only be applied to those domains where a good heuristic function is known. Inspired by recent methods combining Deep Neural Networks (DNNs) and trees, this study demonstrates how to train a heuristic represented by a DNN and combine it with A∗ . This new algorithm which we call א∗ can be used efficiently in domains where the input to the heuristic could be processed by a neural network. We compare א∗ to N-Step Deep QLearning (DQN Mnih et al. 2013) in a driving simulation with pixel-based input, and demonstrate significantly better performance in this scenario.", "title": "" }, { "docid": "700191eaaaf0bdd293fc3bbd24467a32", "text": "SMART (Semantic web information Management with automated Reasoning Tool) is an open-source project, which aims to provide intuitive tools for life scientists for represent, integrate, manage and query heterogeneous and distributed biological knowledge. SMART was designed with interoperability and extensibility in mind and uses AJAX, SVG and JSF technologies, RDF, OWL, SPARQL semantic web languages, triple stores (i.e. Jena) and DL reasoners (i.e. Pellet) for the automated reasoning. Features include semantic query composition and validation using DL reasoners, a graphical representation of the query, a mapping of DL queries to SPARQL, and the retrieval of pre-computed inferences from an RDF triple store. With a use case scenario, we illustrate how a biological scientist can intuitively query the yeast knowledge base and navigate the results. Continued development of this web-based resource for the biological semantic web will enable new information retrieval opportunities for the life sciences.", "title": "" }, { "docid": "3394eb51b71e5def4e4637963da347ab", "text": "In this paper we present a model of e-learning suitable for teacher training sessions. The main purpose of our work is to define the components of the educational system which influences the successful adoption of e-learning in the field of education. We also present the factors of the readiness of e-learning mentioned in the literature available and classifies them into the 3 major categories that constitute the components of every organization and consequently that of education. Finally, we present an implementation model of e-learning through the use of virtual private networks, which lends an added value to the realization of e-learning.", "title": "" }, { "docid": "1523534d398b4900c90d94e3f1bee422", "text": "PURPOSE\nThe purpose of this pilot study was to examine the effectiveness of hippotherapy as an intervention for the treatment of postural instability in individuals with multiple sclerosis (MS).\n\n\nSUBJECTS\nA sample of convenience of 15 individuals with MS (24-72 years) were recruited from support groups and assessed for balance deficits.\n\n\nMETHODS\nThis study was a nonequivalent pretest-posttest comparison group design. Nine individuals (4 males, 5 females) received weekly hippotherapy intervention for 14 weeks. The other 6 individuals (2 males, 4 females) served as a comparison group. 
All participants were assessed with the Berg Balance Scale (BBS) and Tinetti Performance Oriented Mobility Assessment (POMA) at 0, 7, and 14 weeks.\n\n\nRESULTS\nThe group receiving hippotherapy showed statistically significant improvement from pretest (0 week) to posttest (14 week) on the BBS (mean increase 9.15 points (x (2) = 8.82, p = 0.012)) and POMA scores (mean increase 5.13 (x (2) = 10.38, p = 0.006)). The comparison group had no significant changes on the BBS (mean increase 0.73 (x (2) = 0.40, p = 0.819)) or POMA (mean decrease 0.13 (x (2) = 1.41, p = 0.494)). A statistically significant difference was also found between the groups' final BBS scores (treatment group median = 55.0, comparison group median 41.0), U = 7, r = -0.49.\n\n\nDISCUSSION\nHippotherapy shows promise for the treatment of balance disorders in persons with MS. Further research is needed to refine protocols and selection criteria.", "title": "" }, { "docid": "e65d14dc0777e4a14fea6d00f06d9bfc", "text": "A novel single-layer dual band-notched printed circle-like slot antenna for ultrawideband (UWB) applications is presented. The proposed antenna comprises a circle-like slot, a trident-shaped feed line, and two nested C-shaped stubs. By using a trident-shaped feed line, much wider impedance bandwidth is obtained. Due to inserting a pair of nested C-shaped stubs on the back surface of the substrate, two frequency band-notches of 5.1-6.2 (WLAN) and 3-3.8 GHz (WiMAX) are achieved. The nested stubs are connected to the tuning stub using two cylindrical via pins. The designed antenna has a total size of 26 × 30 mm2 and operates over the frequency band between 2.5 and 25 GHz. Throughout this letter, experimental results of the impedance bandwidth, gain, and radiation patterns are compared and discussed .", "title": "" }, { "docid": "cc9686bac7de957afe52906763799554", "text": "A key issue in software evolution analysis is the identification of particular changes that occur across several versions of a program. We present change distilling, a tree differencing algorithm for fine-grained source code change extraction. For that, we have improved the existing algorithm by Chawathe et al. for extracting changes in hierarchically structured data. Our algorithm extracts changes by finding both a match between the nodes of the compared two abstract syntax trees and a minimum edit script that can transform one tree into the other given the computed matching. As a result, we can identify fine-grained change types between program versions according to our taxonomy of source code changes. We evaluated our change distilling algorithm with a benchmark that we developed, which consists of 1,064 manually classified changes in 219 revisions of eight methods from three different open source projects. We achieved significant improvements in extracting types of source code changes: Our algorithm approximates the minimum edit script 45 percent better than the original change extraction approach by Chawathe et al. We are able to find all occurring changes and almost reach the minimum conforming edit script, that is, we reach a mean absolute percentage error of 34 percent, compared to the 79 percent reached by the original algorithm. The paper describes both our change distilling algorithm and the results of our evolution.", "title": "" }, { "docid": "4264c3ed6ea24a896377a7efa2b425b0", "text": "The pervasiveness of Web 2.0 and social networking sites has enabled people to interact with each other easily through various social media. 
For instance, popular sites like Del.icio.us, Flickr, and YouTube allow users to comment on shared content (bookmarks, photos, videos), and users can tag their favorite content. Users can also connect with one another, and subscribe to or become a fan or a follower of others. These diverse activities result in a multi-dimensional network among actors, forming group structures with group members sharing similar interests or affiliations. This work systematically addresses two challenges. First, it is challenging to effectively integrate interactions over multiple dimensions to discover hidden community structures shared by heterogeneous interactions. We show that representative community detection methods for single-dimensional networks can be presented in a unified view. Based on this unified view, we present and analyze four possible integration strategies to extend community detection from single-dimensional to multi-dimensional networks. In particular, we propose a novel integration scheme based on structural features. Another challenge is the evaluation of different methods without ground truth information about community membership. We employ a novel cross-dimension network validation procedure to compare the performance of different methods. We use synthetic data to deepen our understanding, and real-world data to compare integration strategies as well as baseline methods in a large scale. We study further the computational time of different methods, normalization effect during integration, sensitivity to related parameters, and alternative community detection methods for integration. Lei Tang, Xufei Wang, Huan Liu Computer Science and Engineering, Arizona State University, Tempe, AZ 85287, USA E-mail: {L.Tang, Xufei.Wang, Huan.Liu@asu.edu}", "title": "" }, { "docid": "630c4e87333606c6c8e7345cb0865c64", "text": "MapReduce plays a critical role as a leading framework for big data analytics. In this paper, we consider a geodistributed cloud architecture that provides MapReduce services based on the big data collected from end users all over the world. Existing work handles MapReduce jobs by a traditional computation-centric approach that all input data distributed in multiple clouds are aggregated to a virtual cluster that resides in a single cloud. 
Its poor efficiency and high cost for big data support motivate us to propose a novel data-centric architecture with three key techniques, namely, cross-cloud virtual cluster, data-centric job placement, and network coding based traffic routing. Our design leads to an optimization framework with the objective of minimizing both computation and transmission cost for running a set of MapReduce jobs in geo-distributed clouds. We further design a parallel algorithm by decomposing the original large-scale problem into several distributively solvable subproblems that are coordinated by a high-level master problem. Finally, we conduct real-world experiments and extensive simulations to show that our proposal significantly outperforms the existing works.", "title": "" }, { "docid": "ecf5be2966efe597978a25c72dc676e4", "text": "A compact ±45° dual-polarized magneto-electric (ME) dipole base station antenna is proposed for 2G/3G/LTE applications. The antenna is excited by two Γ-shaped probes placed at a convenient location and two orthogonally octagonal loop electric dipoles are employed to achieve a wide impedance bandwidth. A stable antenna gain and a stable radiation pattern are realized by using a rectangular box-shaped reflector instead of planar one. The antenna is prototype and measured. Measured results show overlapped impedance bandwidth is 58% with standing-wave ratio (SWR) ≤ 1.5 from 1.68 to 3.05 GHz, port-to-port isolation is large than 26 dB within the bandwidth, and stable antenna gains of 8.6 ± 0.8 dBi and 8.3 ± 0.6 dBi for port 1 and port 2, respectively. Nearly symmetrical radiation patterns with low back lobe radiation both in horizontal and vertical planes, and narrow beamwidth can be also obtained. Moreover, the size of the antenna is very compact, which is only 0.79λ0 × 0.79λ0 × 0.26λ0. The proposed antenna can be used for multiband base stations in next generation communication systems.", "title": "" }, { "docid": "a94f066ec5db089da7fd19ac30fe6ee3", "text": "Information Centric Networking (ICN) is a new networking paradigm in which the ne twork provides users with content instead of communicatio n channels between hosts. Software Defined Networking (SDN) is an approach that promises to enable the co ntinuous evolution of networking architectures. In this paper we propose and discuss solutions to support ICN by using SDN concepts. We focus on an ICN framework called CONET, which groun ds its roots in the CCN/NDN architecture and can interwork with its implementation (CCNx). Altho ugh some details of our solution have been specifically designed for the CONET architecture, i ts general ideas and concepts are applicable to a c lass of recent ICN proposals, which follow the basic mod e of operation of CCN/NDN. We approach the problem in two complementary ways. First we discuss a general and long term solution based on SDN concepts without taking into account specific limit ations of SDN standards and equipment. Then we focus on an experiment to support ICN functionality over a large scale SDN testbed based on OpenFlow, developed in the context of the OFELIA Eu ropean research project. 
The current OFELIA testbed is based on OpenFlow 1.0 equipment from a v ariety of vendors, therefore we had to design the experiment taking into account the features that ar e currently available on off-the-shelf OpenFlow equipment.", "title": "" }, { "docid": "af49fef0867a951366cfb21288eeb3ed", "text": "As a discriminative method of one-shot learning, Siamese deep network allows recognizing an object from a single exemplar with the same class label. However, it does not take the advantage of the underlying structure and relationship among a multitude of instances since it only relies on pairs of instances for training. In this paper, we propose a quadruplet deep network to examine the potential connections among the training instances, aiming to achieve a more powerful representation. We design four shared networks that receive multi-tuple of instances as inputs and are connected by a novel loss function consisting of pair-loss and tripletloss. According to the similarity metric, we select the most similar and the most dissimilar instances as the positive and negative inputs of triplet loss from each multi-tuple. We show that this scheme improves the training performance and convergence speed. Furthermore, we introduce a new weighted pair loss for an additional acceleration of the convergence. We demonstrate promising results for model-free tracking-by-detection of objects from a single initial exemplar in the Visual Object Tracking benchmark.", "title": "" }, { "docid": "e6912f1b9e6060b452f2313766288e97", "text": "The air-core inductance of power transformers is measured using a nonideal low-power rectifier. Its dc output serves to drive the transformer into deep saturation, and its ripple provides low-amplitude variable excitation. The principal advantage of the proposed method is its simplicity. For validation, the experimental results are compared with 3-D finite-element simulations.", "title": "" }, { "docid": "a41c9650da7ca29a51d310cb4a3c814d", "text": "The analysis of resonant-type antennas based on the fundamental infinite wavelength supported by certain periodic structures is presented. Since the phase shift is zero for a unit-cell that supports an infinite wavelength, the physical size of the antenna can be arbitrary; the antenna's size is independent of the resonance phenomenon. The antenna's operational frequency depends only on its unit-cell and the antenna's physical size depends on the number of unit-cells. In particular, the unit-cell is based on the composite right/left-handed (CRLH) metamaterial transmission line (TL). It is shown that the CRLH TL is a general model for the required unit-cell, which includes a nonessential series capacitance for the generation of an infinite wavelength. The analysis and design of the required unit-cell is discussed based upon field distributions and dispersion diagrams. It is also shown that the supported infinite wavelength can be used to generate a monopolar radiation pattern. Infinite wavelength resonant antennas are realized with different number of unit-cells to demonstrate the infinite wavelength resonance", "title": "" }, { "docid": "86fca69ae48592e06109f7b05180db28", "text": "Background: The software development industry has been adopting agile methods instead of traditional software development methods because they are more flexible and can bring benefits such as handling requirements changes, productivity gains and business alignment. 
Objective: This study seeks to evaluate, synthesize, and present aspects of research on agile methods tailoring including the method tailoring approaches adopted and the criteria used for agile practice selection. Method: The method adopted was a Systematic Literature Review (SLR) on studies published from 2002 to 2014. Results: 56 out of 783 papers have been identified as describing agile method tailoring approaches. These studies have been identified as case studies regarding the empirical research, as solution proposals regarding the research type, and as evaluation studies regarding the research validation type. Most of the papers used method engineering to implement tailoring and were not specific to any agile method on their scope. Conclusion: Most of agile methods tailoring research papers proposed or improved a technique, were implemented as case studies analyzing one case in details and validated their findings using evaluation. Method engineering was the base for tailoring, the approaches are independent of agile method and the main criteria used are internal environment and objectives variables. © 2015 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "21eddfd81b640fc1810723e93f94ae5d", "text": "R. B. Gnanajothi, Topics in graph theory, Ph. D. thesis, Madurai Kamaraj University, India, 1991. E. M. Badr, On the Odd Gracefulness of Cyclic Snakes With Pendant Edges, International journal on applications of graph theory in wireless ad hoc networks and sensor networks (GRAPH-HOC) Vol. 4, No. 4, December 2012. E. M. Badr, M. I. Moussa & K. Kathiresan (2011): Crown graphs and subdivision of ladders are odd graceful, International Journal of Computer Mathematics, 88:17, 3570-3576. A. Rosa, On certain valuation of the vertices of a graph, Theory of Graphs (International Symposium, Rome, July 1966), Gordon and Breach, New York and Dunod Paris (1967) 349-355. A. Solairaju & P. Muruganantham, Even Vertex Gracefulness of Fan Graph,", "title": "" }, { "docid": "5b6d68984b4f9a6e0f94e0a68768dc8c", "text": "In this paper, we focus on a major internet problem which is a huge amount of uncategorized text. We review existing techniques used for feature selection and categorization. After reviewing the existing literature, it was found that there exist some gaps in existing algorithms, one of which is a requirement of the labeled dataset for the training of the classifier. Keywords— Bayesian; KNN; PCA; SVM; TF-IDF", "title": "" } ]
scidocsrr
2b3fb61ba8c0b8c5d5df64c9fe2c86cb
Fast Group Recommendations by Applying User Clustering
[ { "docid": "3663322ebe405b5e9d588ccdf305da02", "text": "In this demonstration paper, we present gRecs, a system for group recommendations that follows a collaborative strategy. We enhance recommendations with the notion of support to model the confidence of the recommendations. Moreover, we propose partitioning users into clusters of similar ones. This way, recommendations for users are produced with respect to the preferences of their cluster members without extensively searching for similar users in the whole user base. Finally, we leverage the power of a top-k algorithm for locating the top-k group recommendations.", "title": "" }, { "docid": "455a6fe5862e3271ac00057d1b569b11", "text": "Personalization technologies and recommender systems help online consumers avoid information overload by making suggestions regarding which information is most relevant to them. Most online shopping sites and many other applications now use recommender systems. Two new recommendation techniques leverage multicriteria ratings and improve recommendation accuracy as compared with single-rating recommendation approaches. Taking full advantage of multicriteria ratings in personalization applications requires new recommendation techniques. In this article, we propose several new techniques for extending recommendation technologies to incorporate and leverage multicriteria rating information.", "title": "" } ]
[ { "docid": "c591881de09c709ae2679cacafe24008", "text": "This paper discusses a technique to estimate the position of a sniper using a spatial microphone array placed on elevated platforms. The shooter location is obtained from the exact location of the microphone array, from topographic information of the area and from an estimated direction of arrival (DoA) of the acoustic wave related to the explosion in the gun barrel, which is known as muzzle blast. The estimation of the DOA is based on the time differences the sound wavefront arrives at each pair of microphones, employing a technique known as Generalized Cross Correlation (GCC) with phase transform. The main idea behind the localization procedure used herein is that, based on the DoA, the acoustical path of the muzzle blast (from the weapon to the microphone) can be marked as a straight line on a terrain profile obtained from an accurate digital map, allowing the estimation of the shooter location whenever the microphone array is located on an dominant position. In addition, a new approach to improve the DoA estimation from a cognitive selection of microphones is introduced. In this technique, the microphones selected must form a consistent (sum of delays equal to zero) fundamental loop. The results obtained after processing muzzle blast gunshot signals recorded in a typical scenario, show the effectiveness of the proposed method.", "title": "" }, { "docid": "52844cb9280029d5ddec869945b28be2", "text": "In this work, a new fast dynamic community detection algorithm for large scale networks is presented. Most of the previous community detection algorithms are designed for static networks. However, large scale social networks are dynamic and evolve frequently over time. To quickly detect communities in dynamic large scale networks, we proposed dynamic modularity optimizer framework (DMO) that is constructed by modifying well-known static modularity based community detection algorithm. The proposed framework is tested using several different datasets. According to our results, community detection algorithms in the proposed framework perform better than static algorithms when large scale dynamic networks are considered.", "title": "" }, { "docid": "6f9ffe5e1633046418ca0bc4f7089b2f", "text": "This paper presents a new motion planning primitive to be used for the iterative steering of vision-based autonomous vehicles. This primitive is a parameterized quintic spline, denoted as -spline, that allows interpolating an arbitrary sequence of points with overall second-order geometric ( -) continuity. Issues such as completeness, minimality, regularity, symmetry, and flexibility of these -splines are addressed in the exposition. The development of the new primitive is tightly connected to the inversion control of nonholonomic car-like vehicles. The paper also exposes a supervisory strategy for iterative steering that integrates feedback vision data processing with the feedforward inversion control.", "title": "" }, { "docid": "f267da735820809d9c93672299db43f5", "text": "The Feigenbaum constants arise in the theory of iteration of real functions. We calculate here to high precision the constants a and S associated with period-doubling bifurcations for maps with a single maximum of order z , for 2 < z < 12. Multiple-precision floating-point techniques are used to find a solution of Feigenbaum's functional equation, and hence the constants. 1. 
History. Consider the iteration of the function (1) f_{μ,z}(x) = 1 - μ|x|^z, z > 0; that is, the sequence (2) x_{i+1} = f_{μ,z}(x_i), i = 1, 2, ...; x_0 = 0. In 1979 Feigenbaum [8] observed that there exist bifurcations in the set of limit points of (2) (that is, in the set of all points which are the limit of some infinite subsequence) as the parameter μ is increased for fixed z. Roughly speaking, if the sequence (2) is asymptotically periodic with period p for a particular parameter value μ (that is, there exists a stable p-cycle), then as μ is increased, the period will be observed to double, so that a stable 2p-cycle appears. We denote the critical μ-value at which the 2^j cycle first appears by μ_j. Feigenbaum also conjectured that there exist certain \"universal\" scaling constants associated with these bifurcations. Specifically, (3) δ_z = lim_{j→∞} (μ_j - μ_{j-1})/(μ_{j+1} - μ_j) exists, and δ_2 is about 4.669. Similarly, if d_j is the value of the nearest cycle element to 0 in the 2^j cycle, then (4) α_z = lim_{j→∞} d_j/d_{j+1} exists, and α_2 is about -2.503. Received November 22, 1989; revised September 10, 1990. 1980 Mathematics Subject Classification (1985 Revision). Primary 11Y60, 26A18, 39A10, 65Q05. ©1991 American Mathematical Society 0025-5718/91 $1.00 + $.25 per page", "title": "" }, { "docid": "55fef695aadc5d524e2d858345dc325f", "text": "The number of offboard fast charging stations is increasing as plug-in electric vehicles (PEVs) are more widespread in the world. Additional features on the operation of chargers will result in more benefits for investors, utility companies, and PEV owners. This paper investigates reactive power support operation using offboard PEV charging stations while charging a PEV battery. The topology consists of a three-phase ac-dc boost rectifier that is capable of operating in all four quadrants. The operation modes that are of interest are power-factor-corrected charging operation, and charging and capacitive/inductive reactive power operation. This paper also presents a control system for the PQ command following of a bidirectional offboard charger. The controller only receives the charging power command from a user and the reactive power command (when needed) from a utility, and it adjusts the line current and the battery charging current correspondingly. The vehicle's battery is not affected during the reactive power operation. A simulation study is developed utilizing PSIM, and the control system is experimentally tested using a 12.5-kVA charging station design.", "title": "" }, { "docid": "320947783c6a43fe858e3ab97f231d9f", "text": "Almost all orthopaedic surgeons come across acute compartment syndrome (ACS) in their clinical practice. Diagnosis of ACS mostly relies on clinical findings. If the diagnosis is missed and left untreated, it can lead to serious consequences which can endanger limb and life of the patient and also risk the clinician to face lawsuits. This review article highlights the characteristic features of ACS which will help an orthopaedic surgeon to understand the pathophysiology, natural history, high risk patients, diagnosis, and surgical management of the condition.", "title": "" }, { "docid": "c056fa934bbf9bc6a286cd718f3a7217", "text": "The advent of deep sub-micron technology has exacerbated reliability issues in on-chip interconnects. In particular, single event upsets, such as soft errors, and hard faults are rapidly becoming a force to be reckoned with. 
This spiraling trend highlights the importance of detailed analysis of these reliability hazards and the incorporation of comprehensive protection measures into all network-on-chip (NoC) designs. In this paper, we examine the impact of transient failures on the reliability of on-chip interconnects and develop comprehensive counter-measures to either prevent or recover from them. In this regard, we propose several novel schemes to remedy various kinds of soft error symptoms, while keeping area and power overhead at a minimum. Our proposed solutions are architected to fully exploit the available infrastructures in an NoC and enable versatile reuse of valuable resources. The effectiveness of the proposed techniques has been validated using a cycle-accurate simulator", "title": "" }, { "docid": "642b98bf1ea22958411514cb7f01ef68", "text": "This paper studies the problems of vehicle make & model classification. Some of the main challenges are reaching high classification accuracy and reducing the annotation time of the images. To address these problems, we have created a fine-grained database using online vehicle marketplaces of Turkey. A pipeline is proposed to combine an SSD (Single Shot Multibox Detector) model with a CNN (Convolutional Neural Network) model to train on the database. In the pipeline, we first detect the vehicles by following an algorithm which reduces the time for annotation. Then, we feed them into the CNN model. It is reached approximately 4% better classification accuracy result than using a conventional CNN model. Next, we propose to use the detected vehicles as ground truth bounding box (GTBB) of the images and feed them into an SSD model in another pipeline. At this stage, it is reached reasonable classification accuracy result without using perfectly shaped GTBB. Lastly, an application is implemented in a use case by using our proposed pipelines. It detects the unauthorized vehicles by comparing their license plate numbers and make & models. It is assumed that license plates are readable.", "title": "" }, { "docid": "a8c1224f291df5aeb655a2883b16bcfb", "text": "We present a scalable approach to automatically suggest relevant clothing products, given a single image without metadata. We formulate the problem as cross-scenario retrieval: the query is a real-world image, while the products from online shopping catalogs are usually presented in a clean environment. We divide our approach into two main stages: a) Starting from articulated pose estimation, we segment the person area and cluster promising image regions in order to detect the clothing classes present in the query image. b) We use image retrieval techniques to retrieve visually similar products from each of the detected classes. We achieve clothing detection performance comparable to the state-of-the-art on a very recent annotated dataset, while being more than 50 times faster. Finally, we present a large scale clothing suggestion scenario, where the product database contains over one million products.", "title": "" }, { "docid": "361e874cccb263b202155ef92e502af3", "text": "String similarity join is an important operation in data integration and cleansing that finds similar string pairs from two collections of strings. More than ten algorithms have been proposed to address this problem in the recent two decades. However, existing algorithms have not been thoroughly compared under the same experimental framework. For example, some algorithms are tested only on specific datasets. 
This makes it rather difficult for practitioners to decide which algorithms should be used for various scenarios. To address this problem, in this paper we provide a comprehensive survey on a wide spectrum of existing string similarity join algorithms, classify them into different categories based on their main techniques, and compare them through extensive experiments on a variety of real-world datasets with different characteristics. We also report comprehensive findings obtained from the experiments and provide new insights about the strengths and weaknesses of existing similarity join algorithms which can guide practitioners to select appropriate algorithms for various scenarios.", "title": "" }, { "docid": "4ead8caeea4143b8c5deb2ea91e0a141", "text": "The statistical discrimination and clustering literature has studied the problem of identifying similarities in time series data. Some studies use non-parametric approaches for splitting a set of time series into clusters by looking at their Euclidean distances in the space of points. A new measure of distance between time series based on the normalized periodogram is proposed. Simulation results comparing this measure with others parametric and non-parametric metrics are provided. In particular, the classification of time series as stationary or as non-stationary is discussed. The use of both hierarchical and non-hierarchical clustering algorithms is considered. An illustrative example with economic time series data is also presented. © 2005 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "421e3e91c92c10485c6da9b29d37521d", "text": "STUDY OBJECTIVES\nThe psychomotor vigilance test (PVT) is among the most widely used measures of behavioral alertness, but there is large variation among published studies in PVT performance outcomes and test durations. To promote standardization of the PVT and increase its sensitivity and specificity to sleep loss, we determined PVT metrics and task durations that optimally discriminated sleep deprived subjects from alert subjects.\n\n\nDESIGN\nRepeated-measures experiments involving 10-min PVT assessments every 2 h across both acute total sleep deprivation (TSD) and 5 days of chronic partial sleep deprivation (PSD).\n\n\nSETTING\nControlled laboratory environment.\n\n\nPARTICIPANTS\n74 healthy subjects (34 female), aged 22-45 years.\n\n\nINTERVENTIONS\nTSD experiment involving 33 h awake (N = 31 subjects) and a PSD experiment involving 5 nights of 4 h time in bed (N = 43 subjects).\n\n\nMEASUREMENTS AND RESULTS\nIn a paired t-test paradigm and for both TSD and PSD, effect sizes of 10 different PVT performance outcomes were calculated. Effect sizes were high for both TSD (1.59-1.94) and PSD (0.88-1.21) for PVT metrics related to lapses and to measures of psychomotor speed, i.e., mean 1/RT (response time) and mean slowest 10% 1/RT. In contrast, PVT mean and median RT outcomes scored low to moderate effect sizes influenced by extreme values. Analyses facilitating only portions of the full 10-min PVT indicated that for some outcomes, high effect sizes could be achieved with PVT durations considerably shorter than 10 min, although metrics involving lapses seemed to profit from longer test durations in TSD.\n\n\nCONCLUSIONS\nDue to their superior conceptual and statistical properties and high sensitivity to sleep deprivation, metrics involving response speed and lapses should be considered primary outcomes for the 10-min PVT. 
In contrast, PVT mean and median metrics, which are among the most widely used outcomes, should be avoided as primary measures of alertness. Our analyses also suggest that some shorter-duration PVT versions may be sensitive to sleep loss, depending on the outcome variable selected, although this will need to be confirmed in comparative analyses of separate duration versions of the PVT. Using both sensitive PVT metrics and optimal test durations maximizes the sensitivity of the PVT to sleep loss and therefore potentially decreases the sample size needed to detect the same neurobehavioral deficit. We propose criteria to better standardize the 10-min PVT and facilitate between-study comparisons and meta-analyses.", "title": "" }, { "docid": "4f278f699b587f01191bc7f06839a548", "text": "This paper describes the design and the realization of a low-frequency ac magnetic-field-based indoor positioning system (PS). The system operation is based on the principle of inductive coupling between wire loop antennas. Specifically, due to the characteristics of the ac artificially generated magnetic fields, the relation between the induced voltage and the distance is modeled with a linear behavior in a bilogarithmic scale when a configuration with coplanar, thus equally oriented, antennas is used. In this case, the distance between a transmitting antenna and a receiving one is estimated using measurements of the induced voltage in the latter. For a high operational range, the system makes use of resonant antennas tuned at the same nominal resonant frequency. The quality factors act as antenna gain increasing the amplitude of the induced voltage. The low-operating frequency is the key factor for improving robustness against nonline-of-sight (NLOS) conditions and environment influences with respect to other existing solutions. The realized prototype, which is implemented using off-the-shelf components, exhibits an average and maximum positioning error, respectively, lower than 0.3 and 0.9 m in an indoor environment over a large area of 15 m × 12 m in NLOS conditions. Similar performance is obtained in an outdoor environment over an area of 30 m × 14 m. Furthermore, the system does not require any type of synchronization between the nodes and can accommodate an arbitrary number of users without additional infrastructure.", "title": "" }, { "docid": "a041c18f97eb9b5b2ed2e5315d542b96", "text": "While 360° cameras offer tremendous new possibilities in vision, graphics, and augmented reality, the spherical images they produce make core feature extraction non-trivial. Convolutional neural networks (CNNs) trained on images from perspective cameras yield “flat\" filters, yet 360° images cannot be projected to a single plane without significant distortion. A naive solution that repeatedly projects the viewing sphere to all tangent planes is accurate, but much too computationally intensive for real problems. We propose to learn a spherical convolutional network that translates a planar CNN to process 360° imagery directly in its equirectangular projection. Our approach learns to reproduce the flat filter outputs on 360° data, sensitive to the varying distortion effects across the viewing sphere. The key benefits are 1) efficient feature extraction for 360° images and video, and 2) the ability to leverage powerful pre-trained networks researchers have carefully honed (together with massive labeled image training sets) for perspective images. 
We validate our approach compared to several alternative methods in terms of both raw CNN output accuracy as well as applying a state-of-the-art “flat\" object detector to 360° data. Our method yields the most accurate results while saving orders of magnitude in computation versus the existing exact reprojection solution.", "title": "" }, { "docid": "054ed84aa377673d1327dedf26c06c59", "text": "App Stores, such as Google Play or the Apple Store, allow users to provide feedback on apps by posting review comments and giving star ratings. These platforms constitute a useful electronic mean in which application developers and users can productively exchange information about apps. Previous research showed that users feedback contains usage scenarios, bug reports and feature requests, that can help app developers to accomplish software maintenance and evolution tasks. However, in the case of the most popular apps, the large amount of received feedback, its unstructured nature and varying quality can make the identification of useful user feedback a very challenging task. In this paper we present a taxonomy to classify app reviews into categories relevant to software maintenance and evolution, as well as an approach that merges three techniques: (1) Natural Language Processing, (2) Text Analysis and (3) Sentiment Analysis to automatically classify app reviews into the proposed categories. We show that the combined use of these techniques allows to achieve better results (a precision of 75% and a recall of 74%) than results obtained using each technique individually (precision of 70% and a recall of 67%).", "title": "" }, { "docid": "cbb5d9269067ad2bbdb2c9823338d752", "text": "This Paper reveals the information about Deep Neural Network (DNN) and concept of deep learning in field of natural language processing i.e. machine translation. Now day's DNN is playing major role in machine leaning technics. Recursive recurrent neural network (R2NN) is a best technic for machine learning. It is the combination of recurrent neural network and recursive neural network (such as Recursive auto encoder). This paper presents how to train the recurrent neural network for reordering for source to target language by using Semi-supervised learning methods. Word2vec tool is required to generate word vectors of source language and Auto encoder helps us in reconstruction of the vectors for target language in tree structure. Results of word2vec play an important role in word alignment of the input vectors. RNN structure is very complicated and to train the large data file on word2vec is also a time-consuming task. Hence, a powerful hardware support (GPU) is required. GPU improves the system performance by decreasing training time period.", "title": "" }, { "docid": "fd97b7130c7d1828566422f49c857db5", "text": "The phase noise of phase/frequency detectors can significantly raise the in-band phase noise of frequency synthesizers, corrupting the modulated signal. This paper analyzes the phase noise mechanisms in CMOS phase/frequency detectors and applies the results to two different topologies. It is shown that an octave increase in the input frequency raises the phase noise by 6 dB if flicker noise is dominant and by 3 dB if white noise is dominant. An optimization methodology is also proposed that lowers the phase noise by 4 to 8 dB for a given power consumption. 
Simulation and analytical results agree to within 3.1 dB for the two topologies at different frequencies.", "title": "" }, { "docid": "119215115226e0bd3ee4c2762433aad5", "text": "Super-coiled polymer (SCP) artificial muscles have many attractive properties, such as high energy density, large contractions, and good dynamic range. To fully utilize them for robotic applications, it is necessary to determine how to scale them up effectively. Bundling of SCP actuators, as though they are individual threads in woven textiles, can demonstrate the versatility of SCP actuators and artificial muscles in general. However, this versatility comes with a need to understand how different bundling techniques can be achieved with these actuators and how they may trade off in performance. This letter presents the first quantitative comparison, analysis, and modeling of bundled SCP actuators. By exploiting weaving and braiding techniques, three new types of bundled SCP actuators are created: woven bundles, two-dimensional, and three-dimensional braided bundles. The bundle performance is adjustable by employing different numbers of individual actuators. Experiments are conducted to characterize and compare the force, strain, and speed of different bundles, and a linear model is proposed to predict their performance. This work lays the foundation for model-based SCP-actuated textiles, and physically scaling robots that employ SCP actuators as the driving mechanism.", "title": "" }, { "docid": "3c7d25c85b837a3337c93ca2e1e54af4", "text": "BACKGROUND\nThe treatment of acne scars with fractional CO(2) lasers is gaining increasing impact, but has so far not been compared side-by-side to untreated control skin.\n\n\nOBJECTIVE\nIn a randomized controlled study to examine efficacy and adverse effects of fractional CO(2) laser resurfacing for atrophic acne scars compared to no treatment.\n\n\nMETHODS\nPatients (n = 13) with atrophic acne scars in two intra-individual areas of similar sizes and appearances were randomized to (i) three monthly fractional CO(2) laser treatments (MedArt 610; 12-14 W, 48-56 mJ/pulse, 13% density) and (ii) no treatment. Blinded on-site evaluations were performed by three physicians on 10-point scales. Endpoints were change in scar texture and atrophy, adverse effects, and patient satisfaction.\n\n\nRESULTS\nPreoperatively, acne scars appeared with moderate to severe uneven texture (6.15 ± 1.23) and atrophy (5.72 ± 1.45) in both interventional and non-interventional control sites, P = 1. Postoperatively, lower scores of scar texture and atrophy were obtained at 1 month (scar texture 4.31 ± 1.33, P < 0.0001; atrophy 4.08 ± 1.38, P < 0.0001), at 3 months (scar texture 4.26 ± 1.97, P < 0.0001; atrophy 3.97 ± 2.08, P < 0.0001), and at 6 months (scar texture 3.89 ± 1.7, P < 0.0001; atrophy 3.56 ± 1.76, P < 0.0001). Patients were satisfied with treatments and evaluated scar texture to be mild or moderately improved. Adverse effects were minor.\n\n\nCONCLUSIONS\nIn this single-blinded randomized controlled trial we demonstrated that moderate to severe atrophic acne scars can be safely improved by ablative fractional CO(2) laser resurfacing. The use of higher energy levels might have improved the results and possibly also induced significant adverse effects.", "title": "" } ]
scidocsrr
2a2454400b07e3f22b9b5a3b5ff8fa9e
A shortest augmenting path algorithm for dense and sparse linear assignment problems
[ { "docid": "a48309ea49caa504cdc14bf77ec57472", "text": "We propose a new algorithm for the classical assignment problem. The algorithm resembles in some ways the Hungarian method but differs substantially in other respects. The average computational complexity of an efficient implementation of the algorithm seems to be considerably better than the one of the Hungarian method. In a large number of randomly generated problems the algorithm has consistently outperformed an efficiently coded version of the Hungarian method by a broad margin. The factor of improvement increases with the problem dimension N and reaches an order of magnitude for N equal to several hundreds.", "title": "" } ]
[ { "docid": "6155030c582f3c893e80136fcea90ecf", "text": "Drawing on a survey of 745 Dutch adolescents ages 13 to 18, the authors investigated (a) the occurrence and frequency of adolescents’ exposure to sexually explicit material on the Internet and (b) the correlates of this exposure. Seventy-one percent of the male adolescents and 40% of the female adolescents had been exposed to some kind of online sexually explicit material in the 6 months prior to the interview. Adolescents were more likely to be exposed to sexually explicit material online if they were male, were high sensation seekers, were less satisfied with their lives, were more sexually interested, used sexual content in other media more often, had a fast Internet connection, and had friends that were predominantly younger. Among male adolescents, a more advanced pubertal status was also associated with more frequent exposure to online sexually explicit material. Among female adolescents, greater sexual experience decreased exposure to online sexually explicit material.", "title": "" }, { "docid": "51f90bbb8519a82983eec915dd643d34", "text": "The growth of vehicles in Yogyakarta Province, Indonesia is not proportional to the growth of roads. This problem causes severe traffic jam in many main roads. Common traffic anomalies detection using surveillance camera requires manpower and costly, while traffic anomalies detection with crowdsourcing mobile applications are mostly owned by private. This research aims to develop a real-time traffic classification by harnessing the power of social network data, Twitter. In this study, Twitter data are processed to the stages of preprocessing, feature extraction, and tweet classification. This study compares classification performance of three machine learning algorithms, namely Naive Bayes (NB), Support Vector Machine (SVM), and Decision Tree (DT). Experimental results show that SVM algorithm produced the best performance among the other algorithms with 99.77% and 99.87% of classification accuracy in balanced and imbalanced data, respectively. This research implies that social network service may be used as an alternative source for traffic anomalies detection by providing information of traffic flow condition in real-time.", "title": "" }, { "docid": "76656cc995bb0a3b6644b1c5eeab2cff", "text": "Article history: Available online 27 April 2013", "title": "" }, { "docid": "e33b3ebfc46c371253cf7f68adbbe074", "text": "Although backward folding of the epiglottis is one of the signal events of the mammalian adult swallow, the epiglottis does not fold during the infant swallow. How this functional change occurs is unknown, but we hypothesize that a change in swallow mechanism occurs with maturation, prior to weaning. Using videofluoroscopy, we found three characteristic patterns of swallowing movement at different ages in the pig: an infant swallow, a transitional swallow and a post-weaning (juvenile or adult) swallow. In animals of all ages, the dorsal region of the epiglottis and larynx was held in an intranarial position by a muscular sphincter formed by the palatopharyngeal arch. In the infant swallow, increasing pressure in the oropharynx forced a liquid bolus through the piriform recesses on either side of a relatively stationary epiglottis into the esophagus. As the infant matured, the palatopharyngeal arch and the soft palate elevated at the beginning of the swallow, so exposing a larger area of the epiglottis to bolus pressure. 
In transitional swallows, the epiglottis was tilted backward relatively slowly by a combination of bolus pressure and squeezing of the epiglottis by closure of the palatopharyngeal sphincter. The bolus, however, traveled alongside but never over the tip of the epiglottis. In the juvenile swallow, the bolus always passed over the tip of the epiglottis. The tilting of the epiglottis resulted from several factors, including the action of the palatopharyngeal sphincter, higher bolus pressure exerted on the epiglottis and the allometry of increased size. In both transitional and juvenile swallows, the subsequent relaxation of the palatopharyngeal sphincter released the epiglottis, which sprang back to its original intranarial position.", "title": "" }, { "docid": "2bd9f317404d556b5967e6dcb6832b1b", "text": "Ischemic Heart Disease (IHD) and stroke are statistically the leading causes of death world-wide. Both diseases deal with various types of cardiac arrhythmias, e.g. premature ventricular contractions (PVCs), ventricular and supra-ventricular tachycardia, atrial fibrillation. For monitoring and detecting such an irregular heart rhythm accurately, we are now developing a very cost-effective ECG monitor, which is implemented in 8-bit MCU with an efficient QRS detector using steep-slope algorithm and arrhythmia detection algorithm using a simple heart rate variability (HRV) parameter. This work shows the results of evaluating the real-time steep-slope algorithm using MIT-BIH Arrhythmia Database. The performance of this algorithm has 99.72% of sensitivity and 99.19% of positive predictivity. We then show the preliminary results of arrhythmia detection using various types of normal and abnormal ECGs from an ECG simulator. The result is, 18 of 20 ECG test signals were correctly detected.", "title": "" }, { "docid": "faad414eebea949d944e045f9cec3cf4", "text": "This note introduces practical set invariance notions for physically interconnected, discrete–time systems, subject to additive but bounded disturbances. The developed approach provides a decentralized, non–conservative and computationally tractable way to study desirable robust positive invariance and stability notions for the overall system as well as to guarantee safe and independent operation of the constituting subsystems. These desirable properties are inherited, under mild assumptions, from the classical stability and invariance properties of the associated vector–valued dynamics which capture in a simple but appropriate and non– conservative way the dynamical behavior induced by the underlying set–dynamics of interest.", "title": "" }, { "docid": "d90efd08169f350d336afcbea291306c", "text": "This paper describes a multi-UAV distributed decisional architecture developed in the framework of the AWARE Project together with a set of tests with real Unmanned Aerial Vehicles (UAVs) and Wireless Sensor Networks (WSNs) to validate this approach in disaster management and civil security applications. The paper presents the different components of the AWARE platform and the scenario in which the multi-UAV missions were carried out. The missions described in this paper include surveillance with multiple UAVs, sensor deployment and fire threat confirmation. In order to avoid redundancies, instead of describing the operation of the full architecture for every mission, only non-overlapping aspects are highlighted in each one. 
Key issues in multi-UAV systems such as distributed task allocation, conflict resolution and plan refining are solved in the execution of the missions.", "title": "" }, { "docid": "f43ed3feda4e243a1cb77357b435fb52", "text": "Existing text generation methods tend to produce repeated and “boring” expressions. To tackle this problem, we propose a new text generation model, called Diversity-Promoting Generative Adversarial Network (DP-GAN). The proposed model assigns low reward for repeatedly generated text and high reward for “novel” and fluent text, encouraging the generator to produce diverse and informative text. Moreover, we propose a novel languagemodel based discriminator, which can better distinguish novel text from repeated text without the saturation problem compared with existing classifier-based discriminators. The experimental results on review generation and dialogue generation tasks demonstrate that our model can generate substantially more diverse and informative text than existing baselines.1", "title": "" }, { "docid": "548d87ac6f8a023d9f65af371ad9314c", "text": "Mindfiilness meditation is an increasingly popular intervention for the treatment of physical illnesses and psychological difficulties. Using intervention strategies with mechanisms familiar to cognitive behavioral therapists, the principles and practice of mindfijlness meditation offer promise for promoting many of the most basic elements of positive psychology. It is proposed that mindfulness meditation promotes positive adjustment by strengthening metacognitive skills and by changing schemas related to emotion, health, and illness. Additionally, the benefits of yoga as a mindfulness practice are explored. Even though much empirical work is needed to determine the parameters of mindfulness meditation's benefits, and the mechanisms by which it may achieve these benefits, theory and data thus far clearly suggest the promise of mindfulness as a link between positive psychology and cognitive behavioral therapies.", "title": "" }, { "docid": "316ba707462fa3f29ecb08d8552a8c2d", "text": "This paper presents a novel fiber optic tactile probe designed for tissue palpation during minimally invasive surgery (MIS). The probe consists of 3×4 tactile sensing elements at 2.6mm spacing with a dimension of 12×18×8 mm3 allowing its application via a 25mm surgical port. Each tactile element converts the applied pressure values into a circular image pattern. The image patterns of all the sensing elements are captured by a camera attached at the proximal end of the sensor system. Processing the intensity and the area of these circular patterns allows the computation of the applied pressure across the sensing array. Validation tests show that each sensing element of the tactile probe can measure forces from 0 to 1N with a resolution of 0.05 N. The proposed sensing concept is low cost, lightweight, sterilizable, easy to be miniaturized and compatible for magnetic resonance (MR) environments. Experiments using the developed sensor for tissue abnormality detection were conducted. Results show that the proposed tactile probe can accurately and effectively detect nodules embedded inside soft tissue, demonstrating the promising application of this probe for surgical palpation during MIS.", "title": "" }, { "docid": "bdb738a5df12bbd3862f0e5320856473", "text": "The Extended Kalman Filter (EKF) has become a standard technique used in a number of nonlinear estimation and machine learning applications. 
These include estimating the state of a nonlinear dynamic system, estimating parameters for nonlinear system identification (e.g., learning the weights of a neural network), and dual estimation (e.g., the ExpectationMaximization (EM) algorithm)where both states and parameters are estimated simultaneously. This paper points out the flaws in using the EKF, and introduces an improvement, the Unscented Kalman Filter (UKF), proposed by Julier and Uhlman [5]. A central and vital operation performed in the Kalman Filter is the propagation of a Gaussian random variable (GRV) through the system dynamics. In the EKF, the state distribution is approximated by a GRV, which is then propagated analytically through the first-order linearization of the nonlinear system. This can introduce large errors in the true posterior mean and covariance of the transformed GRV, which may lead to sub-optimal performance and sometimes divergence of the filter. The UKF addresses this problem by using a deterministic sampling approach. The state distribution is again approximated by a GRV, but is now represented using a minimal set of carefully chosen sample points. These sample points completely capture the true mean and covariance of the GRV, and when propagated through the true nonlinear system, captures the posterior mean and covariance accurately to the 3rd order (Taylor series expansion) for any nonlinearity. The EKF, in contrast, only achieves first-order accuracy. Remarkably, the computational complexity of the UKF is the same order as that of the EKF. Julier and Uhlman demonstrated the substantial performance gains of the UKF in the context of state-estimation for nonlinear control. Machine learning problems were not considered. We extend the use of the UKF to a broader class of nonlinear estimation problems, including nonlinear system identification, training of neural networks, and dual estimation problems. Our preliminary results were presented in [13]. In this paper, the algorithms are further developed and illustrated with a number of additional examples. This work was sponsored by the NSF under grant grant IRI-9712346", "title": "" }, { "docid": "5cc2a5b23d2da7f281270e0ca4a097e1", "text": "It is widely accepted that the deficiencies in public sector health system can only be overcome by significant reforms. The need for reforms in India s health sector has been emphasized by successive plan documents since the Eighth Five-Year Plan in 1992, by the 2002 national health policy and by international donor agencies. The World Bank (2001:12,14), which has been catalytic in initiating health sector reforms in many states, categorically emphasized: now is the time to carry out radical experiments in India’s health sector, particularly since the status quo is leading to a dead end. . But it is evident that there is no single strategy that would be best option The proposed reforms are not cheap, but the cost of not reforming is even greater”.", "title": "" }, { "docid": "84f6bc32035aab1e490d350c687df342", "text": "Popularity bias is a phenomenon associated with collaborative filtering algorithms, in which popular items tend to be recommended over unpopular items. As the appropriate level of item popularity differs depending on individual users, a user-level modification approach can produce diverse recommendations while improving the recommendation accuracy. However, there are two issues with conventional user-level approaches. 
First, these approaches do not isolate users’ preferences from their tendencies toward item popularity clearly. Second, they do not consider temporal item popularity, although item popularity changes dynamically over time in reality. In this paper, we propose a novel approach to counteract the popularity bias, namely, matrix factorization based collaborative filtering incorporating individual users’ tendencies toward item popularity. Our model clearly isolates users’ preferences from their tendencies toward popularity. In addition, we consider the temporal item popularity and incorporate it into our model. Experimental results using a real-world dataset show that our model improve both accuracy and diversity compared with a baseline algorithm in both static and time-varying models. Moreover, our model outperforms conventional approaches in terms of accuracy with the same diversity level. Furthermore, we show that our proposed model recommends items by capturing users’ tendencies toward item popularity: it recommends popular items for the user who likes popular items, while recommending unpopular items for those who don’t like popular items.", "title": "" }, { "docid": "93ed81d5244715aaaf402817aa674310", "text": "Automatically recognized terminology is widely used for various domain-specific texts processing tasks, such as machine translation, information retrieval or ontology construction. However, there is still no agreement on which methods are best suited for particular settings and, moreover, there is no reliable comparison of already developed methods. We believe that one of the main reasons is the lack of state-of-the-art methods implementations, which are usually non-trivial to recreate. In order to address these issues, we present ATR4S, an open-source software written in Scala that comprises more than 15 methods for automatic terminology recognition (ATR) and implements the whole pipeline from text document preprocessing, to term candidates collection, term candidates scoring, and finally, term candidates ranking. It is highly scalable, modular and configurable tool with support of automatic caching. We also compare 13 state-of-the-art methods on 7 open datasets by average precision and processing time. Experimental comparison reveals that no single method demonstrates best average precision for all datasets and that other available tools for ATR do not contain the best methods.", "title": "" }, { "docid": "4f73815cc6bbdfbacee732d8724a3f74", "text": "Networks can be considered as approximation schemes. Multilayer networks of the perceptron type can approximate arbitrarily well continuous functions (Cybenko 1988, 1989; Funahashi 1989; Stinchcombe and White 1989). We prove that networks derived from regularization theory and including Radial Basis Functions (Poggio and Girosi 1989), have a similar property. From the point of view of approximation theory, however, the property of approximating continuous functions arbitrarily well is not sufficient for characterizing good approximation schemes. More critical is the property ofbest approximation. The main result of this paper is that multilayer perceptron networks, of the type used in backpropagation, do not have the best approximation property. 
For regularization networks (in particular Radial Basis Function networks) we prove existence and uniqueness of best approximation.", "title": "" }, { "docid": "b01ca91d572ec4d59acc8b020b6c2408", "text": "In the paper, we review our work on heterogeneous III-V-on-silicon photonic components and circuits for applications in optical communication and sensing. We elaborate on the integration strategy and describe a broad range of devices realized on this platform covering a wavelength range from 850 nm to 3.85 μm.", "title": "" }, { "docid": "9da15e2851124d6ca1524ba28572f922", "text": "With the growth of mobile data application and the ultimate expectations of 5G technology, the need to expand the capacity of the wireless networks is inevitable. Massive MIMO technique is currently taking a major part of the ongoing research, and expected to be the key player in the new cellular technologies. This papers presents an overview of the major aspects related to massive MIMO design including, antenna array general design, configuration, and challenges, in addition to advanced beamforming techniques and channel modeling and estimation issues affecting the implementation of such systems.", "title": "" }, { "docid": "8848adb878c7219b5d67aced8f9e789c", "text": "In this short review of fish gill morphology we cover some basic gross anatomy as well as in some more detail the microscopic anatomy of the branchial epithelia from representatives of the major extant groups of fishes (Agnathans, Elasmobranchs, and Teleosts). The agnathan hagfishes have primitive gill pouches, while the lampreys have arch-like gills similar to the higher fishes. In the lampreys and elasmobranchs, the gill filaments are supported by a complete interbranchial septum and water exits via external branchial slits or pores. In contrast, the teleost interbranchial septum is much reduced, leaving the ends of the filaments unattached, and the multiple gill openings are replaced by the single caudal opening of the operculum. The basic functional unit of the gill is the filament, which supports rows of plate-like lamellae. The lamellae are designed for gas exchange with a large surface area and a thin epithelium surrounding a well-vascularized core of pillar cell capillaries. The lamellae are positioned for the blood flow to be counter-current to the water flow over the gills. Despite marked differences in the gross anatomy of the gill among the various groups, the cellular constituents of the epithelium are remarkably similar. The lamellar gas-exchange surface is covered by squamous pavement cells, while large, mitochondria-rich, ionocytes and mucocytes are found in greatest frequency in the filament epithelium. Demands for ionoregulation can often upset this balance. There has been much study of the structure and function of the branchial mitochondria-rich cells. These cells are generally characterized by a high mitochondrial density and an amplification of the basolateral membrane through folding or the presence of an intracellular tubular system. Morphological subtypes of MRCs as well as some methods of MRC detection are discussed.", "title": "" }, { "docid": "403310053251e81cdad10addedb64c87", "text": "Many types of data are best analyzed by fitting a curve using nonlinear regression, and computer programs that perform these calculations are readily available. Like every scientific technique, however, a nonlinear regression program can produce misleading results when used inappropriately. 
This article reviews the use of nonlinear regression in a practical and nonmathematical manner to answer the following questions: Why is nonlinear regression superior to linear regression of transformed data? How does nonlinear regression differ from polynomial regression and cubic spline? How do nonlinear regression programs work? What choices must an investigator make before performing nonlinear regression? What do the final results mean? How can two sets of data or two fits to one set of data be compared? What problems can cause the results to be wrong? This review is designed to demystify nonlinear regression so that both its power and its limitations will be appreciated.", "title": "" }, { "docid": "b1fbaaf4684238e61bf9d3706558f9fa", "text": "Recommender systems increasingly use contextual and demographical data as a basis for recommendations. Users, however, often feel uncomfortable providing such information. In a privacy-minded design of recommenders, users are free to decide for themselves what data they want to disclose about themselves. But this decision is often complex and burdensome, because the consequences of disclosing personal information are uncertain or even unknown. Although a number of researchers have tried to analyze and facilitate such information disclosure decisions, their research results are fragmented, and they often do not hold up well across studies. This article describes a unified approach to privacy decision research that describes the cognitive processes involved in users’ “privacy calculus” in terms of system-related perceptions and experiences that act as mediating factors to information disclosure. The approach is applied in an online experiment with 493 participants using a mock-up of a context-aware recommender system. Analyzing the results with a structural linear model, we demonstrate that personal privacy concerns and disclosure justification messages affect the perception of and experience with a system, which in turn drive information disclosure decisions. Overall, disclosure justification messages do not increase disclosure. Although they are perceived to be valuable, they decrease users’ trust and satisfaction. Another result is that manipulating the order of the requests increases the disclosure of items requested early but decreases the disclosure of items requested later.", "title": "" } ]
scidocsrr
3b5db81a98f1a89e43ef70c699c493ad
High-Speed Generator and Multilevel Converter for Energy Recovery in Automotive Systems
[ { "docid": "f405c62d932eec05c55855eb13ba804c", "text": "Multilevel converters have been under research and development for more than three decades and have found successful industrial application. However, this is still a technology under development, and many new contributions and new commercial topologies have been reported in the last few years. The aim of this paper is to group and review these recent contributions, in order to establish the current state of the art and trends of the technology, to provide readers with a comprehensive and insightful review of where multilevel converter technology stands and is heading. This paper first presents a brief overview of well-established multilevel converters strongly oriented to their current state in industrial applications to then center the discussion on the new converters that have made their way into the industry. In addition, new promising topologies are discussed. Recent advances made in modulation and control of multilevel converters are also addressed. A great part of this paper is devoted to show nontraditional applications powered by multilevel converters and how multilevel converters are becoming an enabling technology in many industrial sectors. Finally, some future trends and challenges in the further development of this technology are discussed to motivate future contributions that address open problems and explore new possibilities.", "title": "" } ]
[ { "docid": "2e2ee64b0e2d18fff783d67fade3f9b3", "text": "This paper discusses some aspects of selecting and testing random and pseudorandom number generators. The outputs of such generators may be used in many cryptographic apphcations, such as the generation of key material. Generators suitable for use in cryptographic applications may need to meet stronger requirements than for other applications. In particular, their outputs must be unpredictable in the absence of knowledge of the inputs. Some criteria for characterizing and selecting appropriate generators are discussed in this document. The subject of statistical testing and its relation to cryptanalysis is also discussed, and some recommended statistical tests are provided. These tests may be useful as a first step in determining whether or not a generator is suitable for a particular cryptographic application. However, no set of statistical tests can absolutely certify a generator as appropriate for usage in a particular application, i.e., statistical testing cannot serve as a substitute for cryptanalysis. The design and cryptanalysis of generators is outside the scope of this paper.", "title": "" }, { "docid": "9cc524d3b55c9522c6e9e89b2caeb787", "text": "Operative and nonoperative treatment methods of burst fractures were compared regarding canal remodeling. The entire series consisted of 18 patients, with seven in the operative treatment group and 11 in the nonoperative treatment group. All fractures were studied with computed tomography (CT) at the postoperative (operative treatment group) or postinjury (nonoperative treatment group) and the latest follow-up. All patients were followed up for > or = 18 months. There was no statistical difference between postoperative and postinjury canal areas (p = 0.0859). However, a significant difference was found between the rates of remodeling (p = 0.0059). Although spinal canal remodeling occurred in both groups, the resorption of retropulsed fragments was less favorable in nonoperative treatment group.", "title": "" }, { "docid": "10f3303a6f3e910841f7dcfbced968f7", "text": "The field of robotics has matured using artificial intelligence and machine learning such that intelligent robots are being developed in the form of autonomous vehicles. The anticipated widespread use of intelligent robots and their potential to do harm has raised interest in their security. This research evaluates a cyberattack on the machine learning policy of an autonomous vehicle by designing and attacking a robotic vehicle operating in a dynamic environment. The primary contribution of this research is an initial assessment of effective manipulation through an indirect attack on a robotic vehicle using the Q learning algorithm for real-time routing control. Secondly, the research highlights the effectiveness of this attack along with relevant artifact issues.", "title": "" }, { "docid": "34268a4c51c914c64b38ac2e8fad768a", "text": "User Experience of On-Screen Interaction Techniques: An Experimental Investigation of Clicking, Sliding, Zooming, Hovering, Dragging, and Flipping S. 
Shyam Sundar a b , Saraswathi Bellur c , Jeeyun Oh d , Qian Xu e & Haiyan Jia a a The Pennsylvania State University b Sungkyunkwan University , Korea c University of Connecticut d Robert Morris University e Elon University Accepted author version posted online: 29 Mar 2013.Published online: 27 Dec 2013.", "title": "" }, { "docid": "89eee86640807e11fa02d0de4862b3a5", "text": "The evolving fifth generation (5G) cellular wireless networks are envisioned to overcome the fundamental challenges of existing cellular networks, for example, higher data rates, excellent end-to-end performance, and user-coverage in hot-spots and crowded areas with lower latency, energy consumption, and cost per information transfer. To address these challenges, 5G systems will adopt a multi-tier architecture consisting of macrocells, different types of licensed small cells, relays, and device-to-device (D2D) networks to serve users with different quality-of-service (QoS) requirements in a spectrum and energy-efficient manner. Starting with the visions and requirements of 5G multi-tier networks, this article outlines the challenges of interference management (e.g. power control, cell association) in these networks with shared spectrum access (i.e. when the different network tiers share the same licensed spectrum). It is argued that the existing interference management schemes will not be able to address the interference management problem in prioritized 5G multi-tier networks where users in different tiers have different priorities for channel access. In this context a survey and qualitative comparison of the existing cell association and power control schemes is provided to demonstrate their limitations for interference management in 5G networks. Open challenges are highlighted and guidelines are provided to modify the existing schemes in order to overcome these limitations and make them suitable for the emerging 5G systems.", "title": "" }, { "docid": "36787667e41db8d9c164e39a89f0c533", "text": "This paper presents an improvement of the well-known conventional three-phase diode bridge rectifier with dc output capacitor. The proposed circuit increases the power factor (PF) at the ac input and reduces the ripple current stress on the smoothing capacitor. The basic concept is the arrangement of an active voltage source between the output of the diode bridge and the smoothing capacitor which is controlled in a way that it emulates an ideal smoothing inductor. With this the input currents of the diode bridge which usually show high peak amplitudes are converted into a 120/spl deg/ rectangular shape which ideally results in a total PF of 0.955. The active voltage source mentioned before is realized by a low-voltage switch-mode converter stage of small power rating as compared to the output power of the rectifier. Starting with a brief discussion of basic three-phase rectifier techniques and of the drawbacks of three-phase diode bridge rectifiers with capacitive smoothing, the concept of the proposed active smoothing is described and the stationary operation is analyzed. Furthermore, control concepts as well as design considerations and analyses of the dynamic systems behavior are given. Finally, measurements taken from a laboratory model are presented.", "title": "" }, { "docid": "5f1474036533a4583520ea2526d35daf", "text": "We motivate the integration of programming by example and natural language programming by developing a system for specifying programs for simple text editing operations based on regular expressions. 
The programs are described with unconstrained natural language instructions, and providing one or more examples of input/output. We show that natural language allows the system to deduce the correct program much more often and much faster than is possible with the input/output example(s) alone, showing that natural language programming and programming by example can be combined in a way that overcomes the ambiguities that both methods suffer from individually and, at the same time, provides a more natural interface to the user.", "title": "" }, { "docid": "f2026d9d827c088711875acc56b12b70", "text": "The goal of the study is to formalize the concept of viral marketing (VM) as a close derivative of contagion models from epidemiology. The study examines in detail the two common mathematical models of epidemic spread and their marketing implications. The SIR and SEIAR models of infectious disease spread are examined in detail. From this analysis of the epidemiological foundations along with a review of relevant marketing literature, a marketing model of VM is developed. This study demonstrates the key elements that define viral marketing as a formal marketing concept and the distinctive mechanical features that differ from conventional marketing.", "title": "" }, { "docid": "bf04d5a87fbac1157261fac7652b9177", "text": "We consider the partitioning of a society into coalitions in purely hedonic settings; i.e., where each player's payo is completely determined by the identity of other members of her coalition. We rst discuss how hedonic and non-hedonic settings di er and some su cient conditions for the existence of core stable coalition partitions in hedonic settings. We then focus on a weaker stability condition: individual stability, where no player can bene t from moving to another coalition while not hurting the members of that new coalition. We show that if coalitions can be ordered according to some characteristic over which players have single-peaked preferences, or where players have symmetric and additively separable preferences, then there exists an individually stable coalition partition. Examples show that without these conditions, individually stable coalition partitions may not exist. We also discuss some other stability concepts, and the incompatibility of stability with other normative properties.", "title": "" }, { "docid": "0ea451a2030603899d9ad95649b73908", "text": "Distributed artificial intelligence (DAI) is a subfield of artificial intelligence that deals with interactions of intelligent agents. Precisely, DAI attempts to construct intelligent agents that make decisions that allow them to achieve their goals in a world populated by other intelligent agents with their own goals. This paper discusses major concepts used in DAI today. To do this, a taxonomy of DAI is presented, based on the social abilities of an individual agent, the organization of agents, and the dynamics of this organization through time. Social abilities are characterized by the reasoning about other agents and the assessment of a distributed situation. Organization depends on the degree of cooperation and on the paradigm of communication. Finally, the dynamics of organization is characterized by the global coherence of the group and the coordination between agents. A reasonably representative review of recent work done in DAI field is also supplied in order to provide a better appreciation of this vibrant AI field. 
The paper concludes with important issues in which further research in DAI is needed.", "title": "" }, { "docid": "71100c87c7ce1fd246f7924ff8690583", "text": "Predicting acute hypotensive episode (AHE) in patients in emergency rooms and in intensive care units (ICU) is a difficult challenge. As it is well accepted that physiological compensatory adaptations to circulatory shock involve blood flow redistribution and increase in sympathetic stimulation, we recently investigated if galvanic skin response (GSR) or electro-dermal activity (EDA), a measure of sympathetic stimulation, could give information about the impending danger of acute hypotensive episode or circulatory collapse (Subramanya and Mudol, 2012). In this current study, a low-cost wearable device was developed and tested to help progress towards a system for predicting blood pressure (BP) and cardiovascular dynamics. In a pilot study, we examined hypotheses about the relation between GSR values and four BP indexes (systolic BP, diastolic BP, mean arterial pressure and pulse pressure) in apparently healthy human volunteers before and immediately after treadmill exercise. All four BP indexes had significant relationship with GSR, with pulse pressure possibly the strongest predictor of variations in the GSR and vice-versa. This paper opens up opportunities for future investigations to evaluate the utility of continuous monitoring of GSR to forecast imminent cardiovascular collapse, AHE and shock, and could have far-reaching implications for ICU, trauma and critical care management.", "title": "" }, { "docid": "7f110e4769b996de13afe63962bcf2d2", "text": "Versu is a text-based simulationist interactive drama. Because it uses autonomous agents, the drama is highly replayable: you can play the same story from multiple perspectives, or assign different characters to the various roles. The architecture relies on the notion of a social practice to achieve coordination between the independent autonomous agents. A social practice describes a recurring social situation, and is a successor to the Schankian script. Social practices are implemented as reactive joint plans, providing affordances to the agents who participate in them. The practices never control the agents directly; they merely provide suggestions. It is always the individual agent who decides what to do, using utility-based reactive action selection.", "title": "" }, { "docid": "eb0a907ad08990b0fe5e2374079cf395", "text": "We examine whether tolerance for failure spurs corporate innovation based on a sample of venture capital (VC) backed IPO firms. We develop a novel measure of VC investors’ failure tolerance by examining their tendency to continue investing in a venture conditional on the venture not meeting milestones. We find that IPO firms backed by more failure-tolerant VC investors are significantly more innovative. A rich set of empirical tests shows that this result is not driven by the endogenous matching between failure-tolerant VCs and startups with high exante innovation potentials. Further, we find that the marginal impact of VC failure tolerance on startup innovation varies significantly in the cross section. Being financed by a failure-tolerant VC is much more important for ventures that are subject to high failure risk. Finally, we examine the determinants of the cross-sectional heterogeneity in VC failure tolerance. We find that both capital constraints and career concerns can negatively distort VC failure tolerance. 
We also show that younger and less experienced VCs are more exposed to these distortions, making them less failure tolerant than more established VCs.", "title": "" }, { "docid": "643c03c450933c426c0b38e8b3345a8a", "text": "Data mining is the discovery of interesting, unexpected or valuable structures in large datasets. As such, it has two rather different aspects. One of these concerns large-scale, 'global' structures, and the aim is to model the shapes, or features of the shapes, of distributions. The other concerns small-scale, 'local' structures, and the aim is to detect these anomalies and decide if they are real or chance occurrences. In the context of signal detection in the pharmaceutical sector, most interest lies in the second of the above two aspects; however, signal detection occurs relative to an assumed background model, therefore, some discussion of the first aspect is also necessary. This paper gives a lightning overview of data mining and its relation to statistics, with particular emphasis on tools for the detection of adverse drug reactions.", "title": "" }, { "docid": "be1c48183fbba677f9dd3d262b70b9b8", "text": "The goal of our research is to investigate whether a Cognitive Tutor can be made more effective by extending it to help students acquire help-seeking skills. We present a preliminary model of help-seeking behavior that will provide the basis for a Help-Seeking Tutor Agent. The model, implemented by 57 production rules, captures both productive and unproductive help-seeking behavior. As a first test of the model’s efficacy, we used it off-line to evaluate students’ help-seeking behavior in an existing data set of student-tutor interactions, We found that 72% of all student actions represented unproductive help-seeking behavior. Consistent with some of our earlier work (Aleven & Koedinger, 2000) we found a proliferation of hint abuse (e.g., using hints to find answers rather than trying to understand). We also found that students frequently avoided using help when it was likely to be of benefit and often acted in a quick, possibly undeliberate manner. Students’ help-seeking behavior accounted for as much variance in their learning gains as their performance at the cognitive level (i.e., the errors that they made with the tutor). These findings indicate that the help-seeking model needs to be adjusted, but they also underscore the importance of the educational need that the Help-Seeking Tutor Agent aims to address.", "title": "" }, { "docid": "55e977381cf25444be499ec0c320cef9", "text": "Embedding network data into a low-dimensional vector space has shown promising performance for many real-world applications, such as node classification and entity retrieval. However, most existing methods focused only on leveraging network structure. For social networks, besides the network structure, there also exists rich information about social actors, such as user profiles of friendship networks and textual content of citation networks. These rich attribute information of social actors reveal the homophily effect, exerting huge impacts on the formation of social networks. In this paper, we explore the rich evidence source of attributes in social networks to improve network embedding. We propose a generic Attributed Social Network Embedding framework (ASNE), which learns representations for social actors (i.e., nodes) by preserving both the structural proximity and attribute proximity. 
While the structural proximity captures the global network structure, the attribute proximity accounts for the homophily effect. To justify our proposal, we conduct extensive experiments on four real-world social networks. Compared to the state-of-the-art network embedding approaches, ASNE can learn more informative representations, achieving substantial gains on the tasks of link prediction and node classification. Specifically, ASNE significantly outperforms node2vec with an 8.2 percent relative improvement on the link prediction task, and a 12.7 percent gain on the node classification task.", "title": "" }, { "docid": "09710d5e583ac83c2279d8fab48abe8d", "text": "This paper describes the upgrading process of the Multilingual Central Repository (MCR). The new MCR uses WordNet 3.0 as Interlingual-Index (ILI). Now, the current version of the MCR integrates in the same EuroWordNet framework wordnets from five different languages: English, Spanish, Catalan, Basque and Galician. In order to provide ontological coherence to all the integrated wordnets, the MCR has also been enriched with a disparate set of ontologies: Base Concepts, Top Ontology, WordNet Domains and Suggested Upper Merged Ontology. We also suggest a novel approach for improving some of the semantic resources integrated in the MCR, including a semiautomatic method to propagate domain information. The whole content of the MCR is freely available.", "title": "" }, { "docid": "1deeae749259ff732ad3206dc4a7e621", "text": "In traditional active learning, there is only one labeler that always returns the ground truth of queried labels. However, in many applications, multiple labelers are available to offer diverse qualities of labeling with different costs. In this paper, we perform active selection on both instances and labelers, aiming to improve the classification model most with the lowest cost. While the cost of a labeler is proportional to its overall labeling quality, we also observe that different labelers usually have diverse expertise, and thus it is likely that labelers with a low overall quality can provide accurate labels on some specific instances. Based on this fact, we propose a novel active selection criterion to evaluate the cost-effectiveness of instance-labeler pairs, which ensures that the selected instance is helpful for improving the classification model, and meanwhile the selected labeler can provide an accurate label for the instance with a relative low cost. Experiments on both UCI and real crowdsourcing data sets demonstrate the superiority of our proposed approach on selecting cost-effective queries.", "title": "" }, { "docid": "6d5bb9f895461b3bd7ee82041c3db6aa", "text": "Respondents at an Internet site completed over 600,000 tasks between October 1998 and April 2000 measuring attitudes toward and stereotypes of social groups. Their responses demonstrated, on average, implicit preference for White over Black and young over old and stereotypic associations linking male terms with science and career and female terms with liberal arts and family. The main purpose was to provide a demonstration site at which respondents could experience their implicit attitudes and stereotypes toward social groups. 
Nevertheless, the data collected are rich in information regarding the operation of attitudes and stereotypes, most notably the strength of implicit attitudes, the association and dissociation between implicit and explicit attitudes, and the effects of group membership on attitudes and stereotypes.", "title": "" }, { "docid": "85a411c07e88e9e3ff5a70fbc49a27f5", "text": "In all known congenital imprinting disorders an association with aberrant methylation or mutations at specific loci was well established. However, several patients with transient neonatal diabetes mellitus (TNDM), Silver-Russell syndrome (SRS) and Beckwith-Wiedemann syndrome (BWS) exhibiting multilocus hypomethylation (MLH) have meanwhile been described. Whereas TNDM patients with MLH show clinical symptoms different from carriers with isolated 6q24 aberrations, MLH carriers diagnosed as BWS or SRS present only the syndrome-specific features. Interestingly, SRS and BWS patients with nearly identical MLH patterns in leukocytes have been identified. We now report on the molecular findings in DNA in three SRS patients with hypomethylation of both 11p15 imprinted control regions (ICRs) in leukocytes. One patient was a monozygotic (MZ) twin, another was a triplet. While the hypomethylation affected both oppositely imprinted 11p15 ICRs in leukocytes, in buccal swab DNA only the ICR1 hypomethylation was visible in two of our patients. In the non-affected MZ twin of one of these patients, aberrant methylation was also present in leukocytes but neither in buccal swab DNA nor in skin fibroblasts. Despite mutation screening of several factors involved in establishment and maintenance of methylation marks including ZFP57, MBD3, DNMT1 and DNMT3L the molecular clue for the ICR1/ICR2 hypomethylation in our patients remained unclear. Furthermore, the reason for the development of the specific SRS phenotype is not obvious. In conclusion, our data reflect the broad range of epimutations in SRS and illustrate that an extensive molecular and clinical characterization of patients is necessary.", "title": "" } ]
scidocsrr
3e8ac05733f4084038a8afa7ac032498
Seamless Outdoors-Indoors Localization Solutions on Smartphones: Implementation and Challenges
[ { "docid": "58d8e3bd39fa470d1dfa321aeba53106", "text": "There are over 1.2 million Australians registered as having vision impairment. In most cases, vision impairment severely affects the mobility and orientation of the person, resulting in loss of independence and feelings of isolation. GPS technology and its applications have now become omnipresent and are used daily to improve and facilitate the lives of many. Although a number of products specifically designed for the Blind and Vision Impaired (BVI) and relying on GPS technology have been launched, this domain is still a niche and ongoing R&D is needed to bring all the benefits of GPS in terms of information and mobility to the BVI. The limitations of GPS indoors and in urban canyons have led to the development of new systems and signals that bridge the gap and provide positioning in those environments. Although still in their infancy, there is no doubt indoor positioning technologies will one day become as pervasive as GPS. It is therefore important to design those technologies with the BVI in mind, to make them accessible from scratch. This paper will present an indoor positioning system that has been designed in that way, examining the requirements of the BVI in terms of accuracy, reliability and interface design. The system runs locally on a mid-range smartphone and relies at its core on a Kalman filter that fuses the information of all the sensors available on the phone (Wi-Fi chipset, accelerometers and magnetic field sensor). Each part of the system is tested separately as well as the final solution quality.", "title": "" } ]
[ { "docid": "ddca576f0ceea86dab6b85281e359f3a", "text": "Fingerprint recognition plays an important role in many commercial applications and is used by millions of people every day, e.g. for unlocking mobile phones. Fingerprint image segmentation is typically the first processing step of most fingerprint algorithms and it divides an image into foreground, the region of interest, and background. Two types of error can occur during this step which both have a negative impact on the recognition performance: 'true' foreground can be labeled as background and features like minutiae can be lost, or conversely 'true' background can be misclassified as foreground and spurious features can be introduced. The contribution of this paper is threefold: firstly, we propose a novel factorized directional bandpass (FDB) segmentation method for texture extraction based on the directional Hilbert transform of a Butterworth bandpass (DHBB) filter interwoven with soft-thresholding. Secondly, we provide a manually marked ground truth segmentation for 10560 images as an evaluation benchmark. Thirdly, we conduct a systematic performance comparison between the FDB method and four of the most often cited fingerprint segmentation algorithms showing that the FDB segmentation method clearly outperforms these four widely used methods. The benchmark and the implementation of the FDB method are made publicly available.", "title": "" }, { "docid": "3f18214511dbcedfbf42c0ae65409c8e", "text": "Humans use a remarkable set of strategies to manipulate objects in clutter. We pick up, push, slide, and sweep with our hands and arms to rearrange clutter surrounding our primary task. But our robots treat the world like the Tower of Hanoi — moving with pick-and-place actions and fearful to interact with it with anything but rigid grasps. This produces inefficient plans and is often inapplicable with heavy, large, or otherwise ungraspable objects. We introduce a framework for planning in clutter that uses a library of actions inspired by human strategies. The action library is derived analytically from the mechanics of pushing and is provably conservative. The framework reduces the problem to one of combinatorial search, and demonstrates planning times on the order of seconds. With the extra functionality, our planner succeeds where traditional grasp planners fail, and works under high uncertainty by utilizing the funneling effect of pushing. We demonstrate our results with experiments in simulation and on HERB, a robotic platform developed at the Personal Robotics Lab at Carnegie Mellon University.", "title": "" }, { "docid": "00bbd84c9d6abb7326a4a0fb1d5fd4d8", "text": "We present a visual exploration of the field of human–computer interaction (HCI) through the author and article metadata of four of its major conferences: the ACM conferences on Computer-Human Interaction (CHI), User Interface Software and Technology, and Advanced Visual Interfaces and the IEEE Symposium on Information Visualization. This article describes many global and local patterns we discovered in this data set, together with the exploration process that produced them. Some expected patterns emerged, such as that—like most social networks— coauthorship and citation networks exhibit a power-law degree distribution, with a few widely collaborating authors and highly cited articles. Also, the prestigious and long-established CHI conference has the highest impact (citations by the others). 
Unexpected insights included that the years when a given conference was most selective are not correlated with those that produced its most highly referenced articles and that influential authors have distinct patterns of collaboration. An interesting sidelight is that methods from the HCI field—exploratory data analysis by information visualization and direct-manipulation interaction—proved useful for this analysis. They allowed us to take an open-ended, exploratory approach, guided by the data itself. As we answered our original questions, new ones arose; as we confirmed patterns we expected, we discovered refinements, exceptions, and fascinating new ones.", "title": "" }, { "docid": "9ed3ec5936c2e4fd383618ab11b4a07e", "text": "With the large-scale adoption of GPS equipped mobile sensing devices, positional data generated by moving objects (e.g., vehicles, people, animals) are being easily collected. Such data are typically modeled as streams of spatio-temporal (x,y,t) points, called trajectories. In recent years trajectory management research has progressed significantly towards efficient storage and indexing techniques, as well as suitable knowledge discovery. These works focused on the geometric aspect of the raw mobility data. We are now witnessing a growing demand in several application sectors (e.g., from shipment tracking to geo-social networks) on understanding the semantic behavior of moving objects. Semantic behavior refers to the use of semantic abstractions of the raw mobility data, including not only geometric patterns but also knowledge extracted jointly from the mobility data and the underlying geographic and application domains information. The core contribution of this article lies in a semantic model and a computation and annotation platform for developing a semantic approach that progressively transforms the raw mobility data into semantic trajectories enriched with segmentations and annotations. We also analyze a number of experiments we did with semantic trajectories in different domains.", "title": "" }, { "docid": "72aa5cb8cf9cf2aff5612352b01822e1", "text": "A hallmark of variational autoencoders (VAEs) for text processing is their combination of powerful encoder-decoder models, such as LSTMs, with simple latent distributions, typically multivariate Gaussians. These models pose a difficult optimization problem: there is an especially bad local optimum where the variational posterior always equals the prior and the model does not use the latent variable at all, a kind of “collapse” which is encouraged by the KL divergence term of the objective. In this work, we experiment with another choice of latent distribution, namely the von Mises-Fisher (vMF) distribution, which places mass on the surface of the unit hypersphere. With this choice of prior and posterior, the KL divergence term now only depends on the variance of the vMF distribution, giving us the ability to treat it as a fixed hyperparameter. We show that doing so not only averts the KL collapse, but consistently gives better likelihoods than Gaussians across a range of modeling conditions, including recurrent language modeling and bag-ofwords document modeling. 
An analysis of the properties of our vMF representations shows that they learn richer and more nuanced structures in their latent representations than their Gaussian counterparts.1", "title": "" }, { "docid": "a90be1b83ad475a50dcb82ae616d4f23", "text": "Historically, lower eyelid blepharoplasty has been a challenging surgery fraught with many potential complications, ranging from ocular irritation to full-blown lower eyelid malposition and a poor cosmetic outcome. The prevention of these complications requires a detailed knowledge of lower eyelid anatomy and a focused examination of the factors that may predispose to poor outcome. A thorough preoperative evaluation of lower eyelid skin, muscle, tone, laxity, fat prominence, tear trough deformity, and eyelid vector are critical for surgical planning. When these factors are analyzed appropriately, a natural and aesthetically pleasing outcome is more likely to occur. I have found that performing lower eyelid blepharoplasty in a bilamellar fashion (transconjunctivally to address fat prominence and transcutaneously for skin excision only), along with integrating contemporary concepts of volume preservation/augmentation, canthal eyelid support, and eyelid vector analysis, has been an integral part of successful surgery. In addition, this approach has significantly increased my confidence in attaining more consistent and reproducible results.", "title": "" }, { "docid": "8640b392c12df98ecb659a159012c183", "text": "The quest to increase lean body mass is widely pursued by those who lift weights. Research is lacking, however, as to the best approach for maximizing exercise-induced muscle growth. Bodybuilders generally train with moderate loads and fairly short rest intervals that induce high amounts of metabolic stress. Powerlifters, on the other hand, routinely train with high-intensity loads and lengthy rest periods between sets. Although both groups are known to display impressive muscularity, it is not clear which method is superior for hypertrophic gains. It has been shown that many factors mediate the hypertrophic process and that mechanical tension, muscle damage, and metabolic stress all can play a role in exercise-induced muscle growth. Therefore, the purpose of this paper is twofold: (a) to extensively review the literature as to the mechanisms of muscle hypertrophy and their application to exercise training and (b) to draw conclusions from the research as to the optimal protocol for maximizing muscle growth.", "title": "" }, { "docid": "f3f441c2cf1224746c0bfbb6ce02706d", "text": "This paper addresses the task of finegrained opinion extraction – the identification of opinion-related entities: the opinion expressions, the opinion holders, and the targets of the opinions, and the relations between opinion expressions and their targets and holders. Most existing approaches tackle the extraction of opinion entities and opinion relations in a pipelined manner, where the interdependencies among different extraction stages are not captured. We propose a joint inference model that leverages knowledge from predictors that optimize subtasks of opinion extraction, and seeks a globally optimal solution. 
Experimental results demonstrate that our joint inference approach significantly outperforms traditional pipeline methods and baselines that tackle subtasks in isolation for the problem of opinion extraction.", "title": "" }, { "docid": "e022bcb002e2c851e697972a49c3e417", "text": "A polymer membrane-coated palladium (Pd) nanoparticle (NP)/single-layer graphene (SLG) hybrid sensor was fabricated for highly sensitive hydrogen gas (H2) sensing with gas selectivity. Pd NPs were deposited on SLG via the galvanic displacement reaction between graphene-buffered copper (Cu) and Pd ion. During the galvanic displacement reaction, graphene was used as a buffer layer, which transports electrons from Cu for Pd to nucleate on the SLG surface. The deposited Pd NPs on the SLG surface were well-distributed with high uniformity and low defects. The Pd NP/SLG hybrid was then coated with polymer membrane layer for the selective filtration of H2. Because of the selective H2 filtration effect of the polymer membrane layer, the sensor had no responses to methane, carbon monoxide, or nitrogen dioxide gas. On the contrary, the PMMA/Pd NP/SLG hybrid sensor exhibited a good response to exposure to 2% H2: on average, 66.37% response within 1.81 min and recovery within 5.52 min. In addition, reliable and repeatable sensing behaviors were obtained when the sensor was exposed to different H2 concentrations ranging from 0.025 to 2%.", "title": "" }, { "docid": "d0a1237ceb00bb6f7c02d0374d157cc6", "text": "Hybrid energy storage system (HESS) composed of lithium-ion battery and supercapacitors has been recognized as one of the most promising solutions to face against the high cost, low power density and short cycle life of the battery-only energy storage system, which is the major headache hindering the further penetration of electric vehicles. In this work, the HESS sizing and energy management problem of an electric race car is investigated as a case study to improve the driving mileage and battery cycle life performance. Compared with the existing research, the distinctive features of this work are: (1) A dynamic model and a degradation model of the battery are employed to describe the dynamic behavior and to predict the cycle life of the battery more precisely; (2) Considering the fact that the design and control problems are coupled in most cases, in order to achieve a global optimal design solution and an implementable real-time energy management system, a Bi-level multi-objective sizing and control framework based on non-dominated sorting genetic algorithm-II and fuzzy logic control (FLC) is proposed to size the HESS and to optimize the membership functions of a FLC based EMS at the same time; (3) In order to improve the optimization efficiency, a vectorized fuzzy inference system which allows large scale of fuzzy logic controllers operating in parallel is devised. At last, the Pareto optimal solutions of different HESSs are obtained and compared to show the achieved enhancements of the proposed Bi-level optimal sizing and energy management framework.", "title": "" }, { "docid": "47e84cacb4db05a30bedfc0731dd2717", "text": "Although short-range wireless communication explicitly targets local and regional applications, range continues to be a highly important issue. The range directly depends on the so-called link budget, which can be increased by the choice of modulation and coding schemes. The recent transceiver generation in particular comes with extensive and flexible support for software-defined radio (SDR). 
The SX127× family from Semtech Corp. is a member of this device class and promises significant benefits for range, robust performance, and battery lifetime compared to competing technologies. This contribution gives a short overview of the technologies to support Long Range (LoRa™) and the corresponding Layer 2 protocol (LoRaWAN™). It particularly describes the possibility to combine the Internet Protocol, i.e. IPv6, into LoRaWAN™, so that it can be directly integrated into a full-fledged Internet of Things (IoT). The proposed solution, which we name 6LoRaWAN, has been implemented and tested; results of the experiments are also shown in this paper.", "title": "" }, { "docid": "52920ca4cf0324353562c2b3a1e4454b", "text": "This paper presents the design of a compliant constant force output gripper mechanism. The function of constant force output is achieved by using the negative stiffness effect of a buckled fixed-guided beam. One advantage is that it can eliminate the needs of complicated force-displacement combined control algorithm. Details of the nonlinear design process have been demonstrated. ANSYS APDL and MATLAB are used to solve this nonlinear problem. The structural design of the gripper is performed based on the theoretical model. An experimental study is carried out to verify the theoretical model. A force sensor and a displacement sensor are used to test the performance of the constant force output in the experiments. Results shows that the gripper can provide 1.1 N near constant force output in 200 μm range.", "title": "" }, { "docid": "860da9e0f4161c5f6c2aa2f98c95fe58", "text": "Traditional secret sharing schemes involve complex computation. A visual secret sharing (VSS) scheme decodes the secret without computation, but each shadow is m times as big as the original. Probabilistic VSS solved the computation complexity and space complexity problems at once. In this paper we propose a probabilistic (2, n) scheme for binary images and a deterministic (n, n) scheme for grayscale images. Both use simple Boolean operations and both have no pixel expansion. The (2, n) scheme provides a better contrast and significantly smaller recognized areas than other methods. The (n, n) scheme gives an exact reconstruction. 2006 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "e86281a0b5126a5b1aba84f1f945eb42", "text": "We consider the following problem: There is a set of items (e.g., movies) and a group of agents (e.g., passengers on a plane); each agent has some intrinsic utility for each of the items. Our goal is to pick a set of K items that maximize the total derived utility of all the agents (i.e., in our example we are to pick K movies that we put on the plane’s entertainment system). However, the actual utility that an agent derives from a given item is only a fraction of its intrinsic one, and this fraction depends on how the agent ranks the item among available ones (in the movie example, the perceived value of a movie depends on the values of the other ones available). Extreme examples of our model include the setting where each agent derives utility from his or her most preferred item only (e.g., an agent will watch his or her favorite movie only), from his or her least preferred item only (e.g., the agent worries that he or she will be somehow forced to watch the worst available movie), or derives 1/K of the utility from each of the available items (e.g., the agent will pick a movie at random). 
Formally, to model this process of adjusting the derived utility, we use the mechanism of ordered weighted average (OWA) operators. Our contribution is twofold: First, we provide a formal specification of the model and provide concrete examples and settings where particular OWA operators are applicable. Second, we show that, in general, our problem is NP-hard but—depending on the OWA operator and the nature of agents’ utilities—there exist good, efficient approximation algorithms (sometimes even polynomial time approximation schemes). Interestingly, our results generalize and build upon those for proportional represented in multiwinner voting scenarios.", "title": "" }, { "docid": "ac0a6e663caa3cb8cdcb1a144561e624", "text": "A two-stage process is performed by human operator for cleaning windows. The first being the application of cleaning fluid, which is usually achieved by using a wetted applicator. The aim of this task being to cover the whole window area in the shortest possible time. This depends on two parameters: the size of the applicator and the path which the applicator travels without significantly overlapping previously wetted area. The second is the removal of cleaning fluid by a squeegee blade without spillage on to other areas of the facade or previously cleaned areas of glass. This is particularly difficult for example if the window is located on the roof of a building and cleaning is performed from inside by the human window cleaner.", "title": "" }, { "docid": "7a992410068d53b06fa1249373e513cc", "text": "In the last few years, new observations by CHANDRA and XMM have shown that Pulsar Wind Nebulae present a complex but similar inner feature, with the presence of axisymmetric rings and jets, which is generally referred as jet-torus structure. Due to the rapid growth in accuracy and robustness of numerical schemes for relativistic fluid-dynamics, it is now possible to model the flow and magnetic structure of the relativistic plasma responsible for the emission. Recent results have clarified how the jet and rings are formed, suggesting that the morphology is strongly related to the wind properties, so that, in principle, it is possible to infer the conditions in the unshocked wind from the nebular emission. I will review here the current status in the modeling of Pulsar Wind Nebulae, and, in particular, how numerical simulations have increased our understanding of the flow structure, observed emission, polarization and spectral properties. I will also point to possible future developments of the present models.", "title": "" }, { "docid": "8ea35692d8d57d321faf7b79be464f42", "text": "We introduce a novel approach to the problem of localizing objects in an image and estimating their fine-pose. Given exact CAD models, and a few real training images with aligned models, we propose to leverage the geometric information from CAD models and appearance information from real images to learn a model that can accurately estimate fine pose in real images. Specifically, we propose FPM, a fine pose parts-based model, that combines geometric information in the form of shared 3D parts in deformable part based models, and appearance information in the form of objectness to achieve both fast and accurate fine pose estimation. 
Our method significantly outperforms current state-of-the-art algorithms in both accuracy and speed.", "title": "" }, { "docid": "d9edc458cee2261b78214132c2e4b811", "text": "Since its discovery, the asymmetric Fano resonance has been a characteristic feature of interacting quantum systems. The shape of this resonance is distinctively different from that of conventional symmetric resonance curves. Recently, the Fano resonance has been found in plasmonic nanoparticles, photonic crystals, and electromagnetic metamaterials. The steep dispersion of the Fano resonance profile promises applications in sensors, lasing, switching, and nonlinear and slow-light devices.", "title": "" }, { "docid": "95c41c6f901685490c912a2630c04345", "text": "Network-based cloud computing is rapidly expanding as an alternative to conventional office-based computing. As cloud computing becomes more widespread, the energy consumption of the network and computing resources that underpin the cloud will grow. This is happening at a time when there is increasing attention being paid to the need to manage energy consumption across the entire information and communications technology (ICT) sector. While data center energy use has received much attention recently, there has been less attention paid to the energy consumption of the transmission and switching networks that are key to connecting users to the cloud. In this paper, we present an analysis of energy consumption in cloud computing. The analysis considers both public and private clouds, and includes energy consumption in switching and transmission as well as data processing and data storage. We show that energy consumption in transport and switching can be a significant percentage of total energy consumption in cloud computing. Cloud computing can enable more energy-efficient use of computing power, especially when the computing tasks are of low intensity or infrequent. However, under some circumstances cloud computing can consume more energy than conventional computing where each user performs all computing on their own personal computer (PC).", "title": "" }, { "docid": "adb6144e24291071f6c80e1190582f4e", "text": "Molecular docking is an important method in computational drug discovery. In large-scale virtual screening, millions of small drug-like molecules (chemical compounds) are compared against a designated target protein (receptor). Depending on the utilized docking algorithm for screening, this can take several weeks on conventional HPC systems. However, for certain applications including large-scale screening tasks for newly emerging infectious diseases, such high runtimes can be highly prohibitive. In this paper, we investigate how the massively parallel neo-heterogeneous architecture of Tianhe-2 Supercomputer consisting of thousands of nodes comprising CPUs and MIC coprocessors can efficiently be used for virtual screening tasks. Our proposed approach is based on a coordinated parallel framework called mD3DOCKxb in which CPUs collaborate with MICs to achieve high hardware utilization. mD3DOCKxb comprises a novel efficient communication engine for dynamic task scheduling and load balancing between nodes in order to reduce communication and I/O latency. This results in a highly scalable implementation with parallel efficiency of over 84% (strong scaling) when executing on 8,000 Tianhe-2 nodes comprising 192,000 CPU cores and 1,368,000 MIC cores.", "title": "" } ]
scidocsrr
99688d670ff4c80887083deed8bbe3c7
Cuckoo: A Computation Offloading Framework for Smartphones
[ { "docid": "8836fddeb496972fa38005fd2f8a4ed4", "text": "Energy harvesting has grown from long-established concepts into devices for powering ubiquitously deployed sensor networks and mobile electronics. Systems can scavenge power from human activity or derive limited energy from ambient heat, light, radio, or vibrations. Ongoing power management developments enable battery-powered electronics to live longer. Such advances include dynamic optimization of voltage and clock rate, hybrid analog-digital designs, and clever wake-up procedures that keep the electronics mostly inactive. Exploiting renewable energy resources in the device's environment, however, offers a power source limited by the device's physical survival rather than an adjunct energy store. Energy harvesting's true legacy dates to the water wheel and windmill, and credible approaches that scavenge energy from waste heat or vibration have been around for many decades. Nonetheless, the field has encountered renewed interest as low-power electronics, wireless standards, and miniaturization conspire to populate the world with sensor networks and mobile devices. This article presents a whirlwind survey through energy harvesting, spanning historic and current developments.", "title": "" } ]
[ { "docid": "e7475c3fd58141c496e8b430a2db24d3", "text": "This study concerns the quality of life of patients after stroke and how this is influenced by disablement and emotional factors. Ninety-six consecutive patients of mean age 71 years were followed for two years. At the end of that time 23% had experienced a recurrence of stroke and 27% were deceased. Of the survivors 76% were independent as regards activities of daily life (ADL) and lived in their own homes. Age as well as initial function were prognostically important factors. Patients who could participate in interviews marked on a visual analogue scale their evaluation of quality of life before and after stroke. Most of them had experienced a decrease and no improvement was observed during the two years. The deterioration was more pronounced in ADL dependent patients than among the independent. However, depression and anxiety were found to be of similar importance for quality of life as was physical disablement. These findings call for a greater emphasis on psychological support in the care of post stroke patients. The visual analogue scale can be a useful tool for detecting special needs.", "title": "" }, { "docid": "73160df16943b2f788750b8f7141d290", "text": "This letter proposes a double-sided printed bow-tie antenna for ultra wide band (UWB) applications. The frequency band considered is 3.1-10.6 GHz, which has been approved by the Federal Communications Commission as a commercial UWB band. The proposed antenna has a return loss less than 10 dB, phase linearity, and gain flatness over the above frequency band.", "title": "" }, { "docid": "7bbb9fed03444841fb66ec7f3820b9cb", "text": "In this paper, novel n- and p-type tunnel field-effect transistors (T-FETs) based on heterostructure Si/intrinsic-SiGe channel layer are proposed, which exhibit very small subthreshold swings, as well as low threshold voltages. The design parameters for improvement of the characteristics of the devices are studied and optimized based on the theoretical principles and simulation results. The proposed devices are designed to have extremely low off currents on the order of 1 fA/mum and engineered to exhibit substantially higher on currents compared with previously reported T-FET devices. Subthreshold swings as low as 15 mV/dec and threshold voltages as low as 0.13 V are achieved in these devices. Moreover, the T-FETs are designed to exhibit input and output characteristics compatible with CMOS-type digital-circuit applications. Using the proposed n- and p-type devices, the implementation of an inverter circuit based on T-FETs is reported. The performance of the T-FET-based inverter is compared with the 65-nm low-power CMOS-based inverter, and a gain of ~104 is achieved in static power consumption for the T-FET-based inverter with smaller gate delay.", "title": "" }, { "docid": "fa8e732d89f22704167be5f51f75ecb6", "text": "By studying trouble tickets from small enterprise networks, we conclude that their operators need detailed fault diagnosis. That is, the diagnostic system should be able to diagnose not only generic faults (e.g., performance-related) but also application specific faults (e.g., error codes). It should also identify culprits at a fine granularity such as a process or firewall configuration. We build a system, called NetMedic, that enables detailed diagnosis by harnessing the rich information exposed by modern operating systems and applications. 
It formulates detailed diagnosis as an inference problem that more faithfully captures the behaviors and interactions of fine-grained network components such as processes. The primary challenge in solving this problem is inferring when a component might be impacting another. Our solution is based on an intuitive technique that uses the joint behavior of two components in the past to estimate the likelihood of them impacting one another in the present. We find that our deployed prototype is effective at diagnosing faults that we inject in a live environment. The faulty component is correctly identified as the most likely culprit in 80% of the cases and is almost always in the list of top five culprits.", "title": "" }, { "docid": "2efe399d3896f78c6f152d98aa6d33a0", "text": "We consider the problem of verifying the identity of a distribution: Given the description of a distribution over a discrete support p = (p_1, p_2, ..., p_n), how many samples (independent draws) must one obtain from an unknown distribution, q, to distinguish, with high probability, the case that p = q from the case that the total variation distance (L_1 distance) ||p - q||_1 ≥ ϵ? We resolve this question, up to constant factors, on an instance by instance basis: there exist universal constants c, c' and a function f(p, ϵ) on distributions and error parameters, such that our tester distinguishes p = q from ||p - q||_1 ≥ ϵ using f(p, ϵ) samples with success probability > 2/3, but no tester can distinguish p = q from ||p - q||_1 ≥ c · ϵ when given c' · f(p, ϵ) samples. The function f(p, ϵ) is upper-bounded by a multiple of ||p||_{2/3}/ϵ^2, but is more complicated, and is significantly smaller in some cases when p has many small domain elements, or a single large one. This result significantly generalizes and tightens previous results: since distributions of support at most n have L_{2/3} norm bounded by √n, this result immediately shows that for such distributions, O(√n/ϵ^2) samples suffice, tightening the previous bound of O(√n polylog(n)/ϵ^4) for this class of distributions, and matching the (tight) known results for the case that p is the uniform distribution over support n. The analysis of our very simple testing algorithm involves several hairy inequalities. To facilitate this analysis, we give a complete characterization of a general class of inequalities, generalizing Cauchy-Schwarz, Hölder's inequality, and the monotonicity of L_p norms. Specifically, we characterize the set of sequences (a_i) = a_1, ..., a_r, (b_i) = b_1, ..., b_r, (c_i) = c_1, ..., c_r, for which it holds that for all finite sequences of positive numbers (x_j) = x_1, ... and (y_j) = y_1, ..., ∏_{i=1}^{r} (Σ_j x_j^{a_i} y_j^{b_i})^{c_i} ≥ 1. For example, the standard Cauchy-Schwarz inequality corresponds to the sequences a = (1, 0, 1/2), b = (0, 1, 1/2), c = (1/2, 1/2, -1). Our characterization is of a non-traditional nature in that it uses linear programming to compute a derivation that may otherwise have to be sought through trial and error, by hand.
We do not believe such a characterization has appeared in the literature, and hope its computational nature will be useful to others, and facilitate analyses like the one here.", "title": "" }, { "docid": "42e198a383c240beb0aea6116bfedeaa", "text": "Cognitive radio (CR) is considered as a key enabling technology for dynamic spectrum access to improve spectrum efficiency. Although the CR concept was invented with the core idea of realizing \"cognition\", the research on measuring CR cognition capabilities and intelligence is largely open. Deriving the intelligence capabilities of CR not only can lead to the development of new CR technologies, but also makes it possible to better configure the networks by integrating CRs with different intelligence capabilities in a more cost- efficient way. In this paper, for the first time, we propose a data-driven methodology to quantitatively analyze the intelligence factors of the CR with learning capabilities. The basic idea of our methodology is to run various tests on the CR in different spectrum environments under different settings and obtain various performance results on different metrics. Then we apply factor analysis on the performance results to identify and quantize the intelligence capabilities of the CR. More specifically, we present a case study consisting of sixty three different types of CRs. CRs are different in terms of learning-based dynamic spectrum access strategies, number of sensors, sensing accuracy, and processing speed. Based on our methodology, we analyze the intelligence capabilities of the CRs through extensive simulations. Four intelligence capabilities are identified for the CRs through our analysis, which comply with the nature of the tested algorithms.", "title": "" }, { "docid": "c2f46b2ed4e4306c26585f0aab275c66", "text": "We developed a crawler that can crawl YouTube and filter videos with only one person in front of the camera. This filter is implemented by extracting a number of frames from each video, and then using OpenCV’s (Itseez, 2015) Haar cascades to estimate how many faces are in each video. The crawler is supplied a search term which it then forwards to the YouTube Data API. The search terms provide a rough estimate of topics in the datasets, since they are directly connected to meta-data provided by the uploader. Figure 1 shows the distribution of the video topics used in CMU-MOSEI. The diversity of the video topics brings the following generalizability advantages: 1) the models trained on CMU-MOSEI will be generalizable across different topics and the notion of dataset domain is marginalized, 2) the diversity of topics bring variety of speakers, which allows the trained models to be generalizable across different speakers, and 3) the diversity in topics furthermore brings diversity in recording setups which allows the trained models to be generalizable across microphones and cameras with different intrinsic parameters. This diversity makes CMU-MOSEI a one-of-a-kind dataset for sentiment analysis and emotion recognition. Figure 1: The topics of videos in CMU-MOSEI, displayed as a Venn-style word cloud (Coppersmith and Kelly, 2014). Larger words indicate more videos from that topic.", "title": "" }, { "docid": "0ab220829ea6667549ca274eaedb2a9e", "text": "In a culture where collectivism is pervasive such as China, social norms can be one of the most powerful tools to influence consumers’ behavior. Individuals are driven to meet social expectations and fulfill social roles in collectivist cultures. 
Therefore, this study was designed to investigate how Chinese consumers’ concern with saving face affects sustainable fashion product purchase intention and how it also moderates consumers’ commitment to sustainable fashion. An empirical data set of 469 undergraduate students in Beijing and Shanghai was used to test our hypotheses. Results confirmed that face-saving is an important motivation for Chinese consumers’ purchase of sustainable fashion items, and it also attenuated the effect of general product value while enhancing the effect of products’ green value in predicting purchasing trends. The findings contribute to the knowledge of sustainable consumption in Confucian culture, and thus their managerial implications were also discussed.", "title": "" }, { "docid": "229605eada4ca390d17c5ff168c6199a", "text": "The sharing economy is a new online community that has important implications for offline behavior. This study evaluates whether engagement in the sharing economy is associated with an actor’s aversion to risk. Using a web-based survey and a field experiment, we apply an adaptation of Holt and Laury’s (2002) risk lottery game to a representative sample of sharing economy participants. We find that frequency of activity in the sharing economy predicts risk aversion, but only in interaction with satisfaction. While greater satisfaction with sharing economy websites is associated with a decrease in risk aversion, greater frequency of usage is associated with greater risk aversion. This analysis shows the limitations of a static perspective on how risk attitudes relate to participation in the sharing economy.", "title": "" }, { "docid": "165fcc5242321f6fed9c353cc12216ff", "text": "Fingerprint alteration represents one of the newest challenges in biometric identification. The aim of fingerprint mutilation is to destroy the structure of the papillary ridges so that the identity of the offender cannot be recognized by the biometric system. The problem has received little attention and there is a lack of a real world altered fingerprints database that would allow researchers to develop new algorithms and techniques for altered fingerprints detection. The major contribution of this paper is that it provides a new public database of synthetically altered fingerprints. Starting from the cases described in the literature, three methods for generating simulated altered fingerprints are proposed.", "title": "" }, { "docid": "eb3d82a85c8a9c3f815f0f62b6ae55cd", "text": "In this paper, we explore and compare multiple solutions to the problem of data augmentation in image classification. Previous work has demonstrated the effectiveness of data augmentation through simple techniques, such as cropping, rotating, and flipping input images. We artificially constrain our access to data to a small subset of the ImageNet dataset, and compare each data augmentation technique in turn. One of the more successful data augmentations strategies is the traditional transformations mentioned above. We also experiment with GANs to generate images of different styles. Finally, we propose a method to allow a neural net to learn augmentations that best improve the classifier, which we call neural augmentation. We discuss the successes and shortcomings of this method on various datasets.", "title": "" }, { "docid": "42ecca95c15cd1f92d6e5795f99b414a", "text": "Personalized tag recommendation systems recommend a list of tags to a user when he is about to annotate an item. 
It exploits the individual preference and the characteristic of the items. Tensor factorization techniques have been applied to many applications, such as tag recommendation. Models based on Tucker Decomposition can achieve good performance but require a lot of computation power. On the other hand, models based on Canonical Decomposition can run in linear time and are more feasible for online recommendation. In this paper, we propose a novel method for personalized tag recommendation, which can be considered as a nonlinear extension of Canonical Decomposition. Different from linear tensor factorization, we exploit Gaussian radial basis function to increase the model’s capacity. The experimental results show that our proposed method outperforms the state-of-the-art methods for tag recommendation on real datasets and perform well even with a small number of features, which verifies that our models can make better use of features.", "title": "" }, { "docid": "c539b8957e4c131318ef0a807326b353", "text": "A large body of research has shown spatial distortions in the perception of tactile distances on the skin. For example, perceived tactile distance is increased on sensitive compared to less sensitive skin regions, and larger for stimuli oriented along the medio-lateral axis than the proximo-distal axis of the limbs. In this study we aimed to investigate the spatial coherence of these distortions by reconstructing the internal geometry of tactile space using multidimensional scaling (MDS). Participants made verbal estimates of the perceived distance between 2 touches applied sequentially to locations on their left hand. In Experiment 1 we constructed perceptual maps of the dorsum of the left hand, which showed a good fit to the actual configuration of stimulus locations. Critically, these maps also showed clear evidence of spatial distortion, being stretched along the medio-lateral hand axis. Experiment 2 replicated this result and showed that no such distortion is apparent on the palmar surface of the hand. These results show that distortions in perceived tactile distance can be characterized by geometrically simple and coherent deformations of tactile space. We suggest that the internal geometry of tactile space is shaped by the geometry of receptive fields in somatosensory cortex. (PsycINFO Database Record", "title": "" }, { "docid": "ef64da59880750872e056822c17ab00e", "text": "The efficient cooling is very important for a light emitting diode (LED) module because both the energy efficiency and lifespan decrease significantly as the junction temperature increases. The fin heat sink is commonly used for cooling LED modules with natural convection conditions. This work proposed a new design method for high-power LED lamp cooling by combining plate fins with pin fins and oblique fins. Two new types of fin heat sinks called the pin-plate fin heat sink (PPF) and the oblique-plate fin heat sink (OPF) were designed and their heat dissipation performances were compared with three conventional fin heat sinks, the plate fin heat sink, the pin fin heat sink and the oblique fin heat sink. The LED module was assumed to be operated under 1 atmospheric pressure and its heat input is set to 4 watts. The PPF and OPF models show lower junction temperatures by about 6°C ~ 12°C than those of three conventional models. The PPF with 8 plate fins inside (PPF-8) and the OPF with 7 plate fins inside (OPF-7) showed the best thermal performance among all the PPF and OPF designs, respectively. 
The total thermal resistances of the PPF-8 and OPF-7 models decreased by 9.0% ~ 15.6% compared to those of three conventional models.", "title": "" }, { "docid": "3ef7fab93c345317209e3a6466fc8cce", "text": "Many commercial video players rely on bitrate adaptation algorithm to adapt video bitrate to dynamic network condition. To achieve a high quality of experience, bitrate adaptation algorithm is required to strike a balance between response agility and video quality stability. Existing online algorithms select bitrates according to instantaneous throughput and buffer occupancy, achieving an agile reaction to changes but inducing video quality fluctuations due to the high dynamic of reference signals. In this paper, the idea of multi-step prediction is proposed to guide a better tradeoff, and the bitrate selection is formulated as a predictive control problem. With it, a generalized predictive control based approach is developed to calculate the optimal bitrate by minimizing the cost function over a moving look-ahead horizon. Finally, the proposed algorithm is implemented on a reference video player with performance evaluations conducted using realistic bandwidth traces. Experimental results show that the multi-step predictive control adaptation algorithm can achieve zero rebuffer event and 63.3% of reduction in bitrate switch.", "title": "" }, { "docid": "d70946cd43b73be4c68d1858bebc91fe", "text": "A truly autonomous mobile robot have to solve the SLAM problem (i.e. simultaneous map building and pose estimation) in order to navigate in an unknown environment. Unfortunately, a universal solution for the problem hasn't been proposed yet. The tinySLAM algorithm that has a compact and clear code was designed to solve SLAM in an indoor environment using a noisy laser scanner. This paper introduces the vinySLAM method that enhances tinySLAM with the Transferable Belief Model to improve its robustness and accuracy. Proposed enhancements affect scan matching and occupancy tracking keeping simplicity and clearness of the original code. The evaluation on publicly available datasets shows significant robustness and accuracy improvements.", "title": "" }, { "docid": "b0bb9c4bcf666dca927d4f747bfb1ca1", "text": "Remote monitoring of animal behaviour in the environment can assist in managing both the animal and its environmental impact. GPS collars which record animal locations with high temporal frequency allow researchers to monitor both animal behaviour and interactions with the environment. These ground-based sensors can be combined with remotely-sensed satellite images to understand animal-landscape interactions. The key to combining these technologies is communication methods such as wireless sensor networks (WSNs). We explore this concept using a case-study from an extensive cattle enterprise in northern Australia and demonstrate the potential for combining GPS collars and satellite images in a WSN to monitor behavioural preferences and social behaviour of cattle.", "title": "" }, { "docid": "36684d4ea27b940036e179fe967e949c", "text": "In this letter, we propose a miniaturized and wideband electromagnetic bandgap (EBG) structure with a meander-perforated plane (MPP) for power/ground noise suppression in multilayer printed circuit boards. The proposed MPP enhances the characteristic impedance of the EBG unit cell and improves the slow-wave effect, thus achieving the significant size reduction and the stopband enhancement. 
To explain the prominent results, a dispersion analysis for the proposed MPP-EBG structure is developed. Compared to a mushroom-type EBG structure, it is experimentally demonstrated that the MPP-EBG structure presents a 57% reduction in the start frequency of the bandgap, which leads to a 74% reduction in a unit cell size. In addition, the MPP-EBG structure considerably improves the noise suppression bandwidth (-40 dB) from 0.8 to 4.9 GHz compared to the mushroom-type EBG structure.", "title": "" }, { "docid": "8705415b41d8b3c2e7cb4f7523e0f958", "text": "Research in the field of Computer Supported Collaborative Learning (CSCL) is based on a wide variety of methodologies. In this paper, we focus upon content analysis, which is a technique often used to analyze transcripts of asynchronous, computer mediated discussion groups in formal educational settings. Although this research technique is often used, standards are not yet established. The applied instruments reflect a wide variety of approaches and differ in their level of detail and the type of analysis categories used. Further differences are related to a diversity in their theoretical base, the amount of information about validity and reliability, and the choice for the unit of analysis. This article presents an overview of different content analysis instruments, building on a sample of models commonly used in the CSCL-literature. The discussion of 15 instruments results in a number of critical conclusions. There are questions about the coherence between the theoretical base and the operational translation of the theory in the instruments. Instruments are hardly compared or contrasted with one another. As a consequence the empirical base of the validity of the instruments is limited. The analysis is rather critical when it comes to the issue of reliability. The authors put forward the need to improve the theoretical and empirical base of the existing instruments in order to promote the overall quality of CSCL-research. 2005 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "f5168565306f6e7f2b36ef797a6c9de8", "text": "We study the problem of clustering data objects whose locations are uncertain. A data object is represented by an uncertainty region over which a probability density function (pdf) is defined. One method to cluster uncertain objects of this sort is to apply the UK-means algorithm, which is based on the traditional K-means algorithm. In UK-means, an object is assigned to the cluster whose representative has the smallest expected distance to the object. For arbitrary pdf, calculating the expected distance between an object and a cluster representative requires expensive integration computation. We study various pruning methods to avoid such expensive expected distance calculation.", "title": "" } ]
scidocsrr
dc8b756609bc8b19762000be733a3968
Morphological Analysis for Japanese Noisy Text based on Character-level and Word-level Normalization
[ { "docid": "a16139b8924fc4468086c41fedeef3d9", "text": "Grapheme-to-phoneme conversion is the task of finding the pronunciation of a word given its written form. It has important applications in text-to-speech and speech recognition. Joint-sequence models are a simple and theoretically stringent probabilistic framework that is applicable to this problem. This article provides a selfcontained and detailed description of this method. We present a novel estimation algorithm and demonstrate high accuracy on a variety of databases. Moreover we study the impact of the maximum approximation in training and transcription, the interaction of model size parameters, n-best list generation, confidence measures, and phoneme-to-grapheme conversion. Our software implementation of the method proposed in this work is available under an Open Source license.", "title": "" }, { "docid": "571c73de53da3ed4d9a465325c9e9746", "text": "Twitter provides access to large volumes of data in real time, but is notoriously noisy, hampering its utility for NLP. In this paper, we target out-of-vocabulary words in short text messages and propose a method for identifying and normalising ill-formed words. Our method uses a classifier to detect ill-formed words, and generates correction candidates based on morphophonemic similarity. Both word similarity and context are then exploited to select the most probable correction candidate for the word. The proposed method doesn’t require any annotations, and achieves state-of-the-art performance over an SMS corpus and a novel dataset based on Twitter.", "title": "" } ]
[ { "docid": "085155ebfd2ac60ed65293129cb0bfee", "text": "Today, Convolution Neural Networks (CNN) is adopted by various application areas such as computer vision, speech recognition, and natural language processing. Due to a massive amount of computing for CNN, CNN running on an embedded platform may not meet the performance requirement. In this paper, we propose a system-on-chip (SoC) CNN architecture synthesized by high level synthesis (HLS). HLS is an effective hardware (HW) synthesis method in terms of both development effort and performance. However, the implementation should be optimized carefully in order to achieve a satisfactory performance. Thus, we apply several optimization techniques to the proposed CNN architecture to satisfy the performance requirement. The proposed CNN architecture implemented on a Xilinx's Zynq platform has achieved 23% faster and 9.05 times better throughput per energy consumption than an implementation on an Intel i7 Core processor.", "title": "" }, { "docid": "280acc4e653512fabf7b181be57b31e2", "text": "BACKGROUND\nHealth care workers incur frequent injuries resulting from patient transfer and handling tasks. Few studies have evaluated the effectiveness of mechanical lifts in preventing injuries and time loss due to these injuries.\n\n\nMETHODS\nWe examined injury and lost workday rates before and after the introduction of mechanical lifts in acute care hospitals and long-term care (LTC) facilities, and surveyed workers regarding lift use.\n\n\nRESULTS\nThe post-intervention period showed decreased rates of musculoskeletal injuries (RR = 0.82, 95% CI: 0.68-1.00), lost workday injuries (RR = 0.56, 95% CI: 0.41-0.78), and total lost days due to injury (RR = 0.42). Larger reductions were seen in LTC facilities than in hospitals. Self-reported frequency of lift use by registered nurses and by nursing aides were higher in the LTC facilities than in acute care hospitals. Observed reductions in injury and lost day injury rates were greater on nursing units that reported greater use of the lifts.\n\n\nCONCLUSIONS\nImplementation of patient lifts can be effective in reducing occupational musculoskeletal injuries to nursing personnel in both LTC and acute care settings. Strategies to facilitate greater use of mechanical lifting devices should be explored, as further reductions in injuries may be possible with increased use.", "title": "" }, { "docid": "d7cc1619647d83911ad65fac9637ef03", "text": "We analyze the increasing threats against IoT devices. We show that Telnet-based attacks that target IoT devices have rocketed since 2014. Based on this observation, we propose an IoT honeypot and sandbox, which attracts and analyzes Telnet-based attacks against various IoT devices running on different CPU architectures such as ARM, MIPS, and PPC. By analyzing the observation results of our honeypot and captured malware samples, we show that there are currently at least 4 distinct DDoS malware families targeting Telnet-enabled IoT devices and one of the families has quickly evolved to target more devices with as many as 9 different CPU architectures.", "title": "" }, { "docid": "bd121443b5a1dfb16687001c72b22199", "text": "We review the nosological criteria and functional neuroanatomical basis for brain death, coma, vegetative state, minimally conscious state, and the locked-in state. Functional neuroimaging is providing new insights into cerebral activity in patients with severe brain damage. 
Measurements of cerebral metabolism and brain activations in response to sensory stimuli with PET, fMRI, and electrophysiological methods can provide information on the presence, degree, and location of any residual brain function. However, use of these techniques in people with severe brain damage is methodologically complex and needs careful quantitative analysis and interpretation. In addition, ethical frameworks to guide research in these patients must be further developed. At present, clinical examinations identify nosological distinctions needed for accurate diagnosis and prognosis. Neuroimaging techniques remain important tools for clinical research that will extend our understanding of the underlying mechanisms of these disorders.", "title": "" }, { "docid": "fb4f4d1762535b8afe7feec072f1534e", "text": "Recently, evaluation of a recommender system has gone beyond evaluating just the algorithm. In addition to the accuracy of algorithms, user-centric approaches evaluate a system's effectiveness in presenting recommendations, explaining recommendations and gaining users' confidence in the system. Existing research focuses on explaining recommendations that are related to the user's current task. However, explaining recommendations can prove useful even when recommendations are not directly related to the user's current task. Recommendations of development environment commands to software developers are an example of recommendations that are not related to the user's current task, which is primarily focussed on programming, rather than inspecting recommendations. In this dissertation, we study three different kinds of explanations for IDE commands recommended to software developers. These explanations are inspired by the common approaches based on literature in the domain. We describe a lab-based experimental study with 24 participants where they performed programming tasks on an open source project. Our results suggest that explanations affect users' trust of recommendations, and explanations reporting the system's confidence in a recommendation affect their trust more. The explanation with the system's confidence rating of the recommendations resulted in more recommendations being investigated. However, explanations did not affect the uptake of the commands. Our qualitative results suggest that recommendations, when not the user's primary focus, should be in the context of the user's task to be accepted more readily.", "title": "" }, { "docid": "8890f9ab4ba7164194474d9bba7b5c47", "text": "Support Vector Machines (SVMs) are among the most popular classification techniques adopted in security applications like malware detection, intrusion detection, and spam filtering. However, if SVMs are to be incorporated in real-world security systems, they must be able to cope with attack patterns that can either mislead the learning algorithm (poisoning), evade detection (evasion), or gain information about their internal parameters (privacy breaches). The main contributions of this chapter are twofold. First, we introduce a formal general framework for the empirical evaluation of the security of machine-learning systems. Second, according to our framework, we demonstrate the feasibility of evasion, poisoning and privacy attacks against SVMs in real-world security problems. For each attack technique, we evaluate its impact and discuss whether (and how) it can be countered through an adversary-aware design of SVMs. 
Our experiments are easily reproducible thanks to open-source code that we have made available, together with all the employed datasets, on a public repository.", "title": "" }, { "docid": "da26ae25feebea6fbe63dacea03e0742", "text": "A classic result of Johnson and Lindenstrauss asserts that any set of n points in d-dimensional Euclidean space can be embedded into k-dimensional Euclidean space—where k is logarithmic in n and independent of d—so that all pairwise distances are maintained within an arbitrarily small factor. All known constructions of such embeddings involve projecting the n points onto a spherically random k-dimensional hyperplane through the origin. We give two constructions of such embeddings with the property that all elements of the projection matrix belong in {-1, 0, +1}. Such constructions are particularly well suited for database environments, as the computation of the embedding reduces to evaluating a single aggregate over k random partitions of the attributes.", "title": "" }, { "docid": "919f42363fed69dc38eba0c46be23612", "text": "Large amounts of heterogeneous medical data have become available in various healthcare organizations (payers, providers, pharmaceuticals). Those data could be an enabling resource for deriving insights for improving care delivery and reducing waste. The enormity and complexity of these datasets present great challenges in analyses and subsequent applications to a practical clinical environment. In this tutorial, we introduce the characteristics and related mining challenges on dealing with big medical data. Many of those insights come from the medical informatics community, which is highly related to data mining but focuses on biomedical specifics. We survey various related papers from data mining venues as well as medical informatics venues to share with the audiences key problems and trends in healthcare analytics research, with different applications ranging from clinical text mining, predictive modeling, survival analysis, patient similarity, genetic data analysis, and public health. The tutorial will include several case studies dealing with some of the important healthcare applications.", "title": "" }, { "docid": "858f15a9fc0e014dd9ffa953ac0e70f7", "text": "Canny (IEEE Trans. Pattern Anal. Image Proc. 8(6):679-698, 1986) suggested that an optimal edge detector should maximize both signal-to-noise ratio and localization, and he derived mathematical expressions for these criteria. Based on these criteria, he claimed that the optimal step edge detector was similar to a derivative of a gaussian. However, Canny's work suffers from two problems. First, his derivation of localization criterion is incorrect. 
Here we provide a more accurate localization criterion and derive the optimal detector from it. Second, and more seriously, the Canny criteria yield an infinitely wide optimal edge detector. The width of the optimal detector can however be limited by considering the effect of the neighbouring edges in the image. If we do so, we find that the optimal step edge detector, according to the Canny criteria, is the derivative of an ISEF filter, proposed by Shen and Castan (Graph. Models Image Proc. 54:112–133, 1992). In addition, if we also consider detecting blurred (or non-sharp) gaussian edges of different widths, we find that the optimal blurred-edge detector is the above optimal step edge detector convolved with a gaussian. This implies that edge detection must be performed at multiple scales to cover all the blur widths in the image. We derive a simple scale selection procedure for edge detection, and demonstrate it in one and two dimensions.", "title": "" }, { "docid": "35d942882cbf5351bb0465cf51db1fdb", "text": "A Proposed Definition Computers are special technology and they raise some special ethical issues. In this essay I will discuss what makes computers different from other technology and how this difference makes a difference in ethical considerations. In particular, I want to characterize computer ethics and show why this emerging field is both intellectually interesting and enormously important. On my view, computer ethics is the analysis of the nature and social impact of computer technology and the corresponding formulation and justification of policies for the ethical use of such technology. I use the phrase “computer technology” because I take the subject matter of the field broadly to include computers and associated technology. For instance, I include concerns about software as well as hardware and concerns about networks connecting computers as well as computers themselves. A typical problem in computer ethics arises because there is a policy vacuum about how computer technology should be used. Computers provide us with new capabilities and these in turn give us new choices for action. Often, either no policies for conduct in these situations exist or existing policies seem inadequate. A central task of computer ethics is to determine what we should do in such cases, i.e., to formulate policies to guide our actions. Of course, some ethical situations confront us as individuals and some as a society. Computer ethics includes consideration of both personal and social policies for the ethical use of computer technology. Now it may seem that all that needs to be done is the mechanical application of an ethical theory to generate the appropriate policy. But this is usually not possible. A difficulty is that along with a policy vacuum there is often a conceptual vacuum. Although a problem in computer ethics may seem clear initially, a little reflection reveals a conceptual muddle. What is needed in such cases is an analysis which provides a coherent conceptual framework within which to formulate a policy for action. Indeed, much of the important work in computer ethics is devoted to proposing conceptual frameworks for understanding ethical problems involving computer technology. An example may help to clarify the kind of conceptual work that is required. Let’s suppose we are trying to formulate a policy for protecting computer programs. Initially, the idea may seem clear enough. We are looking for a policy for protecting a kind of intellectual property. 
But then a", "title": "" }, { "docid": "050c67f963f0a6968e951d689eb6e2ef", "text": "Detecting and preventing Distributed Denial of Service (DDoS) attacks becomes a crucial process for commercial organizations that use the internet these days. Different approaches have been adopted to process traffic information collected by monitoring stations (Routers and Servers) to distinguish the malicious traffic of DDoS attacks in Intrusion Detection Systems (IDS). In general, data mining techniques can be designed and implemented with the intrusion systems to protect organizations from malicious traffic. Specifically, unsupervised data mining clustering techniques make it possible to distinguish normal traffic from malicious traffic with good accuracy. In this paper, we present a hybrid approach called centroid-based rules to detect and prevent real-world DDoS attacks collected from the “CAIDA UCSD DDoS Attack 2007 Dataset” and normal traffic traces from the “CAIDA Anonymized Internet Traces 2008 Dataset” using unsupervised k-means data mining clustering techniques with a proactive rules method. Centroid-based rules are used to effectively detect the DDoS attack in an efficient time. The experimental results show that the centroid-based rules method performs better than the centroid-based method in terms of accuracy and detection rate. In terms of false alarm rates, the proposed solution obtains a very low false positive rate in both the training and testing phases. Accuracy was more than 99% in both the training and testing processes. The proposed centroid-based rules method can be used in real-time monitoring as a DDoS defense system.", "title": "" }, { "docid": "377cab312d5e262a5363e6cf5b5c64de", "text": "Electroencephalography (EEG) has been instrumental in making discoveries about cognition, brain function, and dysfunction. However, where do EEG signals come from and what do they mean? The purpose of this paper is to argue that we know shockingly little about the answer to this question, to highlight what we do know, how important the answers are, and how modern neuroscience technologies that allow us to measure and manipulate neural circuits with high spatiotemporal accuracy might finally bring us some answers. Neural oscillations are perhaps the best feature of EEG to use as anchors because oscillations are observed and are studied at multiple spatiotemporal scales of the brain, in multiple species, and are widely implicated in cognition and in neural computations.", "title": "" }, { "docid": "a8ac2bab8abbee070dc2ae929714a801", "text": "Measuring word relatedness is an important ingredient of many NLP applications. Several datasets have been developed in order to evaluate such measures. The main drawback of existing datasets is the focus on single words, although natural language contains a large proportion of multiword terms. We propose the new TR9856 dataset which focuses on multi-word terms and is significantly larger than existing datasets. The new dataset includes many real world terms such as acronyms and named entities, and further handles term ambiguity by providing topical context for all term pairs. 
We report baseline results for common relatedness methods over the new data, and exploit its magnitude to demonstrate that a combination of these methods outperforms each individual method.", "title": "" }, { "docid": "e75b7c2fcdfc19a650d7da4e6ae643a2", "text": "With the proliferation of high-throughput technologies, genome-level data analysis has become common in molecular biology. Bioinformaticians are developing extensive resources to annotate and mine biological features from high-throughput data. The underlying database management systems for most bioinformatics software are based on a relational model. Modern non-relational databases offer an alternative that has flexibility, scalability, and a non-rigid design schema. Moreover, with an accelerated development pace, non-relational databases like CouchDB can be ideal tools to construct bioinformatics utilities. We describe CouchDB by presenting three new bioinformatics resources: (a) geneSmash, which collates data from bioinformatics resources and provides automated gene-centric annotations, (b) drugBase, a database of drug-target interactions with a web interface powered by geneSmash, and (c) HapMap-CN, which provides a web interface to query copy number variations from three SNP-chip HapMap datasets. In addition to the web sites, all three systems can be accessed programmatically via web services.", "title": "" }, { "docid": "28b5b038cfaecab90b07683c2eabbb5b", "text": "In this work, we devise a chaos-based secret key cryptography scheme for digital communication where the encryption is realized at the physical level, that is, the encrypting transformations are applied to the wave signal instead of to the symbolic sequence. The encryption process consists of transformations applied to a two-dimensional signal composed of the message-carrying signal and an encrypting signal that has to be a chaotic one. The secret key, in this case, is related to the number of times the transformations are applied. Furthermore, we show that due to its chaotic nature, the encrypting signal is able to hide the statistics of the original signal. In this letter, we present a chaos-based cryptography scheme designed for digital communication. We depart from the traditional approach where encrypting transformations are applied to the binary sequence (the symbolic sequence) into which the wave signal is encoded [1]. In this work, we devise a scheme where the encryption is realized at the physical level, that is, a scheme that encrypts the wave signal itself. Our chaos-based cryptographic scheme takes advantage of the complexity of a chaotic transformation. This complexity is very desirable for cryptographic schemes, since security increases with the number of possibilities of encryption for a given text unit (a letter for example). One advantage of using a chaotic transformation is that it can be implemented at the physical level by means of a low power deterministic electronic circuit which can be easily etched on a chip. Another advantage is that, contrary to a stochastic transformation, a chaotic one allows a straightforward decryption. Moreover, as has been shown elsewhere [2], chaotic transformations for cryptography enable one to introduce powerful analytical methods to analyze the method performance, besides satisfying the design axioms that guarantee security. In order to clarify our goal and the scheme devised, in what follows, we initially outline the basic ideas of our method. 
Given a message represented by a sequence {y_i}_{i=1}^{l}, and a chaotic encrypting signal {x_i}_{i=1}^{l}, with y_i and x_i ∈ R and x_{i+1} = G(x_i), where G is a chaotic transformation, we construct an ordered pair (x_i, y_i). The i-th element of the sequence representing the encrypted message is the y component of the ordered pair (x_i, y_i^n), obtained from F_c^n(x_i, y_i). The function F_c : R^2 → R^2 is a chaotic transformation and n is the number of times we apply it to the ordered pair. The n-th iteration of (x_i, y_i) has no inverse if n and x_i are unknown, that is, y_i cannot be recovered if one knows only F_c^n(x_i, y_i). As it will be clear further, this changing of initial condition is one of the factors responsible for the security of the method. Now we describe how to obtain the sequence {y_i}_{i=1}^{l} by means of the sampling and quantization methods. These methods play an essential role in the field of digital communication, since they allow us to treat signals varying continuously in time as discrete signals. One instance of the use of continuous in time signals is the encoding of music or speech, where variations in the pressure of the air are represented by a continuous signal such as the voltage in an electric circuit. In the sampling process, a signal varying continuously in time is replaced by a set of measurements (samples) taken at instants separated by a suitable time interval provided by the sampling theorem [3,4]. The signals to which the sampling theorem applies are the band limited ones. By a band limited signal, we mean a function of time whose Fourier transform is null for frequencies f such that |f| ≥ W. According to the sampling theorem, it is possible to reconstruct the original signal from samples taken at times multiple of the sampling interval T_S ≤ 1/(2W). Thus, at the end of the sampling process, the signal is converted to a sequence {s_1, s_2, ..., s_l} of real values, which we refer to as the s sequence. After being sampled, the signal is quantized. In this process, the amplitude range of the signal, say the interval [a, b], is divided into N subintervals R_k = [a_k, a_{k+1}), 1 ≤ k ≤ N, with a_1 = a, a_{k+1} = a_k + d_k, a_{N+1} = b, where d_k is the length of the k-th subinterval. To each R_k one assigns an appropriate real amplitude value q_k ∈ R_k, its middle point for example. A new sequence, the y sequence, is generated by replacing each s_i by the q_k associated to the R_k region to which it belongs. So, the y sequence {y_1, y_2, ..., y_l} is a sequence where each y_i ∈ R takes on values from the set {q_1, ..., q_N}. In traditional digital communication, each member of the y sequence is encoded into a binary sequence of length log_2 N. Thus, traditional cryptographic schemes, and even recently proposed chaotic ones [1], transform this binary sequence (or any other discrete alphabet) into another binary sequence, which is then modulated and transmitted. In our proposed scheme, we transform the real y into another real value, and then modulate this new y value in order to transmit it. This scheme deals with signals rather than with symbols, which implies that the required transformations are performed at the hardware or physical level. 
Instead of applying the encrypting transformations to the binary sequence, we apply them to the y sequence, the sequence obtained by sampling and quantizing the original wave signal. Suppose, now, that the amplitude of the wave signal is restricted to the interval [0,1]. The first step of the process is to obtain the encrypting signal, a sequence {x_1, x_2, ..., x_l}, 0 < x_i < 1. As we will show, this signal is obtained by either sampling a chaotic one or by a chaotic mapping. The pair (x_i, y_i) localizes a point in the unit square. In order to encrypt y_i, we apply the baker map to the point (x_i, y_i) to obtain (x_i^1, y_i^1) = (2x_i - ⌊2x_i⌋, 0.5(y_i + ⌊2x_i⌋)), where ⌊2x_i⌋ is the largest integer equal to or less than 2x_i. The encrypted signal is given by y_i^1, that is, 0.5(y_i + ⌊2x_i⌋). It is important to notice that y_i^1 can take 2N different values instead of N, since each y_i may be encoded as either 0.5 y_i < 0.5 or 0.5(y_i + 1) > 0.5, depending on whether x_i falls below or above 0.5. So, in order to digitally modulate the encrypted signal for transmission, 2N pulse amplitudes are necessary, with each binary block being encoded by two different pulses. Therefore, our method has an output format that can be straightforwardly used in digital transmissions. Suppose, for example, that N = 2, and we have q_1 = 0.25 and q_2 = 0.75. If s_i < 0.5 then y_i = 0.25 and, if we use n = 1, we have y_i^1 = 0.125 if x_i < 0.5 or y_i^1 = 0.625 if x_i ≥ 0.5. On the other hand, if s_i > 0.5, then y_i = 0.75 and we have y_i^1 = 0.375 if x_i < 0.5 or y_i^1 = 0.875 if x_i ≥ 0.5. So, the encrypted signal takes on values from the set {0.125, 0.375, 0.625, 0.875}, where the first and third values can be decrypted as 0.25 in the non-encrypted signal while the second and the fourth as 0.75. In a general case, where we apply n iterations of the mapping, y_i^n can assume 2^n N different values. In this case, if one wants to digitally transmit the cipher text, one can encode every cipher text unit using a binary block of length log_2(2N) and then modulate this binary stream using 2^n N pulse amplitudes. Thus, the decryption is straightforward if one knows how many times the baker map was applied during the encryption. If the baker transformation (function F_c) is applied n times, there are, for each plain text unit, 2^n N possible cipher text units. In this case, the complexity of the ciphertext, that is, its security, can have its upper bound estimated by the Shannon complexity H_s, which is the logarithm of the possible number of ciphertext units produced after the baker's map has been applied n times. So, H_s = n log(2) + log(N). We see that n is much more important for security reasons than N. So, if one wishes to improve security, one could implement a dynamical secret key schedule for n. By this we mean that, based on some information of the encrypted trajectory (x_i, y_i), the value of n could be changed whenever a plain text unit is encrypted. If one allows only m values for n, the number of possible cipher text units would be given by N^m ∏_{j=1}^{m} 2^{n_j} and the complexity of the cipher text would be ∑_{j=1}^{m} n_j log 2 + m log N, which can be very high, even for small m. Thus, without knowing the number n of applications of the baker map during the encryption, decryption becomes highly improbable. 
In fact, n is the secret key of our cryptographic scheme and we can think of the sequence {x_i} as a dynamical secret key schedule for the x-component in the initial condition represented by the ordered pair (x_i, y_i). The tools necessary to perform the security analysis are provided by information theory. In this context, information sources are modelled by random processes whose outcome may be either discrete or continuous in time. Since the major interest, and ours too, is in band limited signals, we restrict ourselves to the discrete case, where the source is modelled by a discrete time random process. This is a sequence {y_i}_{i=1}^{l} in which each y_i assumes values within the set A = {q_1, q_2, ..., q_N}. This set is called the alphabet and its elements are the letters. To each letter is assigned a probability mass function p(q_j) = P(y_i = q_j), that gives the probability with which the letter is selected for transmission. In cryptography, one deals with two messages: the plai", "title": "" }, { "docid": "a288a610a6cd4ff32b3fff4e2124aee0", "text": "According to the survey done by IBM business consulting services in 2006, global CEOs stated that business model innovation will have a greater impact on operating margin growth than product or service innovation. We also noticed that some enterprises in China's real estate industry have improved their business models for sustainable competitive advantage and surplus profit in recent years. Based on the case studies of Shenzhen Vanke, as well as literature review, a framework for business model innovation has been developed. The framework provides an integrated means of making sense of new business models. These include critical dimensions of new customer value propositions, technological innovation, collaboration of the business infrastructure and the economic feasibility of a new business model.", "title": "" }, { "docid": "3229ceebb2534f9da93981b5de3b7928", "text": "Tarantula is an aggressive floating point machine targeted at technical, scientific and bioinformatics workloads, originally planned as a follow-on candidate to the EV8 processor [6, 5]. Tarantula adds to the EV8 core a vector unit capable of 32 double-precision flops per cycle. The vector unit fetches data directly from a 16 MByte second level cache with a peak bandwidth of sixty four 64-bit values per cycle. The whole chip is backed by a memory controller capable of delivering over 64 GBytes/s of raw bandwidth. Tarantula extends the Alpha ISA with new vector instructions that operate on new architectural state. Salient features of the architecture and implementation are: (1) it fully integrates into a virtual-memory cache-coherent system without changes to its coherency protocol, (2) provides high bandwidth for non-unit stride memory accesses, (3) supports gather/scatter instructions efficiently, (4) fully integrates with the EV8 core with a narrow, streamlined interface, rather than acting as a co-processor, (5) can achieve a peak of 104 operations per cycle, and (6) achieves excellent \"real-computation\" per transistor and per watt ratios. Our detailed simulations show that Tarantula achieves an average speedup of 5X over EV8, out of a peak speedup in terms of flops of 8X. Furthermore, performance on gather/scatter intensive benchmarks such as Radix Sort is also remarkable: a speedup of almost 3X over EV8 and 15 sustained operations per cycle. 
Several benchmarks exceed 20 operations per cycle.", "title": "" }, { "docid": "97c9d91709c98cd6dd803ffc9810d88f", "text": "Relying entirely on an attention mechanism, the Transformer introduced by Vaswani et al. (2017) achieves state-of-the-art results for machine translation. In contrast to recurrent and convolutional neural networks, it does not explicitly model relative or absolute position information in its structure. Instead, it requires adding representations of absolute positions to its inputs. In this work we present an alternative approach, extending the self-attention mechanism to efficiently consider representations of the relative positions, or distances between sequence elements. On the WMT 2014 English-to-German and English-to-French translation tasks, this approach yields improvements of 1.3 BLEU and 0.3 BLEU over absolute position representations, respectively. Notably, we observe that combining relative and absolute position representations yields no further improvement in translation quality. We describe an efficient implementation of our method and cast it as an instance of relation-aware self-attention mechanisms that can generalize to arbitrary graphlabeled inputs.", "title": "" }, { "docid": "e0724c87fd4344e01cb9260fdd36856c", "text": "In this paper we introduce a multi-objective auto-tuning framework comprising compiler and runtime components. Focusing on individual code regions, our compiler uses a novel search technique to compute a set of optimal solutions, which are encoded into a multi-versioned executable. This enables the runtime system to choose specifically tuned code versions when dynamically adjusting to changing circumstances.\n We demonstrate our method by tuning loop tiling in cache-sensitive parallel programs, optimizing for both runtime and efficiency. Our static optimizer finds solutions matching or surpassing those determined by exhaustively sampling the search space on a regular grid, while using less than 4% of the computational effort on average. Additionally, we show that parallelism-aware multi-versioning approaches like our own gain a performance improvement of up to 70% over solutions tuned for only one specific number of threads.", "title": "" }, { "docid": "b3352b90c84bb7e85cdb09ed95981231", "text": "We present a toolbox for high-throughput screening of image-based Caenorhabditis elegans phenotypes. The image analysis algorithms measure morphological phenotypes in individual worms and are effective for a variety of assays and imaging systems. This WormToolbox is available through the open-source CellProfiler project and enables objective scoring of whole-worm high-throughput image-based assays of C. elegans for the study of diverse biological pathways that are relevant to human disease.", "title": "" } ]
scidocsrr
4f49558d5b814b8873bc624b798724df
Automated directed fairness testing
[ { "docid": "fe5a43325e2bbedf9679cc6c30e083f0", "text": "Despite the improved accuracy of deep neural networks, the discovery of adversarial examples has raised serious safety concerns. Most existing approaches for crafting adversarial examples necessitate some knowledge (architecture, parameters, etc) of the network at hand. In this paper, we focus on image classifiers and propose a feature-guided black-box approach to test the safety of deep neural networks that requires no such knowledge. Our algorithm employs object detection techniques such as SIFT (Scale Invariant Feature Transform) to extract features from an image. These features are converted into a mutable saliency distribution, where high probability is assigned to pixels that affect the composition of the image with respect to the human visual system. We formulate the crafting of adversarial examples as a two-player turn-based stochastic game, where the first player’s objective is to minimise the distance to an adversarial example by manipulating the features, and the second player can be cooperative, adversarial, or random. We show that, theoretically, the two-player game can converge to the optimal strategy, and that the optimal strategy represents a globally minimal adversarial image. For Lipschitz networks, we also identify conditions that provide safety guarantees that no adversarial examples exist. Using Monte Carlo tree search we gradually explore the game state space to search for adversarial examples. Our experiments show that, despite the black-box setting, manipulations guided by a perception-based saliency distribution are competitive with state-of-the-art methods that rely on white-box saliency matrices or sophisticated optimization procedures. Finally, we show how our method can be used to evaluate robustness of neural networks in safety-critical applications such as traffic sign recognition in self-driving cars.", "title": "" } ]
[ { "docid": "938066a85574d694f7c30fee23e6fcb9", "text": "Content-based image retrieval utilizes representations of features that are automatically extracted from the images themselves. Allmost all of the current CBIR systems allow for querying-by-example, a technique wherein an image (or part of an image) is selected by the user as the query. The system extracts the feature of the query image, searches the database for images with similar features, and exhibits relevant images to the user in order of similarity to the query. In this context, content includes among other features, perceptual properties such as texture, color, shape, and spatial relationships. Many CBIR systems have been developed that compare, analyze and retrieve images based on one or more of these features. Some systems have achieved various degrees of success by combining both content-based and text-based retrieval. In all cases, however, there has been no definitive conclusion as to what features provide the best retrieval. In this paper we present a modified SVM technique to retrieve the images similar to the query image.", "title": "" }, { "docid": "40e9b22c5efe43517d03ce32fc2a9512", "text": "There have been some pioneering works concerning embedding cryptographic properties in Compressive Sampli ng (CS) but it turns out that the concise linear projection encoding process makes this approach ineffective. Here we introduce a bilevel protection (BLP) model for constructing secure compr essive sampling scheme. Then we propose several techniques to esta blish secret key-related sparsifying basis and deploy them into o ur new CS model. It is demonstrated that the encoding process is simply a random linear projection, which is the same as the traditional model. However, decoding the measurements req uires the knowledge of both the key-related sensing matrix and the key-related sparsifying basis. We apply the proposed model to construct digital image ciphe r under the parallel compressive sampling reconstruction fr amework. The main properties of this cipher, such as low computational complexity, compressibility, robustness and compu tational secrecy under known/chosen plaintext attacks, are thoroug hly studied. It is shown that compressive sampling schemes base d on our BLP model is robust under various attack scenarios although the encoding process is a simple linear projection.", "title": "" }, { "docid": "7874a6681c45d87345197245e1e054fe", "text": "The continuous processing of streaming data has become an important aspect in many applications. Over the last years a variety of different streaming platforms has been developed and a number of open source frameworks is available for the implementation of streaming applications. In this report, we will survey the landscape of existing streaming platforms. Starting with an overview of the evolving developments in the recent past, we will discuss the requirements of modern streaming architectures and present the ways these are approached by the different frameworks.", "title": "" }, { "docid": "a6c8c5a1cf0e014860e8cd04f38532f3", "text": "How to train a binary neural network (BinaryNet) with both high compression rate and high accuracy on large scale datasets? We answer this question through a careful analysis of previous work on BinaryNets, in terms of training strategies, regularization, and activation approximation. 
Our findings first reveal that a low learning rate is highly preferred to avoid frequent sign changes of the weights, which often makes the learning of BinaryNets unstable. Secondly, we propose to use PReLU instead of ReLU in a BinaryNet to conveniently absorb the scale factor for weights to the activation function, which enjoys high computation efficiency for binarized layers while maintains high approximation accuracy. Thirdly, we reveal that instead of imposing L2 regularization, driving all weights to zero which contradicts with the setting of BinaryNets, we introduce a regularization term that encourages the weights to be bipolar. Fourthly, we discover that the failure of binarizing the last layer, which is essential for high compression rate, is due to the improper output range. We propose to use a scale layer to bring it to normal. Last but not least, we propose multiple binarizations to improve the approximation of the activations. The composition of all these enables us to train BinaryNets with both high compression rate and high accuracy, which is strongly supported by our extensive empirical study.", "title": "" }, { "docid": "d0603a92425308bec8c53551d018accc", "text": "It is a significant challenge to classify images with multiple labels by using only a small number of labeled samples. One option is to learn a binary classifier for each label and use manifold regularization to improve the classification performance by exploring the underlying geometric structure of the data distribution. However, such an approach does not perform well in practice when images from multiple concepts are represented by high-dimensional visual features. Thus, manifold regularization is insufficient to control the model complexity. In this paper, we propose a manifold regularized multitask learning (MRMTL) algorithm. MRMTL learns a discriminative subspace shared by multiple classification tasks by exploiting the common structure of these tasks. It effectively controls the model complexity because different tasks limit one another's search volume, and the manifold regularization ensures that the functions in the shared hypothesis space are smooth along the data manifold. We conduct extensive experiments, on the PASCAL VOC'07 dataset with 20 classes and the MIR dataset with 38 classes, by comparing MRMTL with popular image classification algorithms. The results suggest that MRMTL is effective for image classification.", "title": "" }, { "docid": "881a495a8329c71a0202c3510e21b15d", "text": "We apply basic statistical reasoning to signal reconstruction by machine learning – learning to map corrupted observations to clean signals – with a simple and powerful conclusion: it is possible to learn to restore images by only looking at corrupted examples, at performance at and sometimes exceeding training using clean data, without explicit image priors or likelihood models of the corruption. In practice, we show that a single model learns photographic noise removal, denoising synthetic Monte Carlo images, and reconstruction of undersampled MRI scans – all corrupted by different processes – based on noisy data only.", "title": "" }, { "docid": "a645943a02f5d71b146afe705fb6f49f", "text": "Along with the developments in the field of information technologies, the data in the electronic environment is increasing. Data mining methods are needed to obtain useful information for users in electronic environment. One of these methods, clustering methods, aims to group data according to common properties. 
This grouping is often based on the distance between the data. Clustering methods are divided into hierarchical and non-hierarchical methods according to the fragmentation technique of clusters. The success of both types of clustering methods varies according to the data set applied. In this study, both types of methods were tested on different types of data sets. Selected methods were compared according to five different evaluation metrics. The results of the analysis are presented comparatively at the end of the study, and which methods are more convenient for each data set is explained.", "title": "" }, { "docid": "5ff019e3c12f7b1c2b3518e0883e3b6f", "text": "A novel PFC (Power Factor Corrected) Converter using Zeta DC-DC converter feeding a BLDC (Brush Less DC) motor drive using a single voltage sensor is proposed for fan applications. A single phase supply followed by an uncontrolled bridge rectifier and a Zeta DC-DC converter is used to control the voltage of a DC link capacitor which is lying between the Zeta converter and a VSI (Voltage Source Inverter). Voltage of a DC link capacitor of Zeta converter is controlled to achieve the speed control of BLDC motor. The Zeta converter is working as a front end converter operating in DICM (Discontinuous Inductor Current Mode) and thus using a voltage follower approach. The DC link capacitor of the Zeta converter is followed by a VSI which is feeding a BLDC motor. A sensorless control of BLDC motor is used to eliminate the requirement of Hall Effect position sensors. A MATLAB/Simulink environment is used to simulate the developed model to achieve a wide range of speed control with high PF (Power Factor) and improved PQ (Power Quality) at the supply.", "title": "" }, { "docid": "d95cc1187827e91601cb5711dbdb1550", "text": "As data sparsity remains a significant challenge for collaborative filtering (CF), we conjecture that predicted ratings based on imputed data may be more accurate than those based on the originally very sparse rating data. In this paper, we propose a framework of imputation-boosted collaborative filtering (IBCF), which first uses an imputation technique, or perhaps a machine-learned classifier, to fill in the sparse user-item rating matrix, then runs a traditional Pearson correlation-based CF algorithm on this matrix to predict a novel rating. Empirical results show that IBCF using machine learning classifiers can improve predictive accuracy of CF tasks. In particular, IBCF using a classifier capable of dealing well with missing data, such as naïve Bayes, can outperform the content-boosted CF (a representative hybrid CF algorithm) and IBCF using PMM (predictive mean matching, a state-of-the-art imputation technique), without using external content information.", "title": "" }, { "docid": "0dd2596342ecb90099f70b800ac4ea47", "text": "This letter presents a broadband transition between microstrip and CPW located at the opposite layer of the substrate. Basically, the transition is based on two couples of microstrip-to-slotline transitions. In order to widen the bandwidth of the transition, a short-ended parallel microstrip stub is added. A demonstrator transition has been designed, fabricated and measured. Results show that a frequency range of 2.05 to 9.96 GHz (referred to return loss of 10 dB) is obtained.", "title": "" }, { "docid": "0c49617f6070d73a75fd51fbb50b52dd", "text": "High-quality image inpainting methods based on nonlinear higher-order partial differential equations have been developed in the last few years. 
These methods are iterative by nature, with a time variable serving as iteration parameter. For reasons of stability a large number of iterations can be needed which results in a computational complexity that is often too large for interactive image manipulation. Based on a detailed analysis of stationary first order transport equations the current paper develops a fast noniterative method for image inpainting. It traverses the inpainting domain by the fast marching method just once while transporting, along the way, image values in a coherence direction robustly estimated by means of the structure tensor. Depending on a measure of coherence strength the method switches continuously between diffusion and directional transport. It satisfies a comparison principle. Experiments with the inpainting of gray tone and color images show that the novel algorithm meets the high level of quality of the methods of Bertalmio et al. (SIG-GRAPH ’00: Proc. 27th Conf. on Computer Graphics and Interactive Techniques, New Orleans, ACM Press/Addison-Wesley, New York, pp. 417–424, 2000), Masnou (IEEE Trans. Image Process. 11(2):68–76, 2002), and Tschumperlé (Int. J. Comput. Vis. 68(1):65–82, 2006), while being faster by at least an order of magnitude.", "title": "" }, { "docid": "226d8e68f0519ddfc9e288c9151b65f0", "text": "Vector space embeddings can be used as a tool for learning semantic relationships from unstructured text documents. Among others, earlier work has shown how in a vector space of entities (e.g. different movies) fine-grained semantic relationships can be identified with directions (e.g. more violent than). In this paper, we use stacked denoising auto-encoders to obtain a sequence of entity embeddings that model increasingly abstract relationships. After identifying directions that model salient properties of entities in each of these vector spaces, we induce symbolic rules that relate specific properties to more general ones. We provide illustrative examples to demonstrate the potential of this ap-", "title": "" }, { "docid": "28fcdd3282dd57c760e9e2628764c0f8", "text": "Constructing a valid measure of presence and discovering the factors that contribute to presence have been much sought after goals of presence researchers and at times have generated controversy among them. This paper describes the results of principal-components analyses of Presence Questionnaire (PQ) data from 325 participants following exposure to immersive virtual environments. The analyses suggest that a 4-factor model provides the best fit to our data. The factors are Involvement, Adaptation/Immersion, Sensory Fidelity, and Interface Quality. Except for the Adaptation/Immersion factor, these factors corresponded to those identified in a cluster analysis of data from an earlier version of the questionnaire. The existence of an Adaptation/Immersion factor leads us to postulate that immersion is greater for those individuals who rapidly and easily adapt to the virtual environment. The magnitudes of the correlations among the factors indicate moderately strong relationships among the 4 factors. 
Within these relationships, Sensory Fidelity items seem to be more closely related to Involvement, whereas Interface Quality items appear to be more closely related to Adaptation/Immersion, even though there is a moderately strong relationship between the Involvement and Adaptation/Immersion factors.", "title": "" }, { "docid": "9f60376e3371ac489b4af90026041fa7", "text": "There is a substantive body of research focusing on women's experiences of intimate partner violence (IPV), but a lack of qualitative studies focusing on men's experiences as victims of IPV. This article addresses this gap in the literature by paying particular attention to hegemonic masculinities and men's perceptions of IPV. Men (N = 9) participated in in-depth interviews. Interview data were rigorously subjected to thematic analysis, which revealed five key themes in the men's narratives: fear of IPV, maintaining power and control, victimization as a forbidden narrative, critical understanding of IPV, and breaking the silence. Although the men share similar stories of victimization as women, the way this is influenced by their gendered histories is different. While some men reveal a willingness to disclose their victimization and share similar fear to women victims, others reframe their victim status in a way that sustains their own power and control. The men also draw attention to the contextual realities that frame abuse, including histories of violence against the women who used violence and the realities of communities suffering intergenerational effects of colonized histories. The findings reinforce the importance of in-depth qualitative work toward revealing the context of violence, understanding the impact of fear, victimization, and power/control on men's mental health as well as the outcome of legal and support services and lack thereof. A critical discussion regarding the gendered context of violence, power within relationships, and addressing men's need for support without redefining victimization or taking away from policies and support for women's ongoing victimization concludes the work.", "title": "" }, { "docid": "e87e0a99e38b1464b7a0e875fb38b799", "text": "Software static analysis is one of many options for finding bugs in software. Like compilers, static analyzers take a program as input. This paper covers tools that examine source code without executing it and output bug reports. Static analysis is a complex and generally undecidable problem. Most tools resort to approximation to overcome these obstacles, and this sometimes leads to incorrect results. Therefore, tool effectiveness needs to be evaluated. Several characteristics of the tools should be examined. First, what types of bugs can they find? Second, what proportion of bugs do they report? Third, what percentage of findings is correct? These questions can be answered by one or more metrics. But to calculate these, we need test cases having certain characteristics: statistical significance, ground truth, and relevance. Test cases with all three attributes are out of reach, but we can use combinations of only two to calculate the metrics.\n The results in this paper were collected during Static Analysis Tool Exposition (SATE) V, where participants ran 14 static analyzers on the test sets we provided and submitted their reports to us for analysis. Tools had considerably different support for most bug classes. 
Some tools discovered significantly more bugs than others or generated mostly accurate warnings, while others reported wrong findings more frequently. Using the metrics, an evaluator can compare candidates and select the tool that aligns best with his or her objectives. In addition, our results confirm that the bugs most commonly found by tools are among the most common and important bugs in software. We also observed that code complexity is a major hindrance for static analyzers and detailed which code constructs tools handle well and which impede their analysis.", "title": "" }, { "docid": "834a3d2e0f64d866c405fe4725bca437", "text": "In modern information processing tasks we often need to deal with complex and “multi-view” data, that come from multiple sources of information with structures behind them. For example, observations about the same set of entities are often made from different angles, or, individual observations collected by different sensors are intrinsically related. This poses new challenges in developing novel signal processing and machine learning techniques to handle such data efficiently. In this thesis, we formulate the processing and analysis of multi-view data as signal processing and machine learning problems defined on graphs. Graphs are appealing mathematical tools for modeling pairwise relationships between entities in datasets. Moreover, they are flexible and adaptable to incorporate multiple sources of information with structures. For instance, multiple types of relationships between the same entities in a certain set can be modeled as a multi-layer graph, where the layers share the same set of vertices (entities) but have different edges (relationships) between them. Alternatively, information from one source can be modeled as signals defined on the vertex set of a graph that corresponds to another source. While the former setting is an extension of the classical graph-based learning in machine learning, the latter leads to the emerging research field of signal processing on graphs. In this thesis, we bridge the gap between the two fields by studying several problems related to the clustering, classification and representation of data of various forms associated with weighted and undirected graphs. First, we address the problem of analyzing multi-layer graphs and propose methods for clustering the vertices by efficiently combining the information provided by the multiple layers. To this end, we propose to combine the characteristics of individual graph layers using tools from subspace analysis on a Grassmann manifold. The resulting combination can be viewed as a low dimensional representation of the original data, which preserves the most important information from diverse relationships between entities. We use our algorithm in clustering methods and demonstrate that the proposed method is superior to baseline schemes and competitive to state-of-the-art techniques. Next, we approach the problem of combining different layers in a multi-layer graph from a different perspective. Specifically, we consider the eigenvectors of the graph Laplacian matrix of one layer as signals defined on the vertex set of another layer. We propose a novel method based on a graph regularization framework, which produces a set of “joint eigenvectors”, or a “joint spectrum”, shared by the two layers. We use this joint spectrum in clustering problems on multilayer graphs. 
Compared to our previous approach and most of the state-of-the-art techniques, a unique characteristic and potential advantage of this method is that, it allows us to combine individual layers based on their respective importance in a convincing way. Third, we build on the setting in the second approach and study in general the classification problem of signals on graphs. To this end, we adopt efficient signal representations defined in", "title": "" }, { "docid": "e11bf8903ea7b6e5b7ad384451178c92", "text": "The increasing availability of online information has triggered an intensive research in the area of automatic text summarization within the Natural Language Processing (NLP). Text summarization reduces the text by removing the less useful information which helps the reader to find the required information quickly. There are many kinds of algorithms that can be used to summarize the text. One of them is TF-IDF (Term Frequency-Inverse Document Frequency). This research aimed to produce an automatic text summarizer implemented with TF-IDF algorithm and to compare it with other various online source of automatic text summarizer. To evaluate the summary produced from each summarizer, The F-Measure as the standard comparison value had been used. The result of this research produces 67% of accuracy with three data samples which are higher compared to the other online summarizers.", "title": "" }, { "docid": "48393a47c0f977c77ef346ef2432e8f5", "text": "Information Systems researchers and technologists have built and investigated Decision Support Systems (DSS) for almost 40 years. This article is a narrative overview of the history of Decision Support Systems (DSS) and a means of gathering more first-hand accounts about the history of DSS. Readers are asked to comment upon the stimulus narrative titled “A Brief History of Decision Support Systems” that has been read by thousands of visitors to DSSResources.COM. Also, the stimulus narrative has been reviewed by a number of key actors who created the history of DSS. The narrative is divided into four sections: The Early Years – 1964-1975; Developing DSS Theory – 1976-1982; Expanding the Scope of Decision Support – 1979-1989; and A Technology Shift – 1990-1995.", "title": "" }, { "docid": "6d5480bf1ee5d401e39f5e65d0aaba25", "text": "Engagement is a key reason for introducing gamification to learning and thus serves as an important measurement of its effectiveness. Based on a literature review and meta-synthesis, this paper proposes a comprehensive framework of engagement in gamification for learning. The framework sketches out the connections among gamification strategies, dimensions of engagement, and the ultimate learning outcome. It also elicits other task - and user - related factors that may potentially impact the effect of gamification on learner engagement. To verify and further strengthen the framework, we conducted a user study to demonstrate that: 1) different gamification strategies can trigger different facets of engagement; 2) the three dimensions of engagement have varying effects on skill acquisition and transfer; and 3) task nature and learner characteristics that were overlooked in previous studies can influence the engagement process. 
Our framework provides an in-depth understanding of the mechanism of gamification for learning, and can serve as a theoretical foundation for future research and design.", "title": "" }, { "docid": "5ec47bf6ab665012fc321e41634c8b7b", "text": "This paper presents extensive indoor radio propagation characteristics for a 28 GHz office environment. Full 3D ray tracing simulation results are compared with measurement results and show high correlation. Means of differences between simulation and measurement are 5.13 dB for antenna 1 and 4.51 dB for antenna 2, and standard deviations are 4.03 dB and 3.11 dB. Furthermore, novel passive repeaters in both indoor and outdoor environments are presented and compared. The ray tracing simulation procedures for repeaters are introduced and the simulation results are well matched with measured results.", "title": "" } ]
scidocsrr
abb412e13755fcc8d9414488bd32e157
Semi-supervised Learning with Deep Generative Models for Asset Failure Prediction
[ { "docid": "1eb2aaf3e7b2f98e84105405b123fa7e", "text": "Prognostics technique aims to accurately estimate the Remaining Useful Life (RUL) of a subsystem or a component using sensor data, which has many real world applications. However, many of the existing algorithms are based on linear models, which cannot capture the complex relationship between the sensor data and RUL. Although Multilayer Perceptron (MLP) has been applied to predict RUL, it cannot learn salient features automatically, because of its network structure. A novel deep Convolutional Neural Network (CNN) based regression approach for estimating the RUL is proposed in this paper. Although CNN has been applied on tasks such as computer vision, natural language processing, speech recognition etc., this is the first attempt to adopt CNN for RUL estimation in prognostics. Different from the existing CNN structure for computer vision, the convolution and pooling filters in our approach are applied along the temporal dimension over the multi-channel sensor data to incorporate automated feature learning from raw sensor signals in a systematic way. Through the deep architecture, the learned features are the higher-level abstract representation of low-level raw sensor signals. Furthermore, feature learning and RUL estimation are mutually enhanced by the supervised feedback. We compared with several state-of-the-art algorithms on two publicly available data sets to evaluate the effectiveness of this proposed approach. The encouraging results demonstrate that our proposed deep convolutional neural network based regression approach for RUL estimation is not only more efficient but also more accurate.", "title": "" } ]
[ { "docid": "b64bc3ef968ae965e414468759f7943c", "text": "Pulse width modulation (PWM) techniques can be classified into continuous pulse width modulation (CPWM) and discontinuous pulse width modulation (DPWM) types. The switching loss of power devices in DPWM converters is lower than that in CPWM converters. Lower loss could reduce the junction temperature fluctuation in converters of wind turbine generator system (WTGS) and may result in longer power devices lifetime. However, employing DPWM scheme under all WTGS operation conditions will lead to power quality concern. To solve this problem, a new hybrid modulation scheme which combines the CPWM and DPWM methods for WTGS converters is presented in this paper. In the presented hybrid modulation method, two modulation schemes are switched back and forth according to the wind speed in the wind farm site. The performance of the presented modulation scheme is verified and compared with that of other PWM schemes through a case study of 1.2 MW WTGS in long-term mission profiles. The results show that the lifetime of power devices with the presented hybrid approach is longer than that with the CPWM, and is shorter than that with the DPWMs. Moreover, the power quality of the power converters with the hybrid modulation scheme can be guaranteed in all operation conditions, which may not be achieved with DPWMs.", "title": "" }, { "docid": "6496a3b7bd653cc9a73286e55caf3b28", "text": "Motivated by the real-world application of categorizing system log messages into defined situation categories, this paper describes an interactive text categorization method, PICCIL, that leverages supervised machine learning to reduce the burden of assigning categories to documents in large finite data sets but, by coupling human expertise to the machine learning, does so without sacrificing accuracy. PICCIL uses keywords and keyword rules both to preclassify documents and to assist in the manual process of grouping and reviewing documents. The reviewed documents, in turn, are used to refine the keyword rules iteratively to improve subsequent grouping and document review. We apply PICCIL to the problem of assigning semantic situation labels to the entries of a catalog of log events to support on-line labeling of log events", "title": "" }, { "docid": "b97ce684c08bf00147b9e16f2e489dd2", "text": "Thyroid storm, an endocrine emergency first described in 1926, remains a diagnostic and therapeutic challenge. No laboratory abnormalities are specific to thyroid storm, and the available scoring system is based on the clinical criteria. The exact mechanisms underlying the development of thyroid storm from uncomplicated hyperthyroidism are not well understood. A heightened response to thyroid hormone is often incriminated along with increased or abrupt availability of free hormones. Patients exhibit exaggerated signs and symptoms of hyperthyroidism and varying degrees of organ decompensation. Treatment should be initiated promptly targeting all steps of thyroid hormone formation, release, and action. Patients who fail medical therapy should be treated with therapeutic plasma exchange or thyroidectomy. The mortality of thyroid storm is currently reported at 10%. 
Patients who have survived thyroid storm should receive definite therapy for their underlying hyperthyroidism to avoid any recurrence of this potentially fatal condition.", "title": "" }, { "docid": "c319111c7ed9e816ba8db253cf9a5bcd", "text": "Soft actuators made of highly elastic polymers allow novel robotic system designs, yet application-specific soft robotic systems are rarely reported. Taking notice of the characteristics of soft pneumatic actuators (SPAs) such as high customizability and low inherent stiffness, we report in this work the use of soft pneumatic actuators for a biomedical use - the development of a soft robot for rodents, aimed to provide a physical assistance during gait rehabilitation of a spinalized animal. The design requirements to perform this unconventional task are introduced. Customized soft actuators, soft joints and soft couplings for the robot are presented. Live animal experiment was performed to evaluate and show the potential of SPAs for their use in the current and future biomedical applications.", "title": "" }, { "docid": "7e2ba771e25a2e6716ce59522ace2835", "text": "Online debate sites are a large source of informal and opinion-sharing dialogue on current socio-political issues. Inferring users’ stance (PRO or CON) towards discussion topics in domains such as politics or news is an important problem, and is of utility to researchers, government organizations, and companies. Predicting users’ stance supports identification of social and political groups, building of better recommender systems, and personalization of users’ information preferences to their ideological beliefs. In this paper, we develop a novel collective classification approach to stance classification, which makes use of both structural and linguistic features, and which collectively labels the posts’ stance across a network of the users’ posts. We identify both linguistic features of the posts and features that capture the underlying relationships between posts and users. We use probabilistic soft logic (PSL) (Bach et al., 2013) to model post stance by leveraging both these local linguistic features as well as the observed network structure of the posts to reason over the dataset. We evaluate our approach on 4FORUMS (Walker et al., 2012b), a collection of discussions from an online debate site on issues ranging from gun control to gay marriage. We show that our collective classification model is able to easily incorporate rich, relational information and outperforms a local model which uses only linguistic information.", "title": "" }, { "docid": "72682ac5c2ec0a1ad1f211f3de562062", "text": "Red blood cell (RBC) aggregation is greatly affected by cell deformability and reduced deformability and increased RBC aggregation are frequently observed in hypertension, diabetes mellitus, and sepsis, thus measurement of both these parameters is essential. In this study, we investigated the effects of cell deformability and fibrinogen concentration on disaggregating shear stress (DSS). The DSS was measured with varying cell deformability and geometry. The deformability of cells was gradually decreased with increasing concentration of glutaraldehyde (0.001~0.005%) or heat treatment at 49.0°C for increasing time intervals (0~7 min), which resulted in a progressive increase in the DSS. However, RBC rigidification by either glutaraldehyde or heat treatment did not cause the same effect on RBC aggregation as deformability did. 
The effect of cell deformability on DSS was significantly increased with an increase in fibrinogen concentration (2~6 g/L). These results imply that reduced cell deformability and increased fibrinogen levels play a synergistic role in increasing DSS, which could be used as a novel independent hemorheological index to characterize microcirculatory diseases, such as diabetic complications with high sensitivity.", "title": "" }, { "docid": "5b34624e72b1ed936ddca775cca329ca", "text": "The advent of Cloud computing as a newmodel of service provisioning in distributed systems encourages researchers to investigate its benefits and drawbacks on executing scientific applications such as workflows. One of the most challenging problems in Clouds is workflow scheduling, i.e., the problem of satisfying the QoS requirements of the user as well as minimizing the cost of workflow execution. We have previously designed and analyzed a two-phase scheduling algorithm for utility Grids, called Partial Critical Paths (PCP), which aims to minimize the cost of workflow execution while meeting a userdefined deadline. However, we believe Clouds are different from utility Grids in three ways: on-demand resource provisioning, homogeneous networks, and the pay-as-you-go pricing model. In this paper, we adapt the PCP algorithm for the Cloud environment and propose two workflow scheduling algorithms: a one-phase algorithmwhich is called IaaS Cloud Partial Critical Paths (IC-PCP), and a two-phase algorithm which is called IaaS Cloud Partial Critical Paths with Deadline Distribution (IC-PCPD2). Both algorithms have a polynomial time complexity which make them suitable options for scheduling large workflows. The simulation results show that both algorithms have a promising performance, with IC-PCP performing better than IC-PCPD2 in most cases. © 2012 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "3489e9d639223116cb4681959928a198", "text": "The prevailing concept in modern cognitive neuroscience is that cognitive functions are performed predominantly at the network level, whereas the role of individual neurons is unlikely to extend beyond forming the simple basic elements of these networks. Within this conceptual framework, individuals of outstanding cognitive abilities appear as a result of a favorable configuration of the microarchitecture of the cognitive-implicated networks, whose final formation in ontogenesis may occur in a relatively random way. Here I suggest an alternative concept, which is based on neurological data and on data from human behavioral genetics. I hypothesize that cognitive functions are performed mainly at the intracellular, probably at the molecular level. Central to this hypothesis is the idea that the neurons forming the networks involved in cognitive processes are complex elements whose functions are not limited to generating electrical potentials and releasing neurotransmitters. According to this hypothesis, individuals of outstanding abilities are so due to a ‘lucky’ combination of specific genes that determine the intrinsic properties of neurons involved in cognitive functions of the brain.", "title": "" }, { "docid": "8ac0bb34c0c393dddf91e81182632551", "text": "The choice of activation functions in deep networks has a significant effect on the training dynamics and task performance. Currently, the most successful and widely-used activation function is the Rectified Linear Unit (ReLU). 
Although various hand-designed alternatives to ReLU have been proposed, none have managed to replace it due to inconsistent gains. In this work, we propose to leverage automatic search techniques to discover new activation functions. Using a combination of exhaustive and reinforcement learning-based search, we discover multiple novel activation functions. We verify the effectiveness of the searches by conducting an empirical evaluation with the best discovered activation function. Our experiments show that the best discovered activation function, f(x) = x · sigmoid(βx), which we name Swish, tends to work better than ReLU on deeper models across a number of challenging datasets. For example, simply replacing ReLUs with Swish units improves top-1 classification accuracy on ImageNet by 0.9% for Mobile NASNet-A and 0.6% for Inception-ResNet-v2. The simplicity of Swish and its similarity to ReLU make it easy for practitioners to replace ReLUs with Swish units in any neural network.", "title": "" }, { "docid": "eade87f676c023cd3024226b48131ffb", "text": "Finding the dense regions of a graph and relations among them is a fundamental task in network analysis. Nucleus decomposition is a principled framework of algorithms that generalizes the k-core and k-truss decompositions. It can leverage the higher-order structures to locate the dense subgraphs with hierarchical relations. Computation of the nucleus decomposition is performed in multiple steps, known as the peeling process, and it requires global information about the graph at any time. This prevents the scalable parallelization of the computation. Also, it is not possible to compute approximate and fast results by the peeling process, because it does not produce the densest regions until the algorithm is complete. In a previous work, Lu et al. proposed to iteratively compute the h-indices of vertex degrees to obtain the core numbers and prove that the convergence is obtained after a finite number of iterations. In this work, we generalize the iterative h-index computation for any nucleus decomposition and prove convergence bounds. We present a framework of local algorithms to obtain the exact and approximate nucleus decompositions. Our algorithms are pleasingly parallel and can provide approximations to explore time and quality trade-offs. Our shared-memory implementation verifies the efficiency, scalability, and effectiveness of our algorithms on real-world networks. In particular, using 24 threads, we obtain up to 4.04x and 7.98x speedups for k-truss and (3, 4) nucleus decompositions.", "title": "" }, { "docid": "5a7d3bfaae94ee144153369a5d23a0a4", "text": "This paper introduces a spiking hierarchical model for object recognition which utilizes the precise timing information inherently present in the output of biologically inspired asynchronous address event representation (AER) vision sensors. The asynchronous nature of these systems frees computation and communication from the rigid predetermined timing enforced by system clocks in conventional systems. Freedom from rigid timing constraints opens the possibility of using true timing to our advantage in computation. We show not only how timing can be used in object recognition, but also how it can in fact simplify computation. Specifically, we rely on a simple temporal-winner-take-all rather than more computationally intensive synchronous operations typically used in biologically inspired neural networks for object recognition. 
This approach to visual computation represents a major paradigm shift from conventional clocked systems and can find application in other sensory modalities and computational tasks. We showcase effectiveness of the approach by achieving the highest reported accuracy to date (97.5% ± 3.5%) for a previously published four class card pip recognition task and an accuracy of 84.9% ± 1.9% for a new more difficult 36 class character recognition task.", "title": "" }, { "docid": "a214ed60c288762210189f14a8cf8256", "text": "We propose a CNN-based approach for 3D human body pose estimation from single RGB images that addresses the issue of limited generalizability of models trained solely on the starkly limited publicly available 3D pose data. Using only the existing 3D pose data and 2D pose data, we show state-of-the-art performance on established benchmarks through transfer of learned features, while also generalizing to in-the-wild scenes. We further introduce a new training set for human body pose estimation from monocular images of real humans that has the ground truth captured with a multi-camera marker-less motion capture system. It complements existing corpora with greater diversity in pose, human appearance, clothing, occlusion, and viewpoints, and enables an increased scope of augmentation. We also contribute a new benchmark that covers outdoor and indoor scenes, and demonstrate that our 3D pose dataset shows better in-the-wild performance than existing annotated data, which is further improved in conjunction with transfer learning from 2D pose data. All in all, we argue that the use of transfer learning of representations in tandem with algorithmic and data contributions is crucial for general 3D body pose estimation.", "title": "" }, { "docid": "6101fe189ad6ad7de6723784eec68b42", "text": "We present a novel system for the automatic extraction of the main melody from polyphonic music recordings. Our approach is based on the creation and characterization of pitch contours, time continuous sequences of pitch candidates grouped using auditory streaming cues. We define a set of contour characteristics and show that by studying their distributions we can devise rules to distinguish between melodic and non-melodic contours. This leads to the development of new voicing detection, octave error minimization and melody selection techniques. A comparative evaluation of the proposed approach shows that it outperforms current state-of-the-art melody extraction systems in terms of overall accuracy. Further evaluation of the algorithm is provided in the form of a qualitative error analysis and the study of the effect of key parameters and algorithmic components on system performance. Finally, we conduct a glass ceiling analysis to study the current limitations of the method, and possible directions for future work are proposed.", "title": "" }, { "docid": "ff3f051b9fde8a8e1a877e998851c9ec", "text": "We present an overview and evaluation of a new, systematic approach for generation of highly realistic, annotated synthetic data for training of deep neural networks in computer vision tasks. The main contribution is a procedural world modeling approach enabling high variability coupled with physically accurate image synthesis, and is a departure from the hand-modeled virtual worlds and approximate image synthesis methods used in real-time applications. 
The benefits of our approach include flexible, physically accurate and scalable image synthesis, implicit wide coverage of classes and features, and complete data introspection for annotations, which all contribute to quality and cost efficiency. To evaluate our approach and the efficacy of the resulting data, we use semantic segmentation for autonomous vehicles and robotic navigation as the main application, and we train multiple deep learning architectures using synthetic data with and without fine tuning on organic (i.e. real-world) data. The evaluation shows that our approach improves the neural network’s performance and that even modest implementation efforts produce state-of-the-art results. ∗apostolia.tsirikoglou@liu.se †magnus@7dlabs.com ‡jonas.unger@liu.se", "title": "" }, { "docid": "c993d3a77bcd272e8eadc66155ee15e1", "text": "This paper presents animated pose templates (APTs) for detecting short-term, long-term, and contextual actions from cluttered scenes in videos. Each pose template consists of two components: 1) a shape template with deformable parts represented in an And-node whose appearances are represented by the Histogram of Oriented Gradient (HOG) features, and 2) a motion template specifying the motion of the parts by the Histogram of Optical-Flows (HOF) features. A shape template may have more than one motion template represented by an Or-node. Therefore, each action is defined as a mixture (Or-node) of pose templates in an And-Or tree structure. While this pose template is suitable for detecting short-term action snippets in two to five frames, we extend it in two ways: 1) For long-term actions, we animate the pose templates by adding temporal constraints in a Hidden Markov Model (HMM), and 2) for contextual actions, we treat contextual objects as additional parts of the pose templates and add constraints that encode spatial correlations between parts. To train the model, we manually annotate part locations on several keyframes of each video and cluster them into pose templates using EM. This leaves the unknown parameters for our learning algorithm in two groups: 1) latent variables for the unannotated frames including pose-IDs and part locations, 2) model parameters shared by all training samples such as weights for HOG and HOF features, canonical part locations of each pose, coefficients penalizing pose-transition and part-deformation. To learn these parameters, we introduce a semi-supervised structural SVM algorithm that iterates between two steps: 1) learning (updating) model parameters using labeled data by solving a structural SVM optimization, and 2) imputing missing variables (i.e., detecting actions on unlabeled frames) with parameters learned from the previous step and progressively accepting high-score frames as newly labeled examples. This algorithm belongs to a family of optimization methods known as the Concave-Convex Procedure (CCCP) that converge to a local optimal solution. The inference algorithm consists of two components: 1) Detecting top candidates for the pose templates, and 2) computing the sequence of pose templates. Both are done by dynamic programming or, more precisely, beam search. In experiments, we demonstrate that this method is capable of discovering salient poses of actions as well as interactions with contextual objects. We test our method on several public action data sets and a challenging outdoor contextual action data set collected by ourselves. 
The results show that our model achieves comparable or better performance compared to state-of-the-art methods.", "title": "" }, { "docid": "489a131de4f9fb15e971087387862b87", "text": "AIM\nTo assess caffeine intake habits of Osijek high school students and identify the most important sources of caffeine intake.\n\n\nMETHODS\nAdjusted Wisconsin University Caffeine Consumption Questionnaire was administered to 571 high school students (371 boys and 200 girls in the ninth grade) from Osijek, the largest town in eastern Croatia. The level of caffeine in soft drinks was determined by the high pressure liquid chromatography method, and in chocolate and coffee from the literature data.\n\n\nRESULTS\nOnly 10% of our participants did not use foodstuffs containing caffeine. The intake of caffeine originated from soft drinks (50%), coffee (37%), and chocolate (13%). The mean caffeine concentration in soft drinks was 100-/+26.9 mg/L. The mean estimated caffeine intake was 62.8-/+59.8 mg/day. There was no statistically significant difference between boys and girls in caffeine consumption (1.0-/+0.9 mg/kg bw for boys vs 1.1-/+1.4 mg/kg bw for girls). Daily caffeine intake of 50-100 mg was recorded in 32% of girls and 29% of boys, whereas intake greater than 100 mg/day was recorded in 18% of girls and 25% of boys.\n\n\nCONCLUSION\nSoft drinks containing caffeine were the major source of caffeine intake in high school students. Large-scale public health measures are needed to inform the public on health issues related to excessive intake of caffeine-containing foodstuffs by children and adolescents.", "title": "" }, { "docid": "95410e1bfb8a5f42ff949d061b1cd4b9", "text": "This paper presents a high-level hand feature extraction method for real-time gesture recognition. Firstly, the fingers are modelled as cylindrical objects due to their parallel edge feature. Then a novel algorithm is proposed to directly extract fingers from salient hand edges. Considering the hand geometrical characteristics, the hand posture is segmented and described based on the finger positions, palm center location and wrist position. A weighted radial projection algorithm with the origin at the wrist position is applied to localize each finger. The developed system can not only extract extensional fingers but also flexional fingers with high accuracy. Furthermore, hand rotation and finger angle variation have no effect on the algorithm performance. The orientation of the gesture can be calculated without the aid of arm direction and it would not be disturbed by the bare arm area. Experiments have been performed to demonstrate that the proposed method can directly extract high-level hand feature and estimate hand poses in real-time. & 2015 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "62ea6783f6a3e6429621286b4a1f068d", "text": "Aviation delays inconvenience travelers and result in financial losses for stakeholders. Without complex data pre-processing, delay data collected by the existing IATA delay coding system are inadequate to support advanced delay analytics, e.g. large-scale delay propagation tracing in an airline network. Consequently, we developed three new coding schemes aiming at improving the current IATA system. These schemes were tested with specific analysis tasks using simulated delay data and were benchmarked against the IATA system. 
It was found that a coding scheme with a well-designed reporting style can facilitate automated data analytics and data mining, and an improved grouping of delay codes can minimise potential confusion at the data entry and recording stages. © 2014 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "e46726f1608cb7cd2c08a7ae8ebdb876", "text": "Dual-mode and/or dual-band microwave filters often employ high quality factor (Q ), physically large, and frequency static cavity resonators or low Q, compact, and tunable planar resonators. While each resonator type has advantages, choosing a dual-mode and/or dual-band resonator type is often limited by these extremes. In this paper, a new dual-mode and/or dual-band resonator is shown with Q (360-400) that is higher than that of planar resonators while being frequency tunable (6.7% tuning range) and compact relative to standard cavity resonators. In addition, both degenerate modes of the resonator are tunable using a single actuator. The resonator is used in a single-resonator two-pole filter design and a double-resonator dual-band filter design. An analytical model is developed and design techniques are given for both designs. Measured results confirm that the proposed resonator fits between the design spaces of established dual-mode and/or dual-band resonator types and could find application in systems that require a combination of relatively high Q, tuning capability, and ease of integration.", "title": "" }, { "docid": "4c59e73611e04e830cbc2676a50ec8ca", "text": "This paper proposes a model of neural network which can be used to combine Long Short Term Memory networks (LSTM) with Deep Neural Networks (DNN). Autocorrelation coefficient is added to model to improve the accuracy of prediction model. It can provide better than the other traditional precision of the model. And after considering the autocorrelation features, the neural network of LSTM and DNN has certain advantages in the accuracy of the large granularity data sets. Several experiments were held using real-world data to show effectivity of LSTM model and accuracy were improve with autocorrelation considered.", "title": "" } ]
scidocsrr
7603863e232d4524ad77241726ab3950
Probabilistic text analytics framework for information technology service desk tickets
[ { "docid": "ef08ef786fd759b33a7d323c69be19db", "text": "Language modeling approaches to information retrieval are attractive and promising because they connect the problem of retrieval with that of language model estimation, which has been studied extensively in other application areas such as speech recognition. The basic idea of these approaches is to estimate a language model for each document, and then rank documents by the likelihood of the query according to the estimated language model. A core problem in language model estimation is smoothing, which adjusts the maximum likelihood estimator so as to correct the inaccuracy due to data sparseness. In this paper, we study the problem of language model smoothing and its influence on retrieval performance. We examine the sensitivity of retrieval performance to the smoothing parameters and compare several popular smoothing methods on different test collection.", "title": "" } ]
[ { "docid": "579333c5b2532b0ad04d0e3d14968a54", "text": "We present a learning to rank approach to classify folktales, such as fairy tales and urban legends, according to their story type, a concept that is widely used by folktale researchers to organize and classify folktales. A story type represents a collection of similar stories often with recurring plot and themes. Our work is guided by two frequently used story type classification schemes. Contrary to most information retrieval problems, the text similarity in this problem goes beyond topical similarity. We experiment with approaches inspired by distributed information retrieval and features that compare subject-verb-object triplets. Our system was found to be highly effective compared with a baseline system.", "title": "" }, { "docid": "b3de5fa8a61042c486ca9819448a444d", "text": "This paper proposes a novel optimization algorithm called Hyper-Spherical Search (HSS) algorithm. Like other evolutionary algorithms, the proposed algorithm starts with an initial population. Population individuals are of two types: particles and hyper-sphere centers that all together form particle sets. Searching the hyper-sphere inner space made by the hyper-sphere center and its particle is the basis of the proposed evolutionary algorithm. The HSS algorithm hopefully converges to a state at which there exists only one hyper-sphere center, and its particles are at the same position and have the same cost function value as the hyper-sphere center. Applying the proposed algorithm to some benchmark cost functions shows its ability in dealing with different types of optimization problems. The proposed method is compared with the genetic algorithm (GA), particle swarm optimization (PSO) and harmony search algorithm (HSA). The results show that the HSS algorithm has faster convergence and results in better solutions than GA, PSO and HSA.", "title": "" }, { "docid": "83adebddcfd162922e55d89bf2dea9e6", "text": "In this paper, we present an orientation inference framework for reconstructing implicit surfaces from unoriented point clouds. The proposed method starts from building a surface approximation hierarchy comprising of a set of unoriented local surfaces, which are represented as a weighted combination of radial basis functions. We formulate the determination of the globally consistent orientation as a graph optimization problem by treating the local implicit patches as nodes. An energy function is defined to penalize inconsistent orientation changes by checking the sign consistency between neighboring local surfaces. An optimal labeling of the graph nodes indicating the orientation of each local surface can, thus, be obtained by minimizing the total energy defined on the graph. The local inference results are propagated over the model in a front-propagation fashion to obtain the global solution. The reconstructed surfaces are consolidated by a simple and effective inspection procedure to locate the erroneously fitted local surfaces. A progressive reconstruction algorithm that iteratively includes more oriented points to improve the fitting accuracy and efficiently updates the RBF coefficients is proposed. 
We demonstrate the performance of the proposed method by showing the surface reconstruction results on some real-world 3-D data sets with comparison to those by using the previous methods.", "title": "" }, { "docid": "897434ecb3fbf9ea6aae02aeca9cc267", "text": "The three stage design of a microstrip slotted holy shaped patch structure intended to serve high frequency applications in the frequency range between 19.52 GHz to 31.5 GHz is proposed in this paper. The geometrical stages use FR4 epoxy substrate with small dimensions of 10 mm × 8.7 mm × 1.6 mm and employ coaxial feeding technique. An analysis of the three design stages has been done over HFSS-15to obtain the corresponding reflection coefficient, bandwidth, radiation pattern, gain and VSWR. The graphical as well as tabulated comparison of the standard parameters has been included in the results section.", "title": "" }, { "docid": "a0e14f5c359de4aa8e7640cf4ff5effa", "text": "In speech translation, we are faced with the problem of how to couple the speech recognition process and the translation process. Starting from the Bayes decision rule for speech translation, we analyze how the interaction between the recognition process and the translation process can be modelled. In the light of this decision rule, we discuss the already existing approaches to speech translation. None of the existing approaches seems to have addressed this direct interaction. We suggest two new methods, the local averaging approximation and the monotone alignments.", "title": "" }, { "docid": "9a5137b87e70af421d93aa7dd70bfacd", "text": "The human immune system has numerous properties that make it ripe for exploitation in the computational domain, such as robustness and fault tolerance, and many different algorithms, collectively termed Artificial Immune Systems (AIS), have been inspired by it. Two generations of AIS are currently in use, with the first generation relying on simplified immune models and the second generation utilising interdisciplinary collaboration to develop a deeper understanding of the immune system and hence produce more complex models. Both generations of algorithms have been successfully applied to a variety of problems, including anomaly detection, pattern recognition, optimisation and robotics. In this chapter an overview of AIS is presented, its evolution is discussed, and it is shown that the diversification of the field is linked to the diversity of the immune system itself, leading to a number of algorithms as opposed to one archetypal system. Two case studies are also presented to help provide insight into the mechanisms of AIS; these are the idiotypic network approach and the Dendritic Cell Algorithm.", "title": "" }, { "docid": "8543e4cd67ef3f23efabd0b130bfe9f9", "text": "A promising way of software reuse is Component-Based Software Development (CBSD). There is an increasing number of OSS products available that can be freely used in product development. However, OSS communities themselves have not yet taken full advantage of the “reuse mechanism”. Many OSS projects duplicate effort and code, even when sharing the same application domain and topic. One successful counter-example is the FFMpeg multimedia project, since several of its components are widely and consistently reused into other OSS projects. This paper documents the history of the libavcodec library of components from the FFMpeg project, which at present is reused in more than 140 OSS projects. 
Most of the recipients use it as a blackbox component, although a number of OSS projects keep a copy of it in their repositories, and modify it as such. In both cases, we argue that libavcodec is a successful example of reusable OSS library of compo-", "title": "" }, { "docid": "7fa9bacbb6b08065ecfe0530f082a391", "text": "This paper considers the task of articulated human pose estimation of multiple people in real world images. We propose an approach that jointly solves the tasks of detection and pose estimation: it infers the number of persons in a scene, identifies occluded body parts, and disambiguates body parts between people in close proximity of each other. This joint formulation is in contrast to previous strategies, that address the problem by first detecting people and subsequently estimating their body pose. We propose a partitioning and labeling formulation of a set of body-part hypotheses generated with CNN-based part detectors. Our formulation, an instance of an integer linear program, implicitly performs non-maximum suppression on the set of part candidates and groups them to form configurations of body parts respecting geometric and appearance constraints. Experiments on four different datasets demonstrate state-of-the-art results for both single person and multi person pose estimation.", "title": "" }, { "docid": "c9e5a1b9c18718cc20344837e10b08f7", "text": "Reconnaissance is the initial and essential phase of a successful advanced persistent threat (APT). In many cases, attackers collect information from social media, such as professional social networks. This information is used to select members that can be exploited to penetrate the organization. Detecting such reconnaissance activity is extremely hard because it is performed outside the organization premises. In this paper, we propose a framework for management of social network honeypots to aid in detection of APTs at the reconnaissance phase. We discuss the challenges that such a framework faces, describe its main components, and present a case study based on the results of a field trial conducted with the cooperation of a large European organization. In the case study, we analyze the deployment process of the social network honeypots and their maintenance in real social networks. The honeypot profiles were successfully assimilated into the organizational social network and received suspicious friend requests and mail messages that revealed basic indications of a potential forthcoming attack. In addition, we explore the behavior of employees in professional social networks, and their resilience and vulnerability toward social network infiltration.", "title": "" }, { "docid": "682921e4e2f000384fdcb9dc6fbaa61a", "text": "The use of Cloud Computing for computation offloading in the robotics area has become a field of interest today. The aim of this work is to demonstrate the viability of cloud offloading in a low level and intensive computing task: a vision-based navigation assistance of a service mobile robot. In order to do so, a prototype, running over a ROS-based mobile robot (Erratic by Videre Design LLC) is presented. The information extracted from on-board stereo cameras will be used by a private cloud platform consisting of five bare-metal nodes with AMD Phenom 965 × 4 CPU, with the cloud middleware Openstack Havana. The actual task is the shared control of the robot teleoperation, that is, the smooth filtering of the teleoperated commands with the detected obstacles to prevent collisions. 
All the possible offloading models for this case are presented and analyzed. Several performance results using different communication technologies and offloading models are explained as well. In addition to this, a real navigation case in a domestic circuit was done. The tests demonstrate that offloading computation to the Cloud improves the performance and navigation results with respect to the case where all processing is done by the robot.", "title": "" }, { "docid": "6deaeb7d3fdb3a9ffce007af333061ac", "text": "This paper proposes a simple CMOS exponential current circuit that is capable to control a Variable Gain Amplifier with a linear-in-dB manner. The proposed implementation is based on a Taylor's series approximation of the exponential function. A simple VGA architecture has been designed in a CMOS 90nm technology, in order to validate the theoretical analysis. The approximation achieves a 17dB linear range with less than 0.5dB approximation error, while the overall power consumption is less than 300μW.", "title": "" }, { "docid": "977efac2809f4dc455e1289ef54008b0", "text": "A novel 3-D NAND flash memory device, VSAT (Vertical-Stacked-Array-Transistor), has successfully been achieved. The VSAT was realized through a cost-effective and straightforward process called PIPE (planarized-Integration-on-the-same-plane). The VSAT combined with PIPE forms a unique 3-D vertical integration method that may be exploited for ultra-high-density Flash memory chip and solid-state-drive (SSD) applications. The off-current level in the polysilicon-channel transistor dramatically decreases by five orders of magnitude by using an ultra-thin body of 20 nm thick and a double-gate-in-series structure. In addition, hydrogen annealing improves the subthreshold swing and the mobility of the polysilicon-channel transistor.", "title": "" }, { "docid": "3dfe5099c72f3ef3341c2d053ee0d2c2", "text": "In this paper, the authors introduce a type of transverse flux reluctance machines. These machines work without permanent magnets or electric rotor excitation and hold several advantages, including a high power density, high torque, and compact design. Disadvantages are a high fundamental frequency and a high torque ripple that complicates the control of the motor. The device uses soft magnetic composites (SMCs) for the magnetic circuit, which allows complex stator geometries with 3-D magnetic flux paths. The winding is made from hollow copper tubes, which also form the main heat sink of the machine by using water as a direct copper coolant. Models concerning the design and computation of the magnetic circuit, torque, and the power output are described. A crucial point in this paper is the determination of hysteresis and eddy-current losses in the SMC and the calculation of power losses and current displacement in the copper winding. These are calculated with models utilizing a combination of analytic approaches and finite-element method simulations. Finally, a thermal model based on lumped parameters is introduced, and calculated temperature rises are presented.", "title": "" }, { "docid": "8fd79b51fd744b675751c45cc0256787", "text": "New grid codes demand the wind turbine systems to ride through recurring grid faults. In this paper, the performance of the doubly Ffed induction generator (DFIG) wind turbine system under recurring symmetrical grid faults is analyzed. The mathematical model of the DFIG under recurring symmetrical grid faults is established. 
The analysis is based on the DFIG wind turbine system with the typical low-voltage ride-through strategy-with rotor-side crowbar. The stator natural flux produced by the voltage recovery after the first grid fault may be superposed on the stator natural flux produced by the second grid fault, so that the transient rotor and stator current and torque fluctuations under the second grid fault may be influenced by the characteristic of the first grid fault, including the voltage dips level and the grid fault angle, as well as the duration between two faults. The mathematical model of the DFIG under recurring grid faults is verified by simulations on a 1.5-MW DFIG wind turbine system model and experiments on a 30-kW reduced scale DFIG test system.", "title": "" }, { "docid": "ca4e2cff91621bca4018ce1eca5450e2", "text": "Decentralized optimization algorithms have received much attention due to the recent advances in network information processing. However, conventional decentralized algorithms based on projected gradient descent are incapable of handling high-dimensional constrained problems, as the projection step becomes computationally prohibitive. To address this problem, this paper adopts a projection-free optimization approach, a.k.a. the Frank–Wolfe (FW) or conditional gradient algorithm. We first develop a decentralized FW (DeFW) algorithm from the classical FW algorithm. The convergence of the proposed algorithm is studied by viewing the decentralized algorithm as an <italic>inexact </italic> FW algorithm. Using a diminishing step size rule and letting <inline-formula><tex-math notation=\"LaTeX\">$t$ </tex-math></inline-formula> be the iteration number, we show that the DeFW algorithm's convergence rate is <inline-formula><tex-math notation=\"LaTeX\">${\\mathcal O}(1/t)$</tex-math></inline-formula> for convex objectives; is <inline-formula><tex-math notation=\"LaTeX\">${\\mathcal O}(1/t^2)$</tex-math></inline-formula> for strongly convex objectives with the optimal solution in the interior of the constraint set; and is <inline-formula> <tex-math notation=\"LaTeX\">${\\mathcal O}(1/\\sqrt{t})$</tex-math></inline-formula> toward a stationary point for smooth but nonconvex objectives. We then show that a consensus-based DeFW algorithm meets the above guarantees with two communication rounds per iteration. We demonstrate the advantages of the proposed DeFW algorithm on low-complexity robust matrix completion and communication efficient sparse learning. Numerical results on synthetic and real data are presented to support our findings.", "title": "" }, { "docid": "d8c45560377ac2774b1bbe8b8a61b1fb", "text": "Markov logic uses weighted formulas to compactly encode a probability distribution over possible worlds. Despite the use of logical formulas, Markov logic networks (MLNs) can be difficult to interpret, due to the often counter-intuitive meaning of their weights. To address this issue, we propose a method to construct a possibilistic logic theory that exactly captures what can be derived from a given MLN using maximum a posteriori (MAP) inference. Unfortunately, the size of this theory is exponential in general. We therefore also propose two methods which can derive compact theories that still capture MAP inference, but only for specific types of evidence. 
These theories can be used, among others, to make explicit the hidden assumptions underlying an MLN or to explain the predictions it makes.", "title": "" }, { "docid": "70ea4bbe03f2f733ff995dc4e8fea920", "text": "The spread of malicious or accidental misinformation in social media, especially in time-sensitive situations, such as real-world emergencies, can have harmful effects on individuals and society. In this work, we developed models for automated verification of rumors (unverified information) that propagate through Twitter. To predict the veracity of rumors, we identified salient features of rumors by examining three aspects of information spread: linguistic style used to express rumors, characteristics of people involved in propagating information, and network propagation dynamics. The predicted veracity of a time series of these features extracted from a rumor (a collection of tweets) is generated using Hidden Markov Models. The verification algorithm was trained and tested on 209 rumors representing 938,806 tweets collected from real-world events, including the 2013 Boston Marathon bombings, the 2014 Ferguson unrest, and the 2014 Ebola epidemic, and many other rumors about various real-world events reported on popular websites that document public rumors. The algorithm was able to correctly predict the veracity of 75% of the rumors faster than any other public source, including journalists and law enforcement officials. The ability to track rumors and predict their outcomes may have practical applications for news consumers, financial markets, journalists, and emergency services, and more generally to help minimize the impact of false information on Twitter.", "title": "" }, { "docid": "26a9fb64389a5dbbbd8afdc6af0b6f07", "text": "specifications of the essential structure of a system. Models in the analysis or preliminary design stages focus on the key concepts and mechanisms of the eventual system. They correspond in certain ways with the final system. But details are missing from the model, which must be added explicitly during the design process. The purpose of the abstract models is to get the high-level pervasive issues correct before tackling the more localized details. These models are intended to be evolved into the final models by a careful process that guarantees that the final system correctly implements the intent of the earlier models. There must be traceability from these essential models to the full models; otherwise, there is no assurance that the final system correctly incorporates the key properties that the essential model sought to show. Essential models focus on semantic intent. They do not need the full range of implementation options. Indeed, low-level performance distinctions often obscure the logical semantics. The path from an essential model to a complete implementation model must be clear and straightforward, however, whether it is generated automatically by a code generator or evolved manually by a designer. Full specifications of a final system. An implementation model includes enough information to build the system. It must include not only the logical semantics of the system and the algorithms, data structures, and mechanisms that ensure proper performance, but also organizational decisions about the system artifacts that are necessary for cooperative work by humans and processing by tools. This kind of model must include constructs for packaging the model for human understanding and for computer convenience. 
These are not properties of the target application itself. Rather, they are properties of the construction process. Exemplars of typical or possible systems. Well-chosen examples can give insight to humans and can validate system specifications and implementations. Even a large Chapter 2 • The Nature and Purpose of Models 17 collection of examples, however, necessarily falls short of a definitive description. Ultimately, we need models that specify the general case; that is what a program is, after all. Examples of typical data structures, interaction sequences, or object histories can help a human trying to understand a complicated situation, however. Examples must be used with some care. It is logically impossible to induce the general case from a set of examples, but well-chosen prototypes are the way most people think. An example model includes instances rather than general descriptors. It therefore tends to have a different feel than a generic descriptive model. Example models usually use only a subset of the UML constructs, those that deal with instances. Both descriptive models and exemplar models are useful in modeling a system. Complete or partial descriptions of systems. A model can be a complete description of a single system with no outside references. More often, it is organized as a set of distinct, discrete units, each of which may be stored and manipulated separately as a part of the entire description. Such models have “loose ends” that must be bound to other models in a complete system. Because the pieces have coherence and meaning, they can be combined with other pieces in various ways to produce many different systems. Achieving reuse is an important goal of good modeling. Models evolve over time. Models with greater degrees of detail are derived from more abstract models, and more concrete models are derived from more logical models. For example, a model might start as a high-level view of the entire system, with a few key services in brief detail and no embellishments. Over time, much more detail is added and variations are introduced. Also over time, the focus shifts from a front-end, user-centered logical view to a back-end, implementationcentered physical view. As the developers work with a system and understand it better, the model must be iterated at all levels to capture that understanding; it is impossible to understand a large system in a single, linear pass. There is no one “right” form for a model.", "title": "" }, { "docid": "762197e61c90492d2d405fe2a832092f", "text": "This paper proposes a methodology to design and optimize the footprint of miniaturized 3-dB branch-line hybrid couplers, which consists of high-impedance transmission lines and distributed capacitors. To minimize the physical size of the coupler, the distributed capacitors are placed within the empty space of the hybrid. The proposed design methodology calls for the joint optimization of the length of the reduced high-impedance transmission lines and the area of the distributed capacitors. A prototype at S-band was designed and built to validate the approach. It showed a size reduction by 62% compared with the conventional 3-dB branch-line hybrid coupler while providing similar performance and bandwidth.", "title": "" }, { "docid": "cca9972ce9d49d1347274b446e6be00b", "text": "Miura folding is famous all over the world. It is an element of the ancient Japanese tradition of origami and reaches as far as astronautical engineering through the construction of solar panels. 
This article explains how to achieve the Miura folding, and describes its application to maps. The author also suggests in this context that nature may abhor the right angle, according to observation of the wing base of a dragonfly. AMS Subject Classification: 51M05, 00A09, 97A20", "title": "" } ]
scidocsrr
f37802285fe1c5aa36f12e3d75f9a9ce
Active sample selection in scalar fields exhibiting non-stationary noise with parametric heteroscedastic Gaussian process regression
[ { "docid": "444e84c8c46c066b0a78ad4a743a9c78", "text": "This paper presents a novel Gaussian process (GP) approach to regression with input-dependent noise rates. We follow Goldberg et al.'s approach and model the noise variance using a second GP in addition to the GP governing the noise-free output value. In contrast to Goldberg et al., however, we do not use a Markov chain Monte Carlo method to approximate the posterior noise variance but a most likely noise approach. The resulting model is easy to implement and can directly be used in combination with various existing extensions of the standard GPs such as sparse approximations. Extensive experiments on both synthetic and real-world data, including a challenging perception problem in robotics, show the effectiveness of most likely heteroscedastic GP regression.", "title": "" }, { "docid": "528d0d198bb092ece6f824d4e1912bcd", "text": "Monitoring marine ecosystems is challenging due to the dynamic and unpredictable nature of environmental phenomena. In this work we survey a series of techniques used in information gathering that can be used to increase experts' understanding of marine ecosystems through dynamic monitoring. To achieve this, an underwater glider simulator is constructed, and four different path planning algorithms are investigated: Boustrophendon paths, a gradient based approach, a Level-Sets method, and Sequential Bayesian Optimization. Each planner attempts to maximize the time the glider spends in an area where ocean variables are above a threshold value of interest. To emulate marine ecosystem sensor data, ocean temperatures are used. The planners are simulated 50 times each at random starting times and locations. After validation through simulation, we show that informed decision making improves performance, but more accurate prediction of ocean conditions would be necessary to benefit from long horizon lookahead planning.", "title": "" } ]
[ { "docid": "3cfa80815c0e4835e4e081348717459a", "text": "β-defensins are small cationic peptides, with potent immunoregulatory and antimicrobial activity which are produced constitutively and inducibly by eukaryotic cells. This study profiles the expression of a cluster of 19 novel defensin genes which spans 320 kb on chromosome 13 in Bos taurus. It also assesses the genetic variation in these genes between two divergently selected cattle breeds. Using quantitative real-time PCR (qRT-PCR), all 19 genes in this cluster were shown to be expressed in the male genital tract and 9 in the female genital tract, in a region-specific manner. These genes were sequenced in Norwegian Red (NR) and Holstein-Friesian (HF) cattle for population genetic analysis. Of the 17 novel single nucleotide polymorphisms (SNPs) identified, 7 were non-synonymous, 6 synonymous and 4 outside the protein coding region. Significant frequency differences in SNPs in bovine β-defensins (BBD) 115, 117, 121, and 122 were detected between the two breeds, which was also reflected at the haplotype level (P < 0.05). There was clear segregation of the haplotypes into two blocks on chromosome 13 in both breeds, presumably due to historical recombination. This study documents genetic variation in this β-defensin gene cluster between Norwegian Red and Holstein-Friesian cattle which may result from divergent selection for production and fertility traits in these two breeds. Regional expression in the epididymis and fallopian tube suggests a potential reproductive-immunobiology role for these genes in cattle.", "title": "" }, { "docid": "f81cd7e1cfbfc15992fba9368c1df30b", "text": "The most challenging issue of conventional Time Amplifiers (TAs) is their limited Dynamic Range (DR). This paper presents a mathematical analysis to clarify principle of operation of conventional 2× TA's. The mathematical derivations release strength reduction of the current sources of the TA is the simplest way to increase DR. Besides, a new technique is presented to expand the Dynamic Range (DR) of conventional 2× TAs. Proposed technique employs current subtraction in place of changing strength of current sources using conventional gain compensation methods, which results in more stable gain over a wider DR. The TA is simulated using Spectre-rf in TSMC 0.18um COMS technology. DR of the 2× TA is expanded to 300ps only with 9% gain error while it consumes only 28uW from a 1.2V supply voltage.", "title": "" }, { "docid": "969a8e447fb70d22a7cbabe7fc47a9c9", "text": "A novel multi-level AC six-phase motor drive is developed in this paper. The scheme is based on three conventional 2-level three-phase voltage source inverters (VSIs) supplying the open-end windings of a dual three-phase motor (six-phase induction machine). The proposed inverter is capable of supply the machine with multi-level voltage waveforms. The developed system is compared with the conventional solution and it is demonstrated that the drive system permits to reduce the harmonic distortion of the machine currents, to reduce the total semiconductor losses and to decrease the power processed by converter switches. The system model and the Pulse-Width Modulation (PWM) strategy are presented. 
The experimental verification was obtained by using IGBTs with dedicated drives and a digital signal processor (DSP) with plug-in boards and sensors.", "title": "" }, { "docid": "d9a9339672121fb6c3baeb51f11bfcd8", "text": "The VISION (video indexing for searching over networks) digital video library system has been developed in our laboratory as a testbed for evaluating automatic and comprehensive mechanisms for video archive creation and content-based search, ®ltering and retrieval of video over local and wide area networks. In order to provide access to video footage within seconds of broadcast, we have developed a new pipelined digital video processing architecture which is capable of digitizing, processing, indexing and compressing video in real time on an inexpensive general purpose computer. These videos were automatically partitioned into short scenes using video, audio and closed-caption information. The resulting scenes are indexed based on their captions and stored in a multimedia database. A clientserver-based graphical user interface was developed to enable users to remotely search this archive and view selected video segments over networks of di€erent bandwidths. Additionally, VISION classi®es the incoming videos with respect to a taxonomy of categories and will selectively send users videos which match their individual pro®les. # 1999 Elsevier Science Ltd. All rights reserved.", "title": "" }, { "docid": "7e4a222322346abc281d72534902d707", "text": "Humic substances (HS) have been widely recognized as a plant growth promoter mainly by changes on root architecture and growth dynamics, which result in increased root size, branching and/or greater density of root hair with larger surface area. Stimulation of the H+-ATPase activity in cell membrane suggests that modifications brought about by HS are not only restricted to root structure, but are also extended to the major biochemical pathways since the driving force for most nutrient uptake is the electrochemical gradient across the plasma membrane. Changes on root exudation profile, as well as primary and secondary metabolism were also observed, though strongly dependent on environment conditions, type of plant and its ontogeny. Proteomics and genomic approaches with diverse plant species subjected to HS treatment had often shown controversial patterns of protein and gene expression. This is a clear indication that HS effects of plants are complex and involve non-linear, cross-interrelated and dynamic processes that need be treated with an interdisciplinary view. Being the humic associations recalcitrant to microbiological attack, their use as vehicle to introduce beneficial selected microorganisms to crops has been proposed. This represents a perspective for a sort of new biofertilizer designed for a sustainable agriculture, whereby plants treated with HS become more susceptible to interact with bioinoculants, while HS may concomitantly modify the structure/activity of the microbial community in the rhizosphere compartment. An enhanced knowledge of the effects on plants physiology and biochemistry and interaction with rhizosphere and endophytic microbes should lead to achieve increased crop productivity through a better use of HS inputs in Agriculture.", "title": "" }, { "docid": "cefabe1b4193483d258739674b53f773", "text": "This paper describes design and development of omnidirectional magnetic climbing robots with high maneuverability for inspection of ferromagnetic 3D human made structures. 
The main focus of this article is design, analysis and implementation of magnetic omnidirectional wheels for climbing robots. We discuss the effect of the associated problems of such wheels, e.g. vibration, on climbing robots. This paper also describes the evolution of magnetic omnidirectional wheels throughout the design and development of several solutions, resulting in lighter and smaller wheels which have less vibration and adapt better to smaller radius structures. These wheels are installed on a chassis which adapts passively to flat and curved structures, enabling the robot to climb and navigate on such structures.", "title": "" }, { "docid": "1ebdcfe9c477e6a29bfce1ddeea960aa", "text": "Bitcoin—a cryptocurrency built on blockchain technology—was the first currency not controlled by a single entity.1 Initially known to a few nerds and criminals,2 bitcoin is now involved in hundreds of thousands of transactions daily. Bitcoin has achieved values of more than US$15,000 per coin (at the end of 2017), and this rising value has attracted attention. For some, bitcoin is digital fool’s gold. For others, its underlying blockchain technology heralds the dawn of a new digital era. Both views could be right. The fortunes of cryptocurrencies don’t define blockchain. Indeed, the biggest effects of blockchain might lie beyond bitcoin, cryptocurrencies, or even the economy. Of course, the technical questions about blockchain have not all been answered. We still struggle to overcome the high levels of processing intensity and energy use. These questions will no doubt be confronted over time. If the technology fails, the future of blockchain will be different. In this article, I’ll assume technical challenges will be solved, and although I’ll cover some technical issues, these aren’t the main focus of this paper. In a 2015 article, “The Trust Machine,” it was argued that the biggest effects of blockchain are on trust.1 The article referred to public trust in economic institutions, that is, that such organizations and intermediaries will act as expected. When they don’t, trust deteriorates. Trust in economic institutions hasn’t recovered from the recession of 2008.3 Technology can exacerbate distrust: online trades with distant counterparties can make it hard to settle disputes face to face. Trusted intermediaries can be hard to find, and that’s where blockchain can play a part. Permanent record-keeping that can be sequentially updated but not erased creates visible footprints of all activities conducted on the chain. This reduces the uncertainty of alternative facts or truths, thus creating the “trust machine” The Economist describes. As trust changes, so too does governance.4 Vitalik Buterin of the Ethereum blockchain platform calls blockchain “a magic computer” to which anyone can upload self-executing programs.5 All states of every Beyond Bitcoin: The Rise of Blockchain World", "title": "" }, { "docid": "061c8e8e9d6a360c36158193afee5276", "text": "Distribution transformers are one of the most important equipment in power network. Because of, the large number of transformers distributed over a wide area in power electric systems, the data acquisition and condition monitoring is a important issue. This paper presents design and implementation of a mobile embedded system and a novel software to monitor and diagnose condition of transformers, by record key operation indictors of a distribution transformer like load currents, transformer oil, ambient temperatures and voltage of three phases. 
The proposed on-line monitoring system integrates a Global Service Mobile (GSM) Modem, with stand alone single chip microcontroller and sensor packages. Data of operation condition of transformer receives in form of SMS (Short Message Service) and will be save in computer server. Using the suggested online monitoring system will help utility operators to keep transformers in service for longer of time.", "title": "" }, { "docid": "3e2df9d6ed3cad12fcfda19d62a0b42e", "text": "We present a conceptually simple, flexible, and general framework for few-shot learning, where a classifier must learn to recognise new classes given only few examples from each. Our method, called the Relation Network (RN), is trained end-to-end from scratch. During meta-learning, it learns to learn a deep distance metric to compare a small number of images within episodes, each of which is designed to simulate the few-shot setting. Once trained, a RN is able to classify images of new classes by computing relation scores between query images and the few examples of each new class without further updating the network. Besides providing improved performance on few-shot learning, our framework is easily extended to zero-shot learning. Extensive experiments on five benchmarks demonstrate that our simple approach provides a unified and effective approach for both of these two tasks.", "title": "" }, { "docid": "f0da127d64aa6e9c87d4af704f049d07", "text": "The introduction of the blue-noise spectra-high-frequency white noise with minimal energy at low frequencies-has had a profound impact on digital halftoning for binary display devices, such as inkjet printers, because it represents an optimal distribution of black and white pixels producing the illusion of a given shade of gray. The blue-noise model, however, does not directly translate to printing with multiple ink intensities. New multilevel printing and display technologies require the development of corresponding quantization algorithms for continuous tone images, namely multitoning. In order to define an optimal distribution of multitone pixels, this paper develops the theory and design of multitone, blue-noise dithering. Here, arbitrary multitone dot patterns are modeled as a layered superposition of stack-constrained binary patterns. Multitone blue-noise exhibits minimum energy at low frequencies and a staircase-like, ascending, spectral pattern at higher frequencies. The optimum spectral profile is described by a set of principal frequencies and amplitudes whose calculation requires the definition of a spectral coherence structure governing the interaction between patterns of dots of different intensities. Efficient algorithms for the generation of multitone, blue-noise dither patterns are also introduced.", "title": "" }, { "docid": "79b91aae9a2911e48026f857e88149f4", "text": "Fine-grained visual recognition is challenging because it highly relies on the modeling of various semantic parts and fine-grained feature learning. Bilinear pooling based models have been shown to be effective at fine-grained recognition, while most previous approaches neglect the fact that inter-layer part feature interaction and fine-grained feature learning are mutually correlated and can reinforce each other. In this paper, we present a novel model to address these issues. First, a crosslayer bilinear pooling approach is proposed to capture the inter-layer part feature relations, which results in superior performance compared with other bilinear pooling based approaches. 
Second, we propose a novel hierarchical bilinear pooling framework to integrate multiple cross-layer bilinear features to enhance their representation capability. Our formulation is intuitive, efficient and achieves state-of-the-art results on the widely used fine-grained recognition datasets.", "title": "" }, { "docid": "8756ef13409ae696ffaf034c873fdaf6", "text": "This paper addresses a data-driven prognostics method for the estimation of the Remaining Useful Life (RUL) and the associated confidence value of bearings. The proposed method is based on the utilization of the Wavelet Packet Decomposition (WPD) technique, and the Mixture of Gaussians Hidden Markov Models (MoG-HMM). The method relies on two phases: an off-line phase, and an on-line phase. During the first phase, the raw data provided by the sensors are first processed to extract features in the form of WPD coefficients. The extracted features are then fed to dedicated learning algorithms to estimate the parameters of a corresponding MoG-HMM, which best fits the degradation phenomenon. The generated model is exploited during the second phase to continuously assess the current health state of the physical component, and to estimate its RUL value with the associated confidence. The developed method is tested on benchmark data taken from the “NASA prognostics data repository” related to several experiments of failures on bearings done under different operating conditions. Furthermore, the method is compared to traditional time-feature prognostics and simulation results are given at the end of the paper. The results of the developed prognostics method, particularly the estimation of the RUL, can help improving the availability, reliability, and security while reducing the maintenance costs. Indeed, the RUL and associated confidence value are relevant information which can be used to take appropriate maintenance and exploitation decisions. In practice, this information may help the maintainers to prepare the necessary material and human resources before the occurrence of a failure. Thus, the traditional maintenance policies involving corrective and preventive maintenance can be replaced by condition based maintenance.", "title": "" }, { "docid": "fb5a38c1dbbc7416f9b15ee19be9cc06", "text": "This study uses a body motion interactive game developed in Scratch 2.0 to enhance the body strength of children with disabilities. Scratch 2.0, using an augmented-reality function on a program platform, creates real world and virtual reality displays at the same time. This study uses a webcam integration that tracks movements and allows participants to interact physically with the project, to enhance the motivation of children with developmental disabilities to perform physical activities. This study follows a single-case research using an ABAB structure, in which A is the baseline and B is the intervention. The experimental period was 2 months. The experimental results demonstrated that the scores for 3 children with developmental disabilities increased considerably during the intervention phrases. The developmental applications of these results are also discussed.", "title": "" }, { "docid": "c313f49d5dd8b553b0638696b6d4482a", "text": "Artificial Bee Colony Algorithm (ABC) is nature-inspired metaheuristic, which imitates the foraging behavior of bees. ABC as a stochastic technique is easy to implement, has fewer control parameters, and could easily be modify and hybridized with other metaheuristic algorithms. 
Due to its successful implementation, several researchers in the optimization and artificial intelligence domains have adopted it to be the main focus of their research work. Since 2005, several related works have appeared to enhance the performance of the standard ABC in the literature, to meet up with challenges of recent research problems being encountered. Interestingly, ABC has been tailored successfully, to solve a wide variety of discrete and continuous optimization problems. Some other works have modified and hybridized ABC to other algorithms, to further enhance the structure of its framework. In this review paper, we provide a thorough and extensive overview of most research work focusing on the application of ABC, with the expectation that it would serve as a reference material to both old and new, incoming researchers to the field, to support their understanding of current trends and assist their future research prospects and directions. The advantages, applications and drawbacks of the newly developed ABC hybrids are highlighted, critically analyzed and discussed accordingly.", "title": "" }, { "docid": "0659c4f6cd4a6d8ab35dd7dba6c0974e", "text": "Purpose – The purpose of this paper is to examine an integrated model of factors affecting attitudes toward online shopping in Jordan. The paper introduces an integrated model of the roles of perceived website reputation, relative advantage, perceived website image, and trust that affect attitudes toward online shopping. Design/methodology/approach – A structured and self-administered online survey was employed targeting online shoppers of a reputable online retailer in Jordan; MarkaVIP. A sample of 273 of online shoppers was involved in the online survey. A series of exploratory and confirmatory factor analyses were used to assess the research constructs, unidimensionality, validity, and composite reliability (CR). Structural path model analysis was also used to test the proposed research model and hypotheses. Findings – The empirical findings of this study indicate that perceived website reputation, relative advantage, perceived website image, and trust have directly and indirectly affected consumers’ attitudes toward online shopping. Online consumers’ shopping attitudes are mainly affected by perceived relative advantage and trust. Trust is a product of relative advantage and that the later is a function of perceived website reputation. Relative advantage and perceived website reputation are key predictors of perceived website image. Perceived website image was found to be a direct predictor of trust. Also, the authors found that 26 percent of variation in online shopping attitudes was directly caused by relative advantage, trust, and perceived website image. Research limitations/implications – The research examined online consumers’ attitudes toward one website only therefore the generalizability of the research finding is limited to the local Jordanian website; MarkaVIP. Future research is encouraged to conduct comparative studies between local websites and international ones, e.g., Amazon and e-bay in order to shed lights on consumers’ attitudes toward both websites. The findings are limited to online shoppers in Jordan. A fruitful area of research is to conduct a comparative analysis between online and offline attitudes toward online shopping behavior. Also, replications of the current study’s model in different countries would most likely strengthen and validate its findings. 
The design of the study is quantitative using an online survey to measure online consumers’ attitudes through a cross-sectional design. Future research is encouraged to use qualitative research design and methodology to provide a deeper understanding of consumers’ attitudes and behaviors toward online and offline shopping in Jordan and elsewhere. Practical implications – The paper supports the importance of perceived website reputation, relative advantage, trust, and perceived web image as keys drivers of attitudes toward online shopping. It further underlines the importance of relative advantage and trust as major contributors to building positive attitudes toward online shopping. In developing countries (e.g. Jordan) where individuals are generally described as risk averse, the level of trust is critical in determining the attitude of individuals toward online shopping. Moreover and given the modest economic situation in Jordan, relative advantage is another significant factor affecting consumers’ attitudes toward online shopping. Indeed, if online shopping would not add a significant value and benefits to consumers, they would have negative attitude toward this technology. This is at the heart of marketing theory and relationship marketing practice. Further, relative advantage is a key predictor of both perceived Business Process Management", "title": "" }, { "docid": "96be7a58f4aec960e2ad2273dea26adb", "text": "Because time series are a ubiquitous and increasingly prevalent type of data, there has been much research effort devoted to time series data mining recently. As with all data mining problems, the key to effective and scalable algorithms is choosing the right representation of the data. Many high level representations of time series have been proposed for data mining. In this work, we introduce a new technique based on a bit level approximation of the data. The representation has several important advantages over existing techniques. One unique advantage is that it allows raw data to be directly compared to the reduced representation, while still guaranteeing lower bounds to Euclidean distance. This fact can be exploited to produce faster exact algorithms for similarly search. In addition, we demonstrate that our new representation allows time series clustering to scale to much larger datasets.", "title": "" }, { "docid": "0acf9ef6e025805a76279d1c6c6c55e7", "text": "Android mobile devices are enjoying a lion's market share in smartphones and mobile devices. This also attracts malware writers to target the Android platform. Recently, we have discovered a new Android malware distribution channel: releasing malicious firmwares with pre-installed malware to the wild. This poses significant risk since users of mobile devices cannot change the content of the malicious firmwares. Furthermore, pre-installed applications have \" more permissions\" (i.e., silent installation) than other legitimate mobile apps, so they can download more malware or access users' confidential information. To understand and address this new form of malware distribution channel, we design and implement \"DroidRay\": a security evaluation system for customized Android firmwares. DroidRay uses both static and dynamic analyses to evaluate the firmware security on both the application and system levels. To understand the impact of this new malware distribution channel, we analyze 250 Android firmwares and 24,009 pre-installed applications. 
We reveal how the malicious firmware and pre-installed malware are injected, and discovered 1,947 (8.1%) pre-installed applications have signature vulnerability and 19 (7.6%) firmwares contain pre-installed malware. In addition, 142 (56.8%) firmwares have the default signature vulnerability, five (2.0%) firmwares contain malicious hosts file, at most 40 (16.0%) firmwares have the native level privilege escalation vulnerability and at least 249 (99.6%) firmwares have the Java level privilege escalation vulnerability. Lastly, we investigate a real-world case of a pre-installed zero-day malware known as CEPlugnew, which involves 348,018 infected Android smartphones, and we show its degree and geographical penetration. This shows the significance of this new malware distribution channel, and DroidRay is an effective tool to combat this new form of malware spreading.", "title": "" }, { "docid": "cab97e23b7aa291709ecf18e29f580cf", "text": "Recent findings show that coding genes are not the only targets that miRNAs interact with. In fact, there is a pool of different RNAs competing with each other to attract miRNAs for interactions, thus acting as competing endogenous RNAs (ceRNAs). The ceRNAs indirectly regulate each other via the titration mechanism, i.e. the increasing concentration of a ceRNA will decrease the number of miRNAs that are available for interacting with other targets. The cross-talks between ceRNAs, i.e. their interactions mediated by miRNAs, have been identified as the drivers in many disease conditions, including cancers. In recent years, some computational methods have emerged for identifying ceRNA-ceRNA interactions. However, there remain great challenges and opportunities for developing computational methods to provide new insights into ceRNA regulatory mechanisms.In this paper, we review the publically available databases of ceRNA-ceRNA interactions and the computational methods for identifying ceRNA-ceRNA interactions (also known as miRNA sponge interactions). We also conduct a comparison study of the methods with a breast cancer dataset. Our aim is to provide a current snapshot of the advances of the computational methods in identifying miRNA sponge interactions and to discuss the remaining challenges.", "title": "" }, { "docid": "748926afd2efcae529a58fbfa3996884", "text": "The purpose of this research was to investigate preservice teachers’ perceptions about using m-phones and laptops in education as mobile learning tools. A total of 1087 preservice teachers participated in the study. The results indicated that preservice teachers perceived laptops potentially stronger than m-phones as m-learning tools. In terms of limitations the situation was balanced for laptops and m-phones. Generally, the attitudes towards using laptops in education were not exceedingly positive but significantly more positive than m-phones. It was also found that such variables as program/department, grade, gender and possessing a laptop are neutral in causing a practically significant difference in preservice teachers’ views. The results imply an urgent need to grow awareness among participating student teachers towards the concept of m-learning, especially m-learning through m-phones. Introduction The world is becoming a mobigital virtual space where people can learn and teach digitally anywhere and anytime. 
Today, when timely access to information is vital, mobile devices such as cellular phones, smartphones, mp3 and mp4 players, iPods, digital cameras, data-travelers, personal digital assistance devices (PDAs), netbooks, laptops, tablets, iPads, e-readers such as the Kindle, Nook, etc have spread very rapidly and become common (El-Hussein & Cronje, 2010; Franklin, 2011; Kalinic, Arsovski, Stefanovic, Arsovski & Rankovic, 2011). Mobile devices are especially very popular among young population (Kalinic et al, 2011), particularly among university students (Cheon, Lee, Crooks & Song, 2012; Park, Nam & Cha, 2012). Thus, the idea of learning through mobile devices has gradually become a trend in the field of digital learning (Jeng et al, 2010). This is because learning with mobile devices promises “new opportunities and could improve the learning process” (Kalinic et al, 2011, p. 1345) and learning with mobile devices can help achieving educational goals if used through appropriate learning strategies (Jeng et al, 2010). As a matter of fact, from a technological point of view, mobile devices are getting more capable of performing all of the functions necessary in learning design (El-Hussein & Cronje, 2010). This and similar ideas have brought about the concept of mobile learning or m-learning. Although mobile learning applications are at their early days, there inevitably emerges a natural pressure by students on educators to integrate m-learning (Franklin, 2011) and so a great deal of attention has been drawn in these applications in the USA, Europe and Asia (Wang & Shen, 2012). Several universities including University of Glasgow, University of Sussex and University of Regensburg have been trying to explore and include the concept of m-learning in their learning systems (Kalinic et al, 2011). Yet, the success of m-learning integration requires some degree of awareness and positive attitudes by students towards m-learning. In this respect, in-service or preservice teachers’ perceptions about m-learning become more of an issue, since their attitudes are decisive in successful integration of m-learning (Cheon et al, 2012). Then it becomes critical whether the teachers, in-service or preservice, have favorable perceptions and attitudinal representations regarding m-learning. Theoretical framework M-learning M-learning has a recent history. When developed as the next phase of e-learning in early 2000s (Peng, Su, Chou & Tsai, 2009), its potential for education could not be envisaged (Attewell, 2005). However, recent developments in mobile and wireless technologies facilitated the departure from traditional learning models with time and space constraints, replacing them with Practitioner Notes What is already known about this topic • Mobile devices are very popular among young population, especially among university students. • Though it has a recent history, m-learning (ie, learning through mobile devices) has gradually become a trend. • M-learning brings new opportunities and can improve the learning process. Previous research on m-learning mostly presents positive outcomes in general besides some drawbacks. • The success of integrating m-learning in teaching practice requires some degree of awareness and positive attitudes by students towards m-learning.
What this paper adds • Since teachers’ attitudes are decisive in successful integration of m-learning in teaching, the present paper attempts to understand whether preservice teachers have favorable perceptions and attitudes regarding m-learning. • Unlike much of the previous research on m-learning that handle perceptions about m-learning in a general sense, the present paper takes a more specific approach to distinguish and compare the perceptions about two most common m-learning tools: m-phones and laptops. • It also attempts to find out the variables that cause differences in preservice teachers’ perceptions about using these m-learning devices. Implications for practice and/or policy • Results imply an urgent need to grow awareness and further positive attitudes among participating student teachers towards m-learning, especially through m-phones. • Some action should be taken by the faculty and administration to pedagogically inform and raise awareness about m-learning among preservice teachers. models embedded into our everyday environment, and the paradigm of mobile learning emerged (Vavoula & Karagiannidis, 2005). Today it spreads rapidly and promises to be one of the efficient ways of education (El-Hussein & Cronje, 2010). Partly because it is a new concept, there is no common definition of m-learning in the literature yet (Peng et al, 2009). A good deal of literature defines m-learning as a derivation or extension of e-learning, which is performed using mobile devices such as PDA, mobile phones, laptops, etc (Jeng et al, 2010; Kalinic et al, 2011; Motiwalla, 2007; Riad & El-Ghareeb, 2008). Other definitions highlight certain characteristics of m-learning including portability through mobile devices, wireless Internet connection and ubiquity. For example, a common definition of m-learning in scholarly literature is “the use of portable devices with Internet connection capability in education contexts” (Kinash, Brand & Mathew, 2012, p. 639). In a similar vein, Park et al (2012, p. 592) defines m-learning as “any educational provision where the sole or dominant technologies are handheld or palmtop devices.” On the other hand, m-learning is likely to be simply defined stressing its property of ubiquity, referring to its ability to happen whenever and wherever needed (Peng et al, 2009). For example, Franklin (2011, p. 261) defines mobile learning as “learning that happens anywhere, anytime.” Though it is rather a new research topic and the effectiveness of m-learning in terms of learning achievements has not been fully investigated (Park et al, 2012), there is already an agreement that m-learning brings new opportunities and can improve the learning process (Kalinic et al, 2011). Moreover, the literature review by Wu et al (2012) notes that 86% of the 164 mobile learning studies present positive outcomes in general. Several perspectives of m-learning are attributed in the literature in association with these positive outcomes. The most outstanding among them is the feature of mobility. M-learning makes sense as an educational activity because the technology and its users are mobile (El-Hussein & Cronje, 2010). Hence, learning outside the classroom walls is possible (Nordin, Embi & Yunus, 2010; Şad, 2008; Saran, Seferoğlu & Çağıltay, 2009), enabling students to become an active participant, rather than a passive receiver of knowledge (Looi et al, 2010).
This unique feature of m-learning brings about not only the possibility of learning anywhere without limits of classroom or library but also anytime (Çavuş & İbrahim, 2009; Hwang & Chang, 2011; Jeng et al, 2010; Kalinic et al, 2011; Motiwalla, 2007; Sha, Looi, Chen & Zhang, 2012; Sølvberg & Rismark, 2012). This especially offers learners a certain amount of “freedom and independence” (El-Hussein & Cronje, 2010, p. 19), as well as motivation and ability to “self-regulate their own learning” (Sha et al, 2012, p. 366). This idea of learning coincides with the principles of and meet the requirements of other popular paradigms in education including lifelong learning (Nordin et al, 2010), student-centeredness (Sha et al, 2012) and constructivism (Motiwalla, 2007). Beside the favorable properties referred in the m-learning literature, some drawbacks of m-learning are frequently criticized. The most pronounced one is the small screen sizes of the m-learning tools that makes learning activity difficult (El-Hussein & Cronje, 2010; Kalinic et al, 2011; Riad & El-Ghareeb, 2008; Suki & Suki, 2011). Another problem is the weight and limited battery lives of m-tools, particularly the laptops (Riad & El-Ghareeb, 2008). Lack of understanding or expertise with the technology also hinders nontechnical students’ active use of m-learning (Corbeil & Valdes-Corbeil, 2007; Franklin, 2011). Using mobile devices in classroom can cause distractions and interruptions (Cheon et al, 2012; Fried, 2008; Suki & Suki, 2011). Another concern seems to be about the challenged role of the teacher as the most learning activities take place outside the classroom (Sølvberg & Rismark, 2012). M-learning in higher education Mobile learning is becoming an increasingly promising way of delivering instruction in higher education (El-Hussein & Cronje, 2010).", "title": "" }, { "docid": "ce0004549d9eec7f47a0a60e11179bba", "text": "We present in this paper a statistical framework that generates accurate and fluent product description from product attributes. Specifically, after extracting templates and learning writing knowledge from attribute-description parallel data, we use the learned knowledge to decide what to say and how to say for product description generation. To evaluate accuracy and fluency for the generated descriptions, in addition to BLEU and Recall, we propose to measure what to say (in terms of attribute coverage) and to measure how to say (by attribute-specified generation) separately. Experimental results show that our framework is effective.", "title": "" } ]
scidocsrr
cbc2c0f62b7501d1880d4f27128d399d
Salient Structure Detection by Context-Guided Visual Search
[ { "docid": "c0dbb410ebd6c84bd97b5f5e767186b3", "text": "A new hypothesis about the role of focused attention is proposed. The feature-integration theory of attention suggests that attention must be directed serially to each stimulus in a display whenever conjunctions of more than one separable feature are needed to characterize or distinguish the possible objects presented. A number of predictions were tested in a variety of paradigms including visual search, texture segregation, identification and localization, and using both separable dimensions (shape and color) and local elements or parts of figures (lines, curves, etc. in letters) as the features to be integrated into complex wholes. The results were in general consistent with the hypothesis. They offer a new set of criteria for distinguishing separable from integral features and a new rationale for predicting which tasks will show attention limits and which will not.", "title": "" } ]
[ { "docid": "b42f3575dad9615a40f491291661e7c5", "text": "Novel neural models have been proposed in recent years for learning under domain shift. Most models, however, only evaluate on a single task, on proprietary datasets, or compare to weak baselines, which makes comparison of models difficult. In this paper, we re-evaluate classic general-purpose bootstrapping approaches in the context of neural networks under domain shifts vs. recent neural approaches and propose a novel multi-task tri-training method that reduces the time and space complexity of classic tri-training. Extensive experiments on two benchmarks are negative: while our novel method establishes a new state-of-the-art for sentiment analysis, it does not fare consistently the best. More importantly, we arrive at the somewhat surprising conclusion that classic tri-training, with some additions, outperforms the state of the art. We conclude that classic approaches constitute an important and strong baseline.", "title": "" }, { "docid": "f84de5ba61de555c2d90afc2c8c2b465", "text": "Visual sensor networks have emerged as an important class of sensor-based distributed intelligent systems, with unique performance, complexity, and quality of service challenges. Consisting of a large number of low-power camera nodes, visual sensor networks support a great number of novel vision-based applications. The camera nodes provide information from a monitored site, performing distributed and collaborative processing of their collected data. Using multiple cameras in the network provides different views of the scene, which enhances the reliability of the captured events. However, the large amount of image data produced by the cameras combined with the network’s resource constraints require exploring new means for data processing, communication, and sensor management. Meeting these challenges of visual sensor networks requires interdisciplinary approaches, utilizing vision processing, communications and networking, and embedded processing. In this paper, we provide an overview of the current state-of-the-art in the field of visual sensor networks, by exploring several relevant research directions. Our goal is to provide a better understanding of current research problems in the different research fields of visual sensor networks, and to show how these different research fields should interact to solve the many challenges of visual sensor networks.", "title": "" }, { "docid": "520de9b576c112171ce0d08650a25093", "text": "Figurative language represents one of the most difficult tasks regarding natural language processing. Unlike literal language, figurative language takes advantage of linguistic devices such as irony, humor, sarcasm, metaphor, analogy, and so on, in order to communicate indirect meanings which, usually, are not interpretable by simply decoding syntactic or semantic information. Rather, figurative language reflects patterns of thought within a communicative and social framework that turns quite challenging its linguistic representation, as well as its computational processing. In this Ph. D. thesis we address the issue of developing a linguisticbased framework for figurative language processing. In particular, our efforts are focused on creating some models capable of automatically detecting instances of two independent figurative devices in social media texts: humor and irony. Our main hypothesis relies on the fact that language reflects patterns of thought; i.e. to study language is to study patterns of conceptualization. 
Thus, by analyzing two specific domains of figurative language, we aim to provide arguments concerning how people mentally conceive humor and irony, and how they verbalize each device in social media platforms. In this context, we focus on showing how fine-grained knowledge, which relies on shallow and deep linguistic layers, can be translated into valuable patterns to automatically identify figurative uses of language. Contrary to most researches that deal with figurative language, we do not support our arguments on prototypical examples neither of humor nor of irony. Rather, we try to find patterns in texts such as blogs, web comments, tweets, etc., whose intrinsic characteristics are quite different to the characteristics described in the specialized literature. Apart from providing a linguistic inventory for detecting humor and irony at textual level, in this investigation we stress out the importance of considering user-generated tags in order to automatically build resources for figurative language processing, such as ad hoc corpora in which human annotation is not necessary. Finally, each model is evaluated in terms of its relevance to properly identify instances of humor and irony, respectively. To this end, several experiments are carried out taking into consideration different data sets and applicability scenarios. Our findings point out that figurative language processing (especially humor and irony) can provide fine-grained knowledge in tasks as diverse as sentiment analysis, opinion mining, information retrieval, or trend discovery.", "title": "" }, { "docid": "62f4c947cae38cc7071b87597b54324a", "text": "A bugbear of uncalibrated stereo reconstruction is that cameras which deviate from the pinhole model have to be pre-calibrated in order to correct for nonlinear lens distortion. If they are not, and point correspondence is attempted using the uncorrected images, the matching constraints provided by the fundamental matrix must be set so loose that point matching is significantly hampered. This paper shows how linear estimation of the fundamental matrix from two-view point correspondences may be augmented to include one term of radial lens distortion. This is achieved by (1) changing from the standard radiallens model to another which (as we show) has equivalent power, but which takes a simpler form in homogeneous coordinates, and (2) expressing fundamental matrix estimation as a Quadratic Eigenvalue Problem (QEP), for which efficient algorithms are well known. I derive the new estimator, and compare its performance against bundle-adjusted calibration-grid data. The new estimator is fast enough to be included in a RANSAC-based matching loop, and we show cases of matching being rendered possible by its use. I show how the same lens can be calibrated in a natural scene where the lack of straight lines precludes most previous techniques. The modification when the multi-view relation is a planar homography or trifocal tensor is described.", "title": "" }, { "docid": "d061ac8a6c312c768a9dfc6e59cfe6a8", "text": "The assessment of crop yield losses is needed for the improvement of production systems that contribute to the incomes of rural families and food security worldwide. However, efforts to quantify yield losses and identify their causes are still limited, especially for perennial crops. 
Our objectives were to quantify primary yield losses (incurred in the current year of production) and secondary yield losses (resulting from negative impacts of the previous year) of coffee due to pests and diseases, and to identify the most important predictors of coffee yields and yield losses. We established an experimental coffee parcel with full-sun exposure that consisted of six treatments, which were defined as different sequences of pesticide applications. The trial lasted three years (2013-2015) and yield components, dead productive branches, and foliar pests and diseases were assessed as predictors of yield. First, we calculated yield losses by comparing actual yields of specific treatments with the estimated attainable yield obtained in plots which always had chemical protection. Second, we used structural equation modeling to identify the most important predictors. Results showed that pests and diseases led to high primary yield losses (26%) and even higher secondary yield losses (38%). We identified the fruiting nodes and the dead productive branches as the most important and useful predictors of yields and yield losses. These predictors could be added in existing mechanistic models of coffee, or can be used to develop new linear mixed models to estimate yield losses. Estimated yield losses can then be related to production factors to identify corrective actions that farmers can implement to reduce losses. The experimental and modeling approaches of this study could also be applied in other perennial crops to assess yield losses.", "title": "" }, { "docid": "abdc445e498c6d04e8f046e9c2610f9f", "text": "Ontologies have recently received popularity in the area of knowledge management and knowledge sharing, especially after the evolution of the Semantic Web and its supporting technologies. An ontology defines the terms and concepts (meaning) used to describe and represent an area of knowledge.The aim of this paper is to identify all possible existing ontologies and ontology management tools (Protégé 3.4, Apollo, IsaViz & SWOOP) that are freely available and review them in terms of: a) interoperability, b) openness, c) easiness to update and maintain, d) market status and penetration. The results of the review in ontologies are analyzed for each application area, such as transport, tourism, personal services, health and social services, natural languages and other HCI-related domains. Ontology Building/Management Tools are used by different groups of people for performing diverse tasks. Although each tool provides different functionalities, most of the users just use only one, because they are not able to interchange their ontologies from one tool to another. In addition, we considered the compatibility of different ontologies with different development and management tools. The paper is also concerns the detection of commonalities and differences between the examined ontologies, both on the same domain (application area) and among different domains.", "title": "" }, { "docid": "376911fb47b9954a35f9910326f9b97e", "text": "Immunotherapy enhances a patient’s immune system to fight disease and has recently been a source of promising new cancer treatments. Among the many immunotherapeutic strategies, immune checkpoint blockade has shown remarkable benefit in the treatment of a range of cancer types. 
Immune checkpoint blockade increases antitumor immunity by blocking intrinsic downregulators of immunity, such as cytotoxic T-lymphocyte antigen 4 (CTLA-4) and programmed cell death 1 (PD-1) or its ligand, programmed cell death ligand 1 (PD-L1). Several immune checkpoint–directed antibodies have increased overall survival for patients with various cancers and are approved by the Food and Drug Administration (Table 1). By increasing the activity of the immune system, immune checkpoint blockade can have inflammatory side effects, which are often termed immune-related adverse events. Although any organ system can be affected, immune-related adverse events most commonly involve the gastrointestinal tract, endocrine glands, skin, and liver.1 Less often, the central nervous system and cardiovascular, pulmonary, musculoskeletal, and hematologic systems are involved. The wide range of potential immune-related adverse events requires multidisciplinary, collaborative management by providers across the clinical spectrum (Fig. 1). No prospective trials have defined strategies for effectively managing specific immune-related adverse events; thus, clinical practice remains variable. Nevertheless, several professional organizations are working to harmonize expert consensus on managing specific immune-related adverse events. In this review, we focus on 10 essential questions practitioners will encounter while caring for the expanding population of patients with cancer who are being treated with immune checkpoint blockade (Table 2).", "title": "" }, { "docid": "cb00e564a81ace6b75e776f1fe41fb8f", "text": "INDIVIDUAL PROCESSES IN INTERGROUP BEHAVIOR ................................ 3 From Individual to Group Impressions ...................................................................... 3 GROUP MEMBERSHIP AND INTERGROUP BEHAVIOR .................................. 7 The Scope and Range of Ethnocentrism .................................................................... 8 The Development of Ethnocentrism .......................................................................... 9 Intergroup Conflict and Competition ........................................................................ 12 Interpersonal and intergroup behavior ........................................................................ 13 Intergroup conflict and group cohesion ........................................................................ 15 Power and status in intergroup behavior ...................................................................... 16 Social Categorization and Intergroup Behavior ........................................................ 20 Social categorization: cognitions, values, and groups ...................................................... 20 Social categorization a d intergroup discrimination ...................................................... 23 Social identity and social comparison .......................................................................... 24 THE REDUCTION FINTERGROUP DISCRIMINATION ................................ 27 Intergroup Cooperation and Superordinate Goals \" 28 Intergroup Contact. .... ................................................................................................ 28 Multigroup Membership and \"lndividualizat~’on\" of the Outgroup .......................... 29 SUMMARY .................................................................................................................... 
30", "title": "" }, { "docid": "fb941f03dd02f1d7fc7ded54ae462afd", "text": "In this paper we discuss the development and implementation of an Arabic automatic speech recognition engine. The engine can recognize both continuous speech and isolated words. The system was developed using the Hidden Markov Model Toolkit. First, an Arabic dictionary was built by composing the words to its phones. Next, Mel Frequency Cepstral Coefficients (MFCC) of the speech samples are derived to extract the speech feature vectors. Then, the training of the engine based on triphones is developed to estimate the parameters for a Hidden Markov Model. To test the engine, the database consisting of speech utterance from thirteen Arabian native speakers is used which is divided into ten speaker-dependent and three speaker-independent samples. The experimental results showed that the overall system performance was 90.62%, 98.01 % and 97.99% for sentence correction, word correction and word accuracy respectively.", "title": "" }, { "docid": "e95fa624bb3fd7ea45650213088a43b0", "text": "In recent years, much research has been conducted on image super-resolution (SR). To the best of our knowledge, however, few SR methods were concerned with compressed images. The SR of compressed images is a challenging task due to the complicated compression artifacts, while many images suffer from them in practice. The intuitive solution for this difficult task is to decouple it into two sequential but independent subproblems, i.e., compression artifacts reduction (CAR) and SR. Nevertheless, some useful details may be removed in CAR stage, which is contrary to the goal of SR and makes the SR stage more challenging. In this paper, an end-to-end trainable deep convolutional neural network is designed to perform SR on compressed images (CISRDCNN), which reduces compression artifacts and improves image resolution jointly. Experiments on compressed images produced by JPEG (we take the JPEG as an example in this paper) demonstrate that the proposed CISRDCNN yields state-of-the-art SR performance on commonly used test images and imagesets. The results of CISRDCNN on real low quality web images are also very impressive, with obvious quality enhancement. Further, we explore the application of the proposed SR method in low bit-rate image coding, leading to better rate-distortion performance than JPEG.", "title": "" }, { "docid": "a73da9191651ae5d0330d6f64f838f67", "text": "Language selection (or control) refers to the cognitive mechanism that controls which language to use at a given moment and context. It allows bilinguals to selectively communicate in one target language while minimizing the interferences from the nontarget language. Previous studies have suggested the participation in language control of different brain areas. However, the question remains whether the selection of one language among others relies on a language-specific neural module or general executive regions that also allow switching between different competing behavioral responses including the switching between various linguistic registers. In this functional magnetic resonance imaging study, we investigated the neural correlates of language selection processes in German-French bilingual subjects during picture naming in different monolingual and bilingual selection contexts. We show that naming in the first language in the bilingual context (compared with monolingual contexts) increased activation in the left caudate and anterior cingulate cortex. 
Furthermore, the activation of these areas is even more extended when the subjects are using a second weaker language. These findings show that language control processes engaged in contexts during which both languages must remain active recruit the left caudate and the anterior cingulate cortex (ACC) in a manner that can be distinguished from areas engaged in intralanguage task switching.", "title": "" }, { "docid": "e67b75e11ca6dd9b4e6c77b3cb92cceb", "text": "The incidence of malignant melanoma continues to increase worldwide. This cancer can strike at any age; it is one of the leading causes of loss of life in young persons. Since this cancer is visible on the skin, it is potentially detectable at a very early stage when it is curable. New developments have converged to make fully automatic early melanoma detection a real possibility. First, the advent of dermoscopy has enabled a dramatic boost in clinical diagnostic ability to the point that melanoma can be detected in the clinic at the very earliest stages. The global adoption of this technology has allowed accumulation of large collections of dermoscopy images of melanomas and benign lesions validated by histopathology. The development of advanced technologies in the areas of image processing and machine learning have given us the ability to allow distinction of malignant melanoma from the many benign mimics that require no biopsy. These new technologies should allow not only earlier detection of melanoma, but also reduction of the large number of needless and costly biopsy procedures. Although some of the new systems reported for these technologies have shown promise in preliminary trials, widespread implementation must await further technical progress in accuracy and reproducibility. In this paper, we provide an overview of computerized detection of melanoma in dermoscopy images. First, we discuss the various aspects of lesion segmentation. Then, we provide a brief overview of clinical feature segmentation. Finally, we discuss the classification stage where machine learning algorithms are applied to the attributes generated from the segmented features to predict the existence of melanoma.", "title": "" }, { "docid": "b898d7a2da7a10ef756317bc7f44f37c", "text": "Cellulosomes are multienzyme complexes that are produced by anaerobic cellulolytic bacteria for the degradation of lignocellulosic biomass. They comprise a complex of scaffoldin, which is the structural subunit, and various enzymatic subunits. The intersubunit interactions in these multienzyme complexes are mediated by cohesin and dockerin modules. Cellulosome-producing bacteria have been isolated from a large variety of environments, which reflects their prevalence and the importance of this microbial enzymatic strategy. In a given species, cellulosomes exhibit intrinsic heterogeneity, and between species there is a broad diversity in the composition and configuration of cellulosomes. With the development of modern technologies, such as genomics and proteomics, the full protein content of cellulosomes and their expression levels can now be assessed and the regulatory mechanisms identified. 
Owing to their highly efficient organization and hydrolytic activity, cellulosomes hold immense potential for application in the degradation of biomass and are the focus of much effort to engineer an ideal microorganism for the conversion of lignocellulose to valuable products, such as biofuels.", "title": "" }, { "docid": "ddd353b5903f12c14cc3af1163ac617c", "text": "Unmanned Aerial Vehicles (UAVs) have recently received notable attention because of their wide range of applications in urban civilian use and in warfare. With air traffic densities increasing, it is more and more important for UAVs to be able to predict and avoid collisions. The main goal of this research effort is to adjust real-time trajectories for cooperative UAVs to avoid collisions in three-dimensional airspace. To explore potential collisions, predictive state space is utilized to present the waypoints of UAVs in the upcoming situations, which makes the proposed method generate the initial collision-free trajectories satisfying the necessary constraints in a short time. Further, a rolling optimization algorithm (ROA) can improve the initial waypoints, minimizing its total distance. Several scenarios are illustrated to verify the proposed algorithm, and the results show that our algorithm can generate initial collision-free trajectories more efficiently than other methods in the common airspace.", "title": "" }, { "docid": "cbcdc411e22786dcc1b3655c5e917fae", "text": "Local intracellular Ca(2+) transients, termed Ca(2+) sparks, are caused by the coordinated opening of a cluster of ryanodine-sensitive Ca(2+) release channels in the sarcoplasmic reticulum of smooth muscle cells. Ca(2+) sparks are activated by Ca(2+) entry through dihydropyridine-sensitive voltage-dependent Ca(2+) channels, although the precise mechanisms of communication of Ca(2+) entry to Ca(2+) spark activation are not clear in smooth muscle. Ca(2+) sparks act as a positive-feedback element to increase smooth muscle contractility, directly by contributing to the global cytoplasmic Ca(2+) concentration ([Ca(2+)]) and indirectly by increasing Ca(2+) entry through membrane potential depolarization, caused by activation of Ca(2+) spark-activated Cl(-) channels. Ca(2+) sparks also have a profound negative-feedback effect on contractility by decreasing Ca(2+) entry through membrane potential hyperpolarization, caused by activation of large-conductance, Ca(2+)-sensitive K(+) channels. In this review, the roles of Ca(2+) sparks in positive- and negative-feedback regulation of smooth muscle function are explored. We also propose that frequency and amplitude modulation of Ca(2+) sparks by contractile and relaxant agents is an important mechanism to regulate smooth muscle function.", "title": "" }, { "docid": "31e052aaf959a4c5d6f1f3af6587d6cd", "text": "We introduce a learning framework called learning using privileged information (LUPI) to the computer vision field. We focus on the prototypical computer vision problem of teaching computers to recognize objects in images. We want the computers to be able to learn faster at the expense of providing extra information during training time. As additional information about the image data, we look at several scenarios that have been studied in computer vision before: attributes, bounding boxes and image tags. The information is privileged as it is available at training time but not at test time. 
We explore two maximum-margin techniques that are able to make use of this additional source of information, for binary and multiclass object classification. We interpret these methods as learning easiness and hardness of the objects in the privileged space and then transferring this knowledge to train a better classifier in the original space. We provide a thorough analysis and comparison of information transfer from privileged to the original data spaces for both LUPI methods. Our experiments show that incorporating privileged information can improve the classification accuracy. Finally, we conduct user studies to understand which samples are easy and which are hard for human learning, and explore how this information is related to easy and hard samples when learning a classifier.", "title": "" }, { "docid": "72be75e973b6a843de71667566b44929", "text": "We think that hand pose estimation technologies with a camera should be developed for character conversion systems from sign languages with a not so high performance terminal. Fingernail positions can be used for getting finger information which can’t be obtained from outline information. Therefore, we decided to construct a practical fingernail detection system. The previous fingernail detection method, using distribution density of strong nail-color pixels, was not good at removing some skin areas having gloss like finger side area. Therefore, we should use additional information to remove them. We thought that previous method didn’t use boundary information and this information would be available. Color continuity information is available for getting it. In this paper, therefore, we propose a new fingernail detection method using not only distribution density but also color continuity to improve accuracy. We investigated the relationship between wrist rotation angles and percentages of correct detection. The number of users was three. As a result, we confirmed that our proposed method raised accuracy compared with previous method and could detect only fingernails with at least 85% probability from -90 to 40 degrees and from 40 to 90 degrees. Therefore, we concluded that our proposed method was effective.", "title": "" }, { "docid": "56f18b39a740dd65fc2907cdef90ac99", "text": "This paper describes a dynamic artificial neural network based mobile robot motion and path planning system. The method is able to navigate a robot car on flat surface among static and moving obstacles, from any starting point to any endpoint. The motion controlling ANN is trained online with an extended backpropagation through time algorithm, which uses potential fields for obstacle avoidance. The paths of the moving obstacles are predicted with other ANNs for better obstacle avoidance. The method is presented through the realization of the navigation system of a mobile robot.", "title": "" }, { "docid": "262c11ab9f78e5b3f43a31ad22cf23c5", "text": "Responding to threats in the environment is crucial for survival. Certain types of threat produce defensive responses without necessitating previous experience and are considered innate, whereas other threats are learned by experiencing aversive consequences. Two important innate threats are whether an encountered stimulus is a member of the same species (social threat) and whether a stimulus suddenly appears proximal to the body (proximal threat). These threats are manifested early in human development and robustly elicit defensive responses. 
Learned threat, on the other hand, enables adaptation to threats in the environment throughout the life span. A well-studied form of learned threat is fear conditioning, during which a neutral stimulus acquires the ability to eliciting defensive responses through pairings with an aversive stimulus. If innate threats can facilitate fear conditioning, and whether different types of innate threats can enhance each other, is largely unknown. We developed an immersive virtual reality paradigm to test how innate social and proximal threats are related to each other and how they influence conditioned fear. Skin conductance responses were used to index the autonomic component of the defensive response. We found that social threat modulates proximal threat, but that neither proximal nor social threat modulates conditioned fear. Our results suggest that distinct processes regulate autonomic activity in response to proximal and social threat on the one hand, and conditioned fear on the other.", "title": "" }, { "docid": "d7a620c961341e35fc8196b331fb0e68", "text": "Software vulnerabilities have had a devastating effect on the Internet. Worms such as CodeRed and Slammer can compromise hundreds of thousands of hosts within hours or even minutes, and cause millions of dollars of damage [32, 51]. To successfully combat these fast automatic Internet attacks, we need fast automatic attack detection and filtering mechanisms. In this paper we propose dynamic taint analysis for automatic detection and analysis of overwrite attacks, which include most types of exploits. This approach does not need source code or special compilation for the monitored program, and hence works on commodity software. To demonstrate this idea, we have implemented TaintCheck, a mechanism that can perform dynamic taint analysis by performing binary rewriting at run time. We show that TaintCheck reliably detects most types of exploits. We found that TaintCheck produced no false positives for any of the many different programs that we tested. Further, we show how we can use a two-tiered approach to build a hybrid exploit detector that enjoys the same accuracy as TaintCheck but have extremely low performance overhead. Finally, we propose a new type of automatic signature generation—semanticanalysis based signature generation. We show that by backtracing the chain of tainted data structure rooted at the detection point, TaintCheck can automatically identify which original flow and which part of the original flow have caused the attack and identify important invariants of the payload that can be used as signatures. Semantic-analysis based signature generation can be more accurate, resilient against polymorphic worms, and robust to attacks exploiting polymorphism than the pattern-extraction based signature generation methods.", "title": "" } ]
scidocsrr
27c9b07f0509e9b149f818587988b009
Context-aware Frame-Semantic Role Labeling
[ { "docid": "44582f087f9bb39d6e542ff7b600d1c7", "text": "We propose a new deterministic approach to coreference resolution that combines the global information and precise features of modern machine-learning models with the transparency and modularity of deterministic, rule-based systems. Our sieve architecture applies a battery of deterministic coreference models one at a time from highest to lowest precision, where each model builds on the previous model's cluster output. The two stages of our sieve-based architecture, a mention detection stage that heavily favors recall, followed by coreference sieves that are precision-oriented, offer a powerful way to achieve both high precision and high recall. Further, our approach makes use of global information through an entity-centric model that encourages the sharing of features across all mentions that point to the same real-world entity. Despite its simplicity, our approach gives state-of-the-art performance on several corpora and genres, and has also been incorporated into hybrid state-of-the-art coreference systems for Chinese and Arabic. Our system thus offers a new paradigm for combining knowledge in rule-based systems that has implications throughout computational linguistics.", "title": "" } ]
[ { "docid": "e718e98400738013ecd050e57f5083fb", "text": "© 2011 Jang-Hee Yoo and Mark S. Nixon 259 We present a new method for an automated markerless system to describe, analyze, and classify human gait motion. The automated system consists of three stages: i) detection and extraction of the moving human body and its contour from image sequences, ii) extraction of gait figures by the joint angles and body points, and iii) analysis of motion parameters and feature extraction for classifying human gait. A sequential set of 2D stick figures is used to represent the human gait motion, and the features based on motion parameters are determined from the sequence of extracted gait figures. Then, a knearest neighbor classifier is used to classify the gait patterns. In experiments, this provides an alternative estimate of biomechanical parameters on a large population of subjects, suggesting that the estimate of variance by marker-based techniques appeared generous. This is a very effective and well-defined representation method for analyzing the gait motion. As such, the markerless approach confirms uniqueness of the gait as earlier studies and encourages further development along these lines.", "title": "" }, { "docid": "558218868956bcd05363825fb42ef75e", "text": "Imitation learning algorithms learn viable policies by imitating an expert’s behavior when reward signals are not available. Generative Adversarial Imitation Learning (GAIL) is a state-of-the-art algorithm for learning policies when the expert’s behavior is available as a fixed set of trajectories.We evaluate in terms of the expert’s cost function and observe that the distribution of trajectory-costs is often more heavy-tailed for GAIL-agents than the expert at a number of benchmark continuous-control tasks. Thus, high-cost trajectories, corresponding to tail-end events of catastrophic failure, are more likely to be encountered by the GAIL-agents than the expert. This makes the reliability of GAIL-agents questionable when it comes to deployment in risk-sensitive applications like robotic surgery and autonomous driving. In this work, we aim to minimize the occurrence of tail-end events by minimizing tail risk within the GAIL framework. We quantify tail risk by the Conditional-Value-atRisk (CVaR) of trajectories and develop the Risk-Averse Imitation Learning (RAIL) algorithm. We observe that the policies learned with RAIL show lower tail-end risk than those of vanilla GAIL. Thus, the proposed RAIL algorithm appears as a potent alternative to GAIL for improved reliability in risk-sensitive applications.", "title": "" }, { "docid": "99485ae4547e0904198c04e88db23556", "text": "Qualitative microbiological measurement methods in which the measurement results are either 0 (microorganism not detected) or 1 (microorganism detected) are discussed. The performance of such a measurement method is described by its probability of detection as a function of the contamination (CFU/g or CFU/mL) of the test material, or by the LOD(p), i.e., the contamination that is detected (measurement result 1) with a specified probability p. A complementary log-log model was used to statistically estimate these performance characteristics. An intralaboratory experiment for the detection of Listeria monocytogenes in various food matrixes illustrates the method. 
The estimate of LOD50% is compared with the Spearman-Kaerber method.", "title": "" }, { "docid": "93bad64439be375200cce65a37c6b8c6", "text": "The mobile social network (MSN) combines techniques in social science and wireless communications for mobile networking. The MSN can be considered as a system which provides a variety of data delivery services involving the social relationship among mobile users. This paper presents a comprehensive survey on the MSN specifically from the perspectives of applications, network architectures, and protocol design issues. First, major applications of the MSN are reviewed. Next, different architectures of the MSN are presented. Each of these different architectures supports different data delivery scenarios. The unique characteristics of social relationship in MSN give rise to different protocol design issues. These research issues (e.g., community detection, mobility, content distribution, content sharing protocols, and privacy) and the related approaches to address data delivery in the MSN are described. At the end, several important research directions are outlined.", "title": "" }, { "docid": "9c7fbbde15c03078bce7bd8d07fa6d2a", "text": "• For each sense sij, we create a sense embedding E(sij), again a D-dimensional vector. • The lemma embeddings can be decomposed into a mix (e.g. a convex combination) of sense vectors, for instance F(rock) = 0.3 · E(rock-1) + 0.7 · E(rock-2). The “mix variables” pij are non-negative and sum to 1 for each lemma. • The intuition of the optimization that each sense sij should be “close” to a number of other concepts, called the network neighbors, that we know are related to it, as defined by a semantic network. For instance, rock-2 might be defined by the network to be related to other types of music.", "title": "" }, { "docid": "47b9d5585a0ca7d10cb0fd9da673dd0f", "text": "A novel deep architecture, the tensor deep stacking network (T-DSN), is presented. The T-DSN consists of multiple, stacked blocks, where each block contains a bilinear mapping from two hidden layers to the output layer, using a weight tensor to incorporate higher order statistics of the hidden binary (([0,1])) features. A learning algorithm for the T-DSN's weight matrices and tensors is developed and described in which the main parameter estimation burden is shifted to a convex subproblem with a closed-form solution. Using an efficient and scalable parallel implementation for CPU clusters, we train sets of T-DSNs in three popular tasks in increasing order of the data size: handwritten digit recognition using MNIST (60k), isolated state/phone classification and continuous phone recognition using TIMIT (1.1 m), and isolated phone classification using WSJ0 (5.2 m). Experimental results in all three tasks demonstrate the effectiveness of the T-DSN and the associated learning methods in a consistent manner. In particular, a sufficient depth of the T-DSN, a symmetry in the two hidden layers structure in each T-DSN block, our model parameter learning algorithm, and a softmax layer on top of T-DSN are shown to have all contributed to the low error rates observed in the experiments for all three tasks.", "title": "" }, { "docid": "d3d6a1793ce81ba0f4f0ffce0477a0ec", "text": "Portable Document Format (PDF) is one of the widely-accepted document format. However, it becomes one of the most attractive targets for exploitation by malware developers and vulnerability researchers. 
Malicious PDF files can be used in Advanced Persistent Threats (APTs) targeting individuals, governments, and financial sectors. The existing tools such as intrusion detection systems (IDSs) and antivirus packages are inefficient to mitigate this kind of attacks. This is because these techniques need regular updates with the new malicious PDF files which are increasing every day. In this paper, a new algorithm is presented for detecting malicious PDF files based on data mining techniques. The proposed algorithm consists of feature selection stage and classification stage. The feature selection stage is used to the select the optimum number of features extracted from the PDF file to achieve high detection rate and low false positive rate with small computational overhead. Experimental results show that the proposed algorithm can achieve 99.77% detection rate, 99.84% accuracy, and 0.05% false positive rate.", "title": "" }, { "docid": "583c6d4b7ed442cecfd1000c6c4f2a86", "text": "Web applications are increasingly subject to mass attacks, with vulnerabilities found easily in both open source and commercial applications as evinced by the fact that approximately half of reported vulnerabilities are found in web applications. In this paper, we perform an empirical investigation of the evolution of vulnerabilities in fourteen of the most widely used open source PHP web applications, finding that vulnerabilities densities declined from 28.12 to 19.96 vulnerabilities per thousand lines of code from 2006 to 2010. We also investigate whether complexity metrics or a security resources indicator (SRI) metric can be used to identify vulnerable web application showing that average cyclomatic complexity is an effective predictor of vulnerability for several applications, especially for those with low SRI scores.", "title": "" }, { "docid": "991a8c7011548af52367e426ba9beed6", "text": "Dihydrogen, methane, and carbon dioxide isotherm measurements were performed at 1-85 bar and 77-298 K on the evacuated forms of seven porous covalent organic frameworks (COFs). The uptake behavior and capacity of the COFs is best described by classifying them into three groups based on their structural dimensions and corresponding pore sizes. Group 1 consists of 2D structures with 1D small pores (9 A for each of COF-1 and COF-6), group 2 includes 2D structures with large 1D pores (27, 16, and 32 A for COF-5, COF-8, and COF-10, respectively), and group 3 is comprised of 3D structures with 3D medium-sized pores (12 A for each of COF-102 and COF-103). Group 3 COFs outperform group 1 and 2 COFs, and rival the best metal-organic frameworks and other porous materials in their uptake capacities. This is exemplified by the excess gas uptake of COF-102 at 35 bar (72 mg g(-1) at 77 K for hydrogen, 187 mg g(-1) at 298 K for methane, and 1180 mg g(-1) at 298 K for carbon dioxide), which is similar to the performance of COF-103 but higher than those observed for COF-1, COF-5, COF-6, COF-8, and COF-10 (hydrogen at 77 K, 15 mg g(-1) for COF-1, 36 mg g(-1) for COF-5, 23 mg g(-1) for COF-6, 35 mg g(-1) for COF-8, and 39 mg g(-1) for COF-10; methane at 298 K, 40 mg g(-1) for COF-1, 89 mg g(-1) for COF-5, 65 mg g(-1) for COF-6, 87 mg g(-1) for COF-8, and 80 mg g(-1) for COF-10; carbon dioxide at 298 K, 210 mg g(-1) for COF-1, 779 mg g(-1) for COF-5, 298 mg g(-1) for COF-6, 598 mg g(-1) for COF-8, and 759 mg g(-1) for COF-10). 
These findings place COFs among the most porous and the best adsorbents for hydrogen, methane, and carbon dioxide.", "title": "" }, { "docid": "91fbf465741c6a033a00a4aa982630b4", "text": "This paper presents an integrated functional link interval type-2 fuzzy neural system (FLIT2FNS) for predicting the stock market indices. The hybrid model uses a TSK (Takagi–Sugano–Kang) type fuzzy rule base that employs type-2 fuzzy sets in the antecedent parts and the outputs from the Functional Link Artificial Neural Network (FLANN) in the consequent parts. Two other approaches, namely the integrated FLANN and type-1 fuzzy logic system and Local Linear Wavelet Neural Network (LLWNN) are also presented for a comparative study. Backpropagation and particle swarm optimization (PSO) learning algorithms have been used independently to optimize the parameters of all the forecasting models. To test the model performance, three well known stock market indices like the Standard’s & Poor’s 500 (S&P 500), Bombay stock exchange (BSE), and Dow Jones industrial average (DJIA) are used. The mean absolute percentage error (MAPE) and root mean square error (RMSE) are used to find out the performance of all the three models. Finally, it is observed that out of three methods, FLIT2FNS performs the best irrespective of the time horizons spanning from 1 day to 1 month. © 2011 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "5356a208f0f6eb4659b2a09a106bab8d", "text": "Objective: Traditional Cognitive Training with paper-pencil tasks (PPCT) and Computer-Based Cognitive Training (C-BCT) both are effective for people with Mild Cognitive Impairment (MCI). The aim of this study is to evaluate the efficacy of a C-BCT program versus a PPCT one. Methods: One hundred and twenty four (n=124) people with amnesic & multiple domains MCI (aMCImd) diagnosis were randomly assigned in two groups, a PPCT group (n=65), and a C-BCT (n=59). The groups were matched at baseline in age, gender, education, cognitive and functional performance. Both groups attended 48 weekly 1-hour sessions of attention and executive function training for 12 months. Neuropsychological assessment was performed at baseline and 12 months later. Results: At the follow up, the PPCT group was better than the C-BCT group in visual selective attention (p≤ 0.022). The C-BCT group showed improvement in working memory (p=0.042) and in speed of switching of attention (p=0.012), while the PPCT group showed improvement in general cognitive function (p=0.005), learning ability (p=0.000), delayed verbal recall (p=0.000), visual perception (p=0.013) and visual memory (p=0.000), verbal fluency (p=0.000), visual selective attention (p=0.021), speed of switching of attention (p=0.001), visual selective attention/multiple choices (p=0.010) and Activities of Daily Living (ADL) as well (p=0.001). Conclusion: Both C-BCT and PPCT are beneficial for people with aMCImd concerning cognitive functions. However, the administration of a traditional PPCT program seems to affect a greater range of cognitive abilities and transfer the primary cognitive benefit in real life.", "title": "" }, { "docid": "395f97b609acb40a8922eb4a6d398c0a", "text": "Ambient obscurance (AO) produces perceptually important illumination effects such as darkened corners, cracks, and wrinkles; proximity darkening; and contact shadows. We present the AO algorithm from the Alchemy engine used at Vicarious Visions in commercial games. 
It is based on a new derivation of screen-space obscurance for robustness, and the insight that a falloff function can cancel terms in a visibility integral to favor efficient operations. Alchemy creates contact shadows that conform to surfaces, captures obscurance from geometry of varying scale, and provides four intuitive appearance parameters: world-space radius and bias, and aesthetic intensity and contrast.\n The algorithm estimates obscurance at a pixel from sample points read from depth and normal buffers. It processes dynamic scenes at HD 720p resolution in about 4.5 ms on Xbox 360 and 3 ms on NVIDIA GeForce580.", "title": "" }, { "docid": "dc259f1208eac95817d067b9cd13fa7c", "text": "This paper introduces a novel approach to texture synthesis based on generative adversarial networks (GAN) (Goodfellow et al., 2014). We extend the structure of the input noise distribution by constructing tensors with different types of dimensions. We call this technique Periodic Spatial GAN (PSGAN). The PSGAN has several novel abilities which surpass the current state of the art in texture synthesis. First, we can learn multiple textures from datasets of one or more complex large images. Second, we show that the image generation with PSGANs has properties of a texture manifold: we can smoothly interpolate between samples in the structured noise space and generate novel samples, which lie perceptually between the textures of the original dataset. In addition, we can also accurately learn periodical textures. We make multiple experiments which show that PSGANs can flexibly handle diverse texture and image data sources. Our method is highly scalable and it can generate output images of arbitrary large size.", "title": "" }, { "docid": "fb9bbfc3e301cb669663a12d1f18a11f", "text": "In extensively modified landscapes, how the matrix is managed determines many conservation outcomes. Recent publications revise popular conceptions of a homogeneous and static matrix, yet we still lack an adequate conceptual model of the matrix. Here, we identify three core effects that influence patch-dependent species, through impacts associated with movement and dispersal, resource availability, and the abiotic environment. These core effects are modified by five 'dimensions': spatial and temporal variation in matrix quality; spatial scale; temporal scale of matrix variation; and adaptation. The conceptual domain of the matrix, defined as three core effects and their interaction with these five dimensions, provides a much-needed framework to underpin management of fragmented landscapes and highlights new research priorities.", "title": "" }, { "docid": "a9b769e33467cdcc86ab47b5183e5a5b", "text": "The focus of this study is to examine the motivations of online community members to share information and rumors. We investigated an online community of interest, the members of which voluntarily associate and communicate with people with similar interests. Community members, posters and lurkers alike, were surveyed on the influence of extrinsic and intrinsic motivations, as well as normative influences, on their willingness to share information and rumors with others. The results indicated that posters and lurkers are differently motivated by intrinsic factors to share, and that extrinsic rewards like improved reputation and status-building within the community are motivating factors for rumor mongering. 
The results are discussed and future directions for this area of research are offered.", "title": "" }, { "docid": "b181715b75842987e5f30ccd5765e378", "text": "Klondike Solitaire – also known as Patience – is a well-known single player card game. We studied several classes of Klondike Solitaire game configurations. We present a dynamic programming solution for counting the number of “unplayable” games. This method is extended for a subset of games which allow exactly one move. With an algorithm based on the inclusion-exclusion principle, symmetry elimination and a trade-off between lookup tables and dynamic programming we count the number of games that cannot be won due to a specific type of conflict. The size of a larger class of conflicting configurations is approximated with a Monte Carlo simulation. We investigate how much gameplay is limited by the stock. We give a recursion and show that Pfaff-Fuss-Catalan is a lower bound. We consider trivial games and report on two remarkable patterns we discovered.", "title": "" }, { "docid": "b9a5cedbec1b6cd5091fb617c0513a13", "text": "The cerebellum undergoes a protracted development, making it particularly vulnerable to a broad spectrum of developmental events. Acquired destructive and hemorrhagic insults may also occur. The main steps of cerebellar development are reviewed. The normal imaging patterns of the cerebellum in prenatal ultrasound and magnetic resonance imaging (MRI) are described with emphasis on the limitations of these modalities. Because of confusion in the literature regarding the terminology used for cerebellar malformations, some terms (agenesis, hypoplasia, dysplasia, and atrophy) are clarified. Three main pathologic settings are considered and the main diagnoses that can be suggested are described: retrocerebellar fluid enlargement with normal or abnormal biometry (Dandy-Walker malformation, Blake pouch cyst, vermian agenesis), partially or globally decreased cerebellar biometry (cerebellar hypoplasia, agenesis, rhombencephalosynapsis, ischemic and/or hemorrhagic damage), partially or globally abnormal cerebellar echogenicity (ischemic and/or hemorrhagic damage, cerebellar dysplasia, capillary telangiectasia). The appropriate timing for performing MRI is also discussed.", "title": "" }, { "docid": "17f7360d6eda0ddddbf27c6de21a3746", "text": "Accurate, robust, inexpensive gaze tracking in the car can help keep a driver safe by facilitating the more effective study of how to improve (1) vehicle interfaces and (2) the design of future Advanced Driver Assistance Systems. In this paper, we estimate head pose and eye pose from monocular video using methods developed extensively in prior work and ask two new interesting questions. First, how much better can we classify driver gaze using head and eye pose versus just using head pose? Second, are there individual-specific gaze strategies that strongly correlate with how much gaze classification improves with the addition of eye pose information? We answer these questions by evaluating data drawn from an on-road study of 40 drivers. The main insight of the paper is conveyed through the analogy of an “owl” and “lizard” which describes the degree to which the eyes and the head move when shifting gaze. When the head moves a lot (“owl”), not much classification improvement is attained by estimating eye pose on top of head pose. On the other hand, when the head stays still and only the eyes move (“lizard”), classification accuracy increases significantly from adding in eye pose. 
We characterize how that accuracy varies between people, gaze strategies, and gaze regions.", "title": "" }, { "docid": "8b5bf5c5717d77c7a8b836758e9cd37e", "text": "Purpose – Due to the size and velocity at which user generated content is created on social media services such as Twitter, analysts are often limited by the need to pre-determine the specific topics and themes they wish to follow. Visual analytics software may be used to support the interactive discovery of emergent themes. The paper aims to discuss these issues. Design/methodology/approach – Tweets collected from the live Twitter stream matching a user’s query are stored in a database, and classified based on their sentiment. The temporally changing sentiment is visualized, along with sparklines showing the distribution of the top terms, hashtags, user mentions, and authors in each of the positive, neutral, and negative classes. Interactive tools are provided to support sub-querying and the examination of emergent themes. Findings – A case study of using Vista to analyze sport fan engagement within a mega-sport event (2013 Le Tour de France) is provided. The authors illustrate how emergent themes can be identified and isolated from the large collection of data, without the need to identify these a priori. Originality/value – Vista provides mechanisms that support the interactive exploration among Twitter data. By combining automatic data processing and machine learning methods with interactive visualization software, researchers are relieved of tedious data processing tasks, and can focus on the analysis of high-level features of the data. In particular, patterns of Twitter use can be identified, emergent themes can be isolated, and purposeful samples of the data can be selected by the researcher for further analysis.", "title": "" }, { "docid": "4ca5fec568185d3699c711cc86104854", "text": "Attackers often create systems that automatically rewrite and reorder their malware to avoid detection. Typical machine learning approaches, which learn a classifier based on a handcrafted feature vector, are not sufficiently robust to such reorderings. We propose a different approach, which, similar to natural language modeling, learns the language of malware spoken through the executed instructions and extracts robust, time domain features. Echo state networks (ESNs) and recurrent neural networks (RNNs) are used for the projection stage that extracts the features. These models are trained in an unsupervised fashion. A standard classifier uses these features to detect malicious files. We explore a few variants of ESNs and RNNs for the projection stage, including Max-Pooling and Half-Frame models which we propose. The best performing hybrid model uses an ESN for the recurrent model, Max-Pooling for non-linear sampling, and logistic regression for the final classification. Compared to the standard trigram of events model, it improves the true positive rate by 98.3% at a false positive rate of 0.1%.", "title": "" } ]
scidocsrr
f96b809d0bef1e2640bcb9c2b9486305
Music segmentation and summarization based on self-similarity matrix
[ { "docid": "6327964ae4eb3410a1772edee4ff358d", "text": "We introduce a method for the automatic extraction of musical structures in popular music. The proposed algorithm uses non-negative matrix factorization to segment regions of acoustically similar frames in a self-similarity matrix of the audio data. We show that over the dimensions of the NMF decomposition, structural parts can easily be modeled. Based on that observation, we introduce a clustering algorithm that can explain the structure of the whole music piece. The preliminary evaluation we report in the the paper shows very encouraging results.", "title": "" } ]
[ { "docid": "328860ae6cccc7530de9aab8a1a58c5e", "text": "Electrochemical approaches have played crucial roles in bio sensing because of their Potential in achieving sensitive, specific and low-cost detection of biomolecules and other bio evidences. Engineering the electrochemical sensing interface with nanomaterials tends to new generations of label-free biosensors with improved performances in terms of sensitive area and response signals. Here we applied Silicon Nanowire (SiNW) array electrodes (in an integrated architecture of working, counter and reference electrodes) grown by low pressure chemical vapor deposition (LPCVD) system with VLS procedure to electrochemically diagnose the presence of breast cancer cells as well as their response to anticancer drugs. Mebendazole (MBZ), has been used as antitubulin drug. It perturbs the anodic/cathodic response of the cell covered biosensor by releasing Cytochrome C in cytoplasm. Reduction of cytochrome C would change the ionic state of the cells monitored by SiNW biosensor. By applying well direct bioelectrical contacts with cancer cells, SiNWs can detect minor signal transduction and bio recognition events, resulting in precise biosensing. Our device detected the trace of MBZ drugs (with the concentration of 2nM) on electrochemical activity MCF-7 cells. Also, experimented biological analysis such as confocal and Flowcytometry assays confirmed the electrochemical results.", "title": "" }, { "docid": "5c8ed4f3831ce864cbdaea07171b5a57", "text": "Hyper-beta-alaninemia is a rare metabolic condition that results in elevated plasma and urinary β-alanine levels and is characterized by neurotoxicity, hypotonia, and respiratory distress. It has been proposed that at least some of the symptoms are caused by oxidative stress; however, only limited information is available on the mechanism of reactive oxygen species generation. The present study examines the hypothesis that β-alanine reduces cellular levels of taurine, which are required for normal respiratory chain function; cellular taurine depletion is known to reduce respiratory function and elevate mitochondrial superoxide generation. To test the taurine hypothesis, isolated neonatal rat cardiomyocytes and mouse embryonic fibroblasts were incubated with medium lacking or containing β-alanine. β-alanine treatment led to mitochondrial superoxide accumulation in conjunction with a decrease in oxygen consumption. The defect in β-alanine-mediated respiratory function was detected in permeabilized cells exposed to glutamate/malate but not in cells utilizing succinate, suggesting that β-alanine leads to impaired complex I activity. Taurine treatment limited mitochondrial superoxide generation, supporting a role for taurine in maintaining complex I activity. Also affected by taurine is mitochondrial morphology, as β-alanine-treated fibroblasts undergo fragmentation, a sign of unhealthy mitochondria that is reversed by taurine treatment. If left unaltered, β-alanine-treated fibroblasts also undergo mitochondrial apoptosis, as evidenced by activation of caspases 3 and 9 and the initiation of the mitochondrial permeability transition. 
Together, these data show that β-alanine mediates changes that reduce ATP generation and enhance oxidative stress, factors that contribute to heart failure.", "title": "" }, { "docid": "6ac231de51b69685fcb45d4ef2b32051", "text": "This paper deals with a design and motion planning algorithm of a caterpillar-based pipeline robot that can be used for inspection of 80-100-mm pipelines in an indoor pipeline environment. The robot system uses a differential drive to steer the robot and spring loaded four-bar mechanisms to assure that the robot expands to grip the pipe walls. Unique features of this robot are the caterpillar wheels, the analysis of the four-bar mechanism supporting the treads, a closed-form kinematic approach, and an intuitive user interface. In addition, a new motion planning approach is proposed, which uses springs to interconnect two robot modules and allows the modules to cooperatively navigate through difficult segments of the pipes. Furthermore, an analysis method of selecting optimal compliance to assure functionality and cooperation is suggested. Simulation and experimental results are used throughout the paper to highlight algorithms and approaches.", "title": "" }, { "docid": "0ff159433ed8958109ba8006822a2d67", "text": "In this paper we present VideoSET, a method for Video Summary Evaluation through Text that can evaluate how well a video summary is able to retain the semantic information contained in its original video. We observe that semantics is most easily expressed in words, and develop a text-based approach for the evaluation. Given a video summary, a text representation of the video summary is first generated, and an NLP-based metric is then used to measure its semantic distance to ground-truth text summaries written by humans. We show that our technique has higher agreement with human judgment than pixel-based distance metrics. We also release text annotations and ground-truth text summaries for a number of publicly available video datasets, for use by the computer vision community.", "title": "" }, { "docid": "2bc0102fdc3a66ca5262bdaa90a94187", "text": "Visual localization enables autonomous vehicles to navigate in their surroundings and Augmented Reality applications to link virtual to real worlds. In order to be practically relevant, visual localization approaches need to be robust to a wide variety of viewing condition, including day-night changes, as well as weather and seasonal variations. In this paper, we introduce the first benchmark datasets specifically designed for analyzing the impact of such factors on visual localization. Using carefully created ground truth poses for query images taken under a wide variety of conditions, we evaluate the impact of various factors on the quality of 6 degree-of-freedom (6DOF) camera pose estimation through extensive experiments with state-of-the-art localization approaches. Based on our results, we draw conclusions about the difficulty of different conditions and propose promising avenues for future work. We will eventually make our two novel benchmarks publicly available.", "title": "" }, { "docid": "033fae2e8e219fb74ae8f39b5c176f25", "text": "Wireless Sensor Networks (WSNs) have become a leading solution in many important applications such as intrusion detection, target tracking, industrial automation, smart building and so on. Typically, a WSN consists of a large number of small, low-cost sensor nodes that are distributed in the target area for collecting data of interest. 
For a WSN to provide high throughput in an energy-efficient way, designing an efficient Medium Access Control (MAC) protocol is of paramount importance because the MAC layer coordinates nodes' access to the shared wireless medium. To show the evolution of WSN MAC protocols, this article surveys the latest progresses in WSN MAC protocol designs over the period 2002-2011. In the early development stages, designers were mostly concerned with energy efficiency because sensor nodes are usually limited in power supply. Recently, new protocols are being developed to provide multi-task support and efficient delivery of bursty traffic. Therefore, research attention has turned back to throughput and delay. This article details the evolution of WSN MAC protocols in four categories: asynchronous, synchronous, frame-slotted, and multichannel. These designs are evaluated in terms of energy efficiency, data delivery performance, and overhead needed to maintain a protocol's mechanisms. With extensive analysis of the protocols many future directions are stated at the end of this survey. The performance of different classes of protocols could be substantially improved in future designs by taking into consideration the recent advances in technologies and application demands.", "title": "" }, { "docid": "1d4c583da38709054140152fe328294c", "text": "This paper analyzes the assumptions of the decision making models in the context of artificial general intelligence (AGI). It is argued that the traditional approaches, exemplified by decision theory and reinforcement learning, are inappropriate for AGI, because their fundamental assumptions on available knowledge and resource cannot be satisfied here. The decision making process in the AGI system NARS is introduced and compared with the traditional approaches. It is concluded that realistic decision-making models must acknowledge the insufficiency of knowledge and resources, and make assumptions accordingly. 1 Formalizing decision-making An AGI system needs to make decisions from time to time. To achieve its goals, the system must execute certain operations, which are chosen from all possible operations, according to the system’s beliefs on the relations between the operations and the goals, as well as their applicability to the current situation. On this topic, the dominating normative model is decision theory [12, 3]. According to this model, “decision making” means to choose one action from a finite set of actions that is applicable at the current state. Each action leads to some consequent states according to a probability distribution, and each consequent state is associated with a utility value. The rational choice is the action that has the maximum expected utility (MEU). When the decision extends from single actions to action sequences, it is often formalized as a Markov decision process (MDP), where the utility function is replaced by a reward value at each state, and the optimal policy, as a collection of decisions, is the one that achieves the maximum expected total reward (usually with a discount for future rewards) in the process. In AI, the best-known approach toward solving this problem is reinforcement learning [4, 16], which uses various algorithms to approach the optimal policy. Decision theory and reinforcement learning have been widely considered as setting the theoretical foundation of AI research [11], and the recent progress in deep learning [9] is increasing the popularity of these models. 
In the current AGI research, an influential model in this tradition is AIXI [2], in which reinforcement learning is combined with Solomonoff induction [15] to provide the probability values according to algorithmic complexity of the hypotheses used in prediction. 2 P. Wang and P. Hammer Every formal model is based on some fundamental assumptions to encapsulate certain beliefs about the process to be modeled, so as to provide a coherent foundation for the conclusions derived in the model, and also to set restrictions on the situations where the model can be legally applied. In the following, four major assumptions of the above models are summarized. The assumption on task: The task of “decision making” is to select the best action from all applicable actions at each state of the process. The assumption on belief: The selection is based on the system’s beliefs about the actions, represented as probability distributions among their consequent states. The assumption on desire: The selection is guided by the system’s desires measured by a (utility or reward) value function defined on states, and the best action is the one that with the maximum expectation. The assumption on budget: The system can afford the computational resources demanded by the selection algorithm. There are many situations where the above assumptions can be reasonably accepted, and the corresponding models have been successfully applied [11, 9]. However, there are reasons to argue that artificial general intelligence (AGI) is not such a field, and there are non-trivial issues on each of the four assumptions. Issues on task: For a general-purpose system, it is unrealistic to assume that at any state all the applicable actions are explicitly listed. Actually, in human decision making the evaluation-choice step is often far less significant than diagnosis or design [8]. Though in principle it is reasonable to assume the system’s actions are recursively composed of a set of basic operations, decision makings often do not happen at the level of basic operations, but at the level of composed actions, where there are usually infinite possibilities. So decision making is often not about selection, but selective composition. Issues on belief: For a given action, the system’s beliefs about its possible consequences are not necessarily specified as a probability distribution among following states. Actions often have unanticipated consequences, and even the beliefs about the anticipated consequences usually do not fully specify a “state” of the environment or the system itself. Furthermore, the system’s beliefs about the consequences may be implicitly inconsistent, so does not correspond to a probability distribution. Issues on desire: Since an AGI system typically has multiple goals with conflicting demands, usually no uniform value function can evaluate all actions with respect to all goals within limited time. Furthermore, the goals in an AGI system change over time, and it is unrealistic to expect such a function to be defined on all future states. How desirable a situation is should be taken as part of the problem to be solved, rather than as a given. Issues on budget: An AGI is often expected to handle unanticipated problems in real time with various time requirements. In such a situation, even if the decision-making algorithms are considered as of “tractable” computational complexity, they may still fail to satisfy the requirement on response time in the given situation. 
Assumptions of Decision-Making Models in AGI 3 None of the above issues is completely unknown, and various attempts have been proposed to extend the traditional models [13, 22, 1], though none of them has rejected the four assumptions altogether. Instead, a typical attitude is to take decision theory and reinforcement learning as idealized models for the actual AGI systems to approximate, as well as to be evaluated accordingly [6]. What this paper explores is the possibility of establishing normative models of decision making without accepting any of the above four assumptions. In the following, such a model is introduced, then compared with the classical models. 2 Decision making in NARS The decision-making model to be introduced comes from the NARS project [17, 18, 20]. The objective of this project is to build an AGI in the framework of a reasoning system. Decision making is an important function of the system, though it is not carried out by a separate algorithm or module, but tightly interwoven with other functions, such as reasoning and learning. Limited by the paper length, the following description only briefly covers the aspects of NARS that are directly related to the current discussion. NARS is designed according to the theory that “intelligence” is the ability for a system to be adaptive while working with insufficient knowledge and resources, that is, the system must depend on finite processing capability, make real-time responses, open to unanticipated problems and events, and learn from its experience. Under this condition, it is impossible for the truth-value of beliefs of the system to be defined either in the model-theoretic style as the extent of agreement with the state of affairs, or in the proof-theoretic style as the extent of agreement with the given axioms. Instead, it is defined as the extent of agreement with the available evidence collected from the system’s experience. Formally, for a given statement S, the amount of its positive evidence and negative evidence are defined in an idealized situation and measured by amounts w and w−, respectively, and the total amount evidence is w = w + w−. The truth-value of S is a pair of real numbers, 〈f, c〉, where f , frequency, is w/w so in [0, 1], and c, confidence, is w/(w + 1) so in (0, 1). Therefore a belief has a form of “S〈f, c〉”. As the content of belief, statement S is a sentence in a formal language Narsese. Each statement expresses a relation among a few concepts. For the current discussion, it is enough to know that a statement may have various internal structures for different types of conceptual relation, and can contain other statements as components. In particular, implication statement P ⇒ Q and equivalence statement P ⇔ Q express “If P then Q” and “P if and only if Q”, respectively, where P and Q are statements themselves. As a reasoning system, NARS can carry out three types of inference tasks: Judgment. A judgment also has the form of “S〈f, c〉”, and represents a piece of new experience to be absorbed into the system’s beliefs. Besides adding it into memory, the system may also use it to revise or update the previous beliefs on statement S, as well as to derive new conclusions using various inference rules (including deduction, induction, abduction, analogy, etc.). Each 4 P. Wang and P. Hammer rule uses a truth-value function to calculate the truth-value of the conclusion according to the evidence provided by the premises. 
For example, the deduction rule can take P 〈f1, c1〉 and P ⇒ Q 〈f2, c2〉 to derive Q〈f, c〉, where 〈f, c〉 is calculated from 〈f1, c1〉 and 〈f2, c2〉 by the truth-value function for deduction. There is also a revision rule that merges distinct bodies of evidence on the same statement to produce more confident judgments. Question. A question has the form of “S?”, and represents a request for the system to find the truth-value of S according to its current beliefs. A question may contain variables to be instantiated. Besides looking in the memory for a matching belief, the system may also use the inference rules backwards to generate derived questions, whose answers will lead to answers of the original question. For example, from question Q? and belief P ⇒ Q 〈f, c〉, a new question P? can be proposed by the deduction rule. When there are multiple candidate answers, a choice rule ", "title": "" }, { "docid": "dc71729ebd3c2a66c73b16685c8d12af", "text": "A list of related materials, with annotations to guide further exploration of the article's ideas and applications 11 Further Reading A company's bid to rally an industry ecosystem around a new competitive view is an uncertain gambit. But the right strategic approaches and the availability of modern digital infrastructures improve the odds for success.", "title": "" }, { "docid": "df2b5f4edb9631b910da72ee3058fd68", "text": "A method to reduce peak electricity demand in building climate control by using real-time electricity pricing and applying model predictive control (MPC) is investigated. We propose to use a newly developed time-varying, hourly-based electricity tariff for end-consumers, that has been designed to truly reflect marginal costs of electricity provision, based on spot market prices as well as on electricity grid load levels, which is directly incorporated into the MPC cost function. Since this electricity tariff is only available for a limited time window into the future we use least-squares support vector machines for electricity tariff price forecasting and thus provide the MPC controller with the necessary estimated time-varying costs for the whole prediction horizon. In the given context, the hourly pricing provides an economic incentive for a building controller to react sensitively with respect to high spot market electricity prices and high grid loading, respectively. Within the proposed tariff regime, grid-friendly behaviour is rewarded. It can be shown that peak electricity demand of buildings can be significantly reduced. The here presented study is an example for the successful implementation of demand response (DR) in the field of building climate control.", "title": "" }, { "docid": "7c7bec32e3949f3a6c0e1109cacd80f5", "text": "Attackers can render distributed denial-of-service attacks more difficult to defend against by bouncing their flooding traffic off of reflectors; that is, by spoofing requests from the victim to a large set of Internet servers that will in turn send their combined replies to the victim. The resulting dilution of locality in the flooding stream complicates the victim's abilities both to isolate the attack traffic in order to block it, and to use traceback techniques for locating the source of streams of packets with spoofed source addresses, such as ITRACE [Be00a], probabilistic packet marking [SWKA00], [SP01], and SPIE [S+01]. 
We discuss a number of possible defenses against reflector attacks, finding that most prove impractical, and then assess the degree to which different forms of reflector traffic will have characteristic signatures that the victim can use to identify and filter out the attack traffic. Our analysis indicates that three types of reflectors pose particularly significant threats: DNS and Gnutella servers, and TCP-based servers (particularly Web servers) running on TCP implementations that suffer from predictable initial sequence numbers. We argue in conclusion in support of \"reverse ITRACE\" [Ba00] and for the utility of packet traceback techniques that work even for low volume flows, such as SPIE.", "title": "" }, { "docid": "872bda80d61c5ef4f30f073a69076050", "text": "Given a terabyte click log, can we build an efficient and effective click model? It is commonly believed that web search click logs are a gold mine for search business, because they reflect users' preference over web documents presented by the search engine. Click models provide a principled approach to inferring user-perceived relevance of web documents, which can be leveraged in numerous applications in search businesses. Due to the huge volume of click data, scalability is a must.\n We present the click chain model (CCM), which is based on a solid, Bayesian framework. It is both scalable and incremental, perfectly meeting the computational challenges imposed by the voluminous click logs that constantly grow. We conduct an extensive experimental study on a data set containing 8.8 million query sessions obtained in July 2008 from a commercial search engine. CCM consistently outperforms two state-of-the-art competitors in a number of metrics, with over 9.7% better log-likelihood, over 6.2% better click perplexity and much more robust (up to 30%) prediction of the first and the last clicked position.", "title": "" }, { "docid": "16bf05d14d0f4bed68ecbf2fb60b2cc7", "text": "Amaç: Akıllı telefonlar iletişim amaçlı kullanımları yanında internet, fotoğraf makinesi, video-ses kayıt cihazı, navigasyon, müzik çalar gibi birçok özelliğin bir arada toplandığı günümüzün popüler teknolojik cihazlarıdır. Akıllı telefonların kullanımı hızla artmaktadır. Bu hızlı artış akıllı telefonlara bağımlılığı ve problemli kullanımı beraberinde getirmektedir. Bizim bildiğimiz kadarıyla Türkiye’de akıllı telefonlara bağımlılığı değerlendiren ölçek yoktur. Bu çalışmanın amacı Akıllı Telefon Bağımlılığı Ölçeği’nin Türkçe’ye uyarlanması, geçerlik ve güvenilirliğinin incelenmesidir. Yöntem: Çalışmanın örneklemini Süleyman Demirel Üniversitesi Tıp Fakültesi’nde eğitim gören ve akıllı telefon kullanıcısı olan 301 üniversite öğrencisi oluşturmuştur. Çalışmada veri toplama araçları olarak Akıllı Telefon Bağımlılığı Ölçeği, Bilgi Formu, İnternet Bağımlılığı Ölçeği ve Problemli Cep Telefonu Kullanımı Ölçeği kullanılmıştır. Ölçekler, tüm katılımcılara Bilgi Formu hep ilk sırada olacak şekilde karışık sırayla verilmiştir. Ölçeklerin doldurulması yaklaşık 20 dakika sürmüştür. Test-tekrar-test uygulaması rastgele belirlenmiş 30 öğrenci ile (rumuz yardımıyla) üç hafta sonra yapılmıştır. Ölçeğin faktör yapısı açıklayıcı faktör analizi ve varimaks rotasyonu ile incelenmiştir. Güvenilirlik analizi için iç tutarlılık, iki-yarım güvenilirlik ve test-tekrar test güvenilirlik analizleri uygulanmıştır. Ölçüt bağıntılı geçerlilik analizinde Pearson korelasyon analizi kullanılmıştır. 
Bulgular: Faktör Analizi yedi faktörlü bir yapı ortaya koymuş, maddelerin faktör yüklerinin 0,349-0,824 aralığında değiştiği belirlenmiştir. Ölçeğin Cronbach alfa iç tutarlılık katsayısı 0,947 bulunmuştur. Ölçeğin diğer ölçeklerle arasındaki korelasyonlar istatistiksel olarak anlamlı bulunmuştur. Test-tekrar test güvenilirliğinin yüksek olduğu (r=0,814) bulunmuştur. İki yarım güvenilirlik analizinde Guttman Splithalf katsayısı 0,893 olarak saptanmıştır. Kız öğrencilerde ölçek toplam puan ortalamasının erkeklerden istatistiksel olarak önemli düzeyde yüksek olduğu bulunmuştur (p=0,03). Yaş ile ölçek toplam puanı arasında anlamlı olmayan negatif ilişki saptanmıştır (r=-0.086, p=0,13). En yüksek ölçek puan ortalaması 16 saat üzeri kullananlarda gözlenmiş olup 4 saatten az kullananlardan istatistiksel olarak önemli derecede fazla bulunmuştur (p=0,01). Ölçek toplam puanı akıllı telefonu en çok kullanım amacına göre karşılaştırıldığında en yüksek ortalamanın oyun kategorisinde olduğu ancak internet (p=0,44) ve sosyal ağ (p=0,98) kategorilerinden farklı olmadığı, ayrıca telefon (p=0,02), SMS (p=0,02) ve diğer kullanım amacı (p=0,04) kategori ortalamalarından istatistiksel olarak önemli derecede fazla olduğu bulunmuştur. Akıllı telefon bağımlısı olduğunu düşünenlerin ve bu konuda emin olmayanların toplam ölçek puanları akıllı telefon bağımlısı olduğunu düşünmeyenlerin toplam ölçek puanlarından anlamlı şekilde yüksek bulunmuştur (p=0,01). Sonuç: Bu çalışmada, Akıllı telefon Bağımlılığı Ölçeği’nin Türkçe formunun akıllı telefon bağımlılığının değerlendirilmesinde geçerli ve güvenilir bir ölçüm aracı olduğu bulunmuştur.", "title": "" }, { "docid": "0db200113ef14c8e88a3388c595148a6", "text": "Entity disambiguation is the task of mapping ambiguous terms in natural-language text to its entities in a knowledge base. It finds its application in the extraction of structured data in RDF (Resource Description Framework) from textual documents, but equally so in facilitating artificial intelligence applications, such as Semantic Search, Reasoning and Question & Answering. We propose a new collective, graph-based disambiguation algorithm utilizing semantic entity and document embeddings for robust entity disambiguation. Robust thereby refers to the property of achieving better than state-of-the-art results over a wide range of very different data sets. Our approach is also able to abstain if no appropriate entity can be found for a specific surface form. Our evaluation shows, that our approach achieves significantly (>5%) better results than all other publicly available disambiguation algorithms on 7 of 9 datasets without data set specific tuning. Moreover, we discuss the influence of the quality of the knowledge base on the disambiguation accuracy and indicate that our algorithm achieves better results than non-publicly available state-of-the-art algorithms.", "title": "" }, { "docid": "6fe71d8d45fa940f1a621bfb5b4e14cd", "text": "We present Attract-Repel, an algorithm for improving the semantic quality of word vectors by injecting constraints extracted from lexical resources. Attract-Repel facilitates the use of constraints from mono- and cross-lingual resources, yielding semantically specialized cross-lingual vector spaces. Our evaluation shows that the method can make use of existing cross-lingual lexicons to construct high-quality vector spaces for a plethora of different languages, facilitating semantic transfer from high- to lower-resource ones. 
The effectiveness of our approach is demonstrated with state-of-the-art results on semantic similarity datasets in six languages. We next show that Attract-Repel-specialized vectors boost performance in the downstream task of dialogue state tracking (DST) across multiple languages. Finally, we show that cross-lingual vector spaces produced by our algorithm facilitate the training of multilingual DST models, which brings further performance improvements.", "title": "" }, { "docid": "1a98b0d00afd29474fb40b76ca2b0ce6", "text": "The intended readership of this volume is the full range of behavioral scientists, mental health professionals, and students aspiring to such roles who work with children. This includes psychologists (applied, clinical, counseling, developmental, school, including academics, researchers, and practitioners), family counselors, psychiatrists, social workers, psychiatric nurses, child protection workers, and any other mental health professionals who work with children, adolescents, and their families.", "title": "" }, { "docid": "9818399b4c119b58723c59e76bbfc1bd", "text": "Many vertex-centric graph algorithms can be expressed using asynchronous parallelism by relaxing certain read-after-write data dependences and allowing threads to compute vertex values using stale (i.e., not the most recent) values of their neighboring vertices. We observe that on distributed shared memory systems, by converting synchronous algorithms into their asynchronous counterparts, algorithms can be made tolerant to high inter-node communication latency. However, high inter-node communication latency can lead to excessive use of stale values causing an increase in the number of iterations required by the algorithms to converge. Although by using bounded staleness we can restrict the slowdown in the rate of convergence, this also restricts the ability to tolerate communication latency. In this paper we design a relaxed memory consistency model and consistency protocol that simultaneously tolerate communication latency and minimize the use of stale values. This is achieved via a coordinated use of best effort refresh policy and bounded staleness. We demonstrate that for a range of asynchronous graph algorithms and PDE solvers, on an average, our approach outperforms algorithms based upon: prior relaxed memory models that allow stale values by at least 2.27x; and Bulk Synchronous Parallel (BSP) model by 4.2x. We also show that our approach frequently outperforms GraphLab, a popular distributed graph processing framework.", "title": "" }, { "docid": "7e0d65fee19baefe31a4e14bf25f42ee", "text": "This paper describes the process for documenting programs using Aspect-Oriented PHP through AOPHPdoc. We discuss some of the problems involved in documenting Aspect-Oriented programs, solutions to these problems, and the creation of documentation with AOPHPdoc. A survey of programmers found no preference for Javadoc-styled documentation over the colored-coded AOPHP documentation.", "title": "" }, { "docid": "ff5d3f4ef4431c7144c12f5da563e347", "text": "Ankle inversion-eversion compliance is an important feature of conventional prosthetic feet, and control of inversion, or roll, in robotic prostheses could improve balance for people with amputation. We designed a tethered ankle-foot prosthesis with two independently-actuated toes that are coordinated to provide plantarflexion and inversion-eversion torques. This configuration allows a simple lightweight structure with a total mass of 0.72 kg. 
Strain gages on the toes measure torque with less than 2.7% RMS error, while compliance in the Bowden cable tether provides series elasticity. Benchtop tests demonstrated a 90% rise time of less than 33 ms and peak torques of 180 N·m in plantarflexion and ±30 N·m in inversion-eversion. The phase-limited closedloop torque bandwidth is 20 Hz with a 90 N·m amplitude chirp in plantarflexion, and 24 Hz with a 20 N·m amplitude chirp in inversion-eversion. The system has low sensitivity to toe position disturbances at frequencies of up to 18 Hz. Walking trials with five values of constant inversion-eversion torque demonstrated RMS torque tracking errors of less than 3.7% in plantarflexion and less than 5.9% in inversion-eversion. These properties make the platform suitable for haptic rendering of virtual devices in experiments with humans, which may reveal strategies for improving balance or allow controlled comparisons of conventional prosthesis features. A similar morphology may be effective for autonomous devices.", "title": "" }, { "docid": "36c568dd8c860a44aa376db3319f09b9", "text": "Future autonomous vehicles and ADAS (Advanced Driver Assistance Systems) need real-time audio and video transmission together with control data traffic (CDT). Audio/video stream delay analysis has been largely investigated in AVB (Audio Video Bridging) context, but not yet with the presence of the CDT in the new TSN context. In this paper we present a local delay analysis of AVB frames under hierarchical scheduling of credit-based shaping and time-aware shaping on TSN switches. We present the effects of time aware shaping on AVB traffic, how it changes the relative order of transmission of frames leading to bursts and worst case scenarios for lower priority streams. We also show that these bursts are upper-bounded by the Credit-Bases Shaper, hence the worst-case transmissions delay of a given stream is also upper-bounded. We present the analysis to compute the worst case delay for a frame, as well as the feasibility condition necessary for the analysis to be applied. Our methods (analysis and simulation) are applied to an automotive use case, which is defined within the Eurostars RETINA project, and where both control data traffic and AVB traffic must be guaranteed.", "title": "" }, { "docid": "11a69c06f21e505b3e05384536108325", "text": "Most existing machine learning classifiers are highly vulnerable to adversarial examples. An adversarial example is a sample of input data which has been modified very slightly in a way that is intended to cause a machine learning classifier to misclassify it. In many cases, these modifications can be so subtle that a human observer does not even notice the modification at all, yet the classifier still makes a mistake. Adversarial examples pose security concerns because they could be used to perform an attack on machine learning systems, even if the adversary has no access to the underlying model. Up to now, all previous work have assumed a threat model in which the adversary can feed data directly into the machine learning classifier. This is not always the case for systems operating in the physical world, for example those which are using signals from cameras and other sensors as an input. This paper shows that even in such physical world scenarios, machine learning systems are vulnerable to adversarial examples. 
We demonstrate this by feeding adversarial images obtained from a cell-phone camera to an ImageNet Inception classifier and measuring the classification accuracy of the system. We find that a large fraction of adversarial examples are classified incorrectly even when perceived through the camera.", "title": "" } ]
scidocsrr
1f5ff5962abe3b911cae545f7c2c5ef8
A Virtual Assembly Design Environment
[ { "docid": "4e0d896369c546d0284fad6c2fae7f23", "text": "Virtual reality is a technology which is often regarded as a natural extension to 3D computer graphics with advanced input and output devices. This technology has only recently matured enough to warrant serious engineering applications. The integration of this new technology with software systems for engineering, design, and manufacturing will provide a new boost to the field of computer-aided engineering. One aspect of design and manufacturing which may be significantly affected by virtual reality is design for assembly. This paper presents a research effort aimed at creating a virtual assembly design environment.", "title": "" } ]
[ { "docid": "3c0a088db2b845fd400367b62c800c6e", "text": "This paper considers the redeployment problem for a fleet of ambulances. This problem is encountered in the real-time management of emergency medical services. A dynamic model is proposed and a dynamic ambulance management system is described. This system includes a parallel tabu search heuristic to precompute redeployment scenarios. Simulations based on real-data confirm the efficiency of the proposed approach. Résumé On considère dans cet article le problème de redéploiement d'une flotte d'ambulances. Ce problème intervient lors de la gestion en temps réel d'un système de véhicules d'urgence. On propose pour ce problème un modèle dynamique et l'on décrit un système de gestion en temps réel de la flotte d'ambulance. Ce système comprend une méthode heuristique parallèle afin de déterminer à l'avance des scénarios de redéploiement. Des simulations produites à partir de données réelles confirment la pertinence de l'approche proposée. Mots-clefs : véhicules d'urgence, modèles de couverture, heuristique avec recherche tabou, temps réel.", "title": "" }, { "docid": "3d45b63a4643c34c56633afd7e270922", "text": "In this paper we perform a comparative analysis of three models for feature representation of text documents in the context of document classification. In particular, we consider the most often used family of models bag-of-words, recently proposed continuous space models word2vec and doc2vec, and the model based on the representation of text documents as language networks. While the bag-of-word models have been extensively used for the document classification task, the performance of the other two models for the same task have not been well understood. This is especially true for the network-based model that have been rarely considered for representation of text documents for classification. In this study, we measure the performance of the document classifiers trained using the method of random forests for features generated the three models and their variants. The results of the empirical comparison show that the commonly used bag-of-words model has performance comparable to the one obtained by the emerging continuous-space model of doc2vec. In particular, the low-dimensional variants of doc2vec generating up to 75 features are among the top-performing document representation models. The results finally point out that doc2vec shows a superior performance in the tasks of classifying large Corresponding Author: Department of Informatics, University of Rijeka, Radmile Matejčić 2, 51000 Rijeka, Croatia, +385 51 584 714 Email addresses: smarti@uniri.hr (Sanda Martinčić-Ipšić), tanja.milicic@student.uniri.hr (Tanja Miličić), Todorovski@fu.uni-lj.si (Ljupčo Todorovski) Preprint submitted to ?? July 6, 2017 documents.", "title": "" }, { "docid": "79f1473d4eb0c456660543fda3a648f1", "text": "Weexamine the problem of learning and planning on high-dimensional domains with long horizons and sparse rewards. Recent approaches have shown great successes in many Atari 2600 domains. However, domains with long horizons and sparse rewards, such as Montezuma’s Revenge and Venture, remain challenging for existing methods. Methods using abstraction [5, 13] have shown to be useful in tackling long-horizon problems. We combine recent techniques of deep reinforcement learning with existing model-based approaches using an expert-provided state abstraction. 
We construct toy domains that elucidate the problem of long horizons, sparse rewards and high-dimensional inputs, and show that our algorithm significantly outperforms previous methods on these domains. Our abstraction-based approach outperforms Deep QNetworks [11] on Montezuma’s Revenge and Venture, and exhibits backtracking behavior that is absent from previous methods.", "title": "" }, { "docid": "218c93b9e7be1ddbf86cd7dca9065fde", "text": "Automatic fake news detection is a challenging problem in deception detection, and it has tremendous real-world political and social impacts. However, statistical approaches to combating fake news has been dramatically limited by the lack of labeled benchmark datasets. In this paper, we present LIAR: a new, publicly available dataset for fake news detection. We collected a decade-long, 12.8K manually labeled short statements in various contexts from POLITIFACT.COM, which provides detailed analysis report and links to source documents for each case. This dataset can be used for fact-checking research as well. Notably, this new dataset is an order of magnitude larger than previously largest public fake news datasets of similar type. Empirically, we investigate automatic fake news detection based on surface-level linguistic patterns. We have designed a novel, hybrid convolutional neural network to integrate metadata with text. We show that this hybrid approach can improve a text-only deep learning model.", "title": "" }, { "docid": "0297b1f3565e4d1a3554137ac4719cfd", "text": "Systems to automatically provide a representative summary or `Key Phrase' of a piece of music are described. For a `rock' song with `verse' and `chorus' sections, we aim to return the chorus or in any case the most repeated and hence most memorable section. The techniques are less applicable to music with more complicated structure although possibly our general framework could still be used with di erent heuristics. Our process consists of three steps. First we parameterize the song into features. Next we use these features to discover the song structure, either by clustering xed-length segments or by training a hidden Markov model (HMM) for the song. Finally, given this structure, we use heuristics to choose the Key Phrase. Results for summaries of 18 Beatles songs evaluated by ten users show that the technique based on clustering is superior to the HMM approach and to choosing the Key Phrase at random.", "title": "" }, { "docid": "34bbc3054be98f2cc0edc25a00fe835d", "text": "The increasing prevalence of co-processors such as the Intel Xeon Phi, has been reshaping the high performance computing (HPC) landscape. The Xeon Phi comes with a large number of power efficient CPU cores, but at the same time, it's a highly memory constraint environment leaving the task of memory management entirely up to application developers. To reduce programming complexity, we are focusing on application transparent, operating system (OS) level hierarchical memory management.\n In particular, we first show that state of the art page replacement policies, such as approximations of the least recently used (LRU) policy, are not good candidates for massive many-cores due to their inherent cost of remote translation lookaside buffer (TLB) invalidations, which are inevitable for collecting page usage statistics. The price of concurrent remote TLB invalidations grows rapidly with the number of CPU cores in many-core systems and outpace the benefits of the page replacement algorithm itself. 
Building upon our previous proposal, per-core Partially Separated Page Tables (PSPT), in this paper we propose Core-Map Count based Priority (CMCP) page replacement policy, which exploits the auxiliary knowledge of the number of mapping CPU cores of each page and prioritizes them accordingly. In turn, it can avoid TLB invalidations for page usage statistic purposes altogether. Additionally, we describe and provide an implementation of the experimental 64kB page support of the Intel Xeon Phi and reveal some intriguing insights regarding its performance. We evaluate our proposal on various applications and find that CMCP can outperform state of the art page replacement policies by up to 38%. We also show that the choice of appropriate page size depends primarily on the degree of memory constraint in the system.", "title": "" }, { "docid": "86f77d7cbf5e43339c27ae8047fb9560", "text": "FotoFile is an experimental system for multimediaorganization and retrieval, based upon the design goal of makingmultimedia content accessible to non-expert users. Search andretrieval are done in terms that are natural to the task. Thesystem blends human and automatic annotation methods. It extendstextual search, browsing, and retrieval technologies to supportmultimedia data types.", "title": "" }, { "docid": "220e3ddeb515040b2028b012be3f3385", "text": "This paper presents first an overview of the well-known voltage and current dc-link converter topologies used to implement a three-phase PWM ac-ac converter system. Starting from the voltage source inverter and the current source rectifier, the basics of space vector modulation are summarized. Based on that, the topology of the indirect matrix converter (IMC) and its modulation are gradually developed from a voltage dc-link back-to-back converter by omitting the dc-link capacitor. In the next step, the topology of the conventional (direct) matrix converter (CMC) is introduced, and the relationship between the IMC and the CMCs is discussed in a figurative manner by investigating the switching states. Subsequently, three-phase ac-ac buck-type chopper circuits are considered as a special case of matrix converters (MCs), and a summary of extended MC topologies is provided, including three-level and hybrid MCs. Therewith, a common knowledge basis of the individual converter topologies is established.", "title": "" }, { "docid": "38036ea0a6f79ff62027e8475859acb9", "text": "The constantly increasing demand for nutraceuticals is paralleled by a more pronounced request for natural ingredients and health-promoting foods. The multiple functional properties of cactus pear fit well this trend. Recent data revealed the high content of some chemical constituents, which can give added value to this fruit on a nutritional and technological functionality basis. High levels of betalains, taurine, calcium, magnesium, and antioxidants are noteworthy.", "title": "" }, { "docid": "c9993b2d046bf0e796014f2a434dc1a0", "text": "Recently, diverse types of chaotic image encryption algorithms have been explored to meet the high demands in realizing secured real time image sharing applications. In this context, to achieve high sensitivity and superior key space, a multiple chaotic map based image encryption algorithm has been proposed. The proposed algorithm employs three-stage permutation and diffusion to withstand several attacks and the same is modelled in reconfigurable platform namely Field Programmable Gate Array (FPGA). 
The comprehensive analysis is done with various parameters to exhibit the robustness of the proposed algorithm and its ability to withstand brute-force, differential and statistical attacks. The synthesized result demonstrates that the reconfigurable hardware architecture takes approximately 0.098 ms for encrypting an image of size 256 × 256. Further the resource utilization and timing analyzer results are reported.", "title": "" }, { "docid": "4a4789547dcbe5b23190f2ab7cda01d7", "text": "Model predictive control (MPC) has been one of the most promising control strategies in industrial processes for decades. Due to its remarkable advantages, it has been extended to many areas of robotic research, especially motion control. Therefore, the goal of this paper is to review motion control of wheeled mobile robots (WMRs) using MPC. Principles as well as key issues in real-time implementations are first addressed. We then update the current literature of MPC for motion control. We also classify publications by using three criteria, i.e., MPC models, robot kinematic models, and basic motion tasks. MPC models categorized here include nonlinear MPC, linear MPC, neural network MPC, and generalized predictive control (GPC), while robot kinematic models we focus on consist of unicycle-type vehicles, car-like vehicles, and omnidirectional vehicles. Basic motion tasks, in general, are classified into three groups, i.e., trajectory tracking, path following, and point stabilization. To show that MPC strategies are capable of real-time implementations, some experimental scenarios from our previous work are given. We also conclude by identifying some future research directions.", "title": "" }, { "docid": "2b688f9ca05c2a79f896e3fee927cc0d", "text": "This paper presents a new synchronous-reference frame (SRF)-based control method to compensate power-quality (PQ) problems through a three-phase four-wire unified PQ conditioner (UPQC) under unbalanced and distorted load conditions. The proposed UPQC system can improve the power quality at the point of common coupling on power distribution systems under unbalanced and distorted load conditions. The simulation results based on Matlab/Simulink are discussed in detail to support the SRF-based control method presented in this paper. The proposed approach is also validated through experimental study with the UPQC hardware prototype.", "title": "" }, { "docid": "b1bb8eda4f7223a4c6dd8201ff5abfae", "text": "Recommender systems are constructed to search the content of interest from overloaded information by acquiring useful knowledge from massive and complex data. Since the amount of information and the complexity of the data structure grow, it has become a more interesting and challenging topic to find an efficient way to process, model, and analyze the information. Due to the Global Positioning System (GPS) data recording the taxi's driving time and location, the GPS-equipped taxi can be regarded as the detector of an urban transport system. This paper proposes a Taxi-hunting Recommendation System (Taxi-RS) processing the large-scale taxi trajectory data, in order to provide passengers with a waiting time to get a taxi ride in a particular location. We formulated the data offline processing system based on HotSpotScan and Preference Trajectory Scan algorithms. We also proposed a new data structure for frequent trajectory graph. Finally, we provided an optimized online querying subsystem to calculate the probability and the waiting time of getting a taxi. 
Taxi-RS is built based on the real-world trajectory data set generated by 12 000 taxis in one month. Under the condition of guaranteeing the accuracy, the experimental results show that our system can provide more accurate waiting time in a given location compared with a naïve algorithm.", "title": "" }, { "docid": "58e0e5c5a8fdbb14403173600f551a9b", "text": "Charisma, the ability to command authority on the basis of personal qualities, is more difficult to define than to identify. How do charismatic leaders such as Fidel Castro or Pope John Paul II attract and retain their followers? We present results of an analysis of subjective ratings of charisma from a corpus of American political speech. We identify the associations between charisma ratings and ratings of other personal attributes. We also examine acoustic/prosodic and lexical features of this speech and correlate these with charisma ratings.", "title": "" }, { "docid": "d8c5ff196db9acbea12e923b2dcef276", "text": "MoS<sub>2</sub>-graphene-based hybrid structures are biocompatible and useful in the field of biosensors. Herein, we propose a heterostructured MoS<sub>2</sub>/aluminum (Al) film/MoS<sub>2</sub>/graphene as a highly sensitive surface plasmon resonance (SPR) biosensor based on the Otto configuration. The sensitivity of the proposed biosensor is enhanced by using three methods. First, prisms of different refractive index have been discussed and it is found that sensitivity can be enhanced by using a low refractive index prism. Second, the influence of the thickness of the air layer on the sensitivity is analyzed and the optimal thickness of air is obtained. Finally, the sensitivity improvement and mechanism by using molybdenum disulfide (MoS<sub>2</sub>)–graphene hybrid structure is revealed. The maximum sensitivity ∼ 190.83°/RIU is obtained with six layers of MoS<sub>2</sub> coating on both surfaces of Al thin film.", "title": "" }, { "docid": "6a8ac89da0b4a9f78cbcb141fc8239a5", "text": "In this paper, we investigate a very challenging task of automatically generating presentation slides for academic papers. The generated presentation slides can be used as drafts to help the presenters prepare their formal slides in a quicker way. A novel system called PPSGen is proposed to address this task. It first employs the regression method to learn the importance scores of the sentences in an academic paper, and then exploits the integer linear programming (ILP) method to generate well-structured slides by selecting and aligning key phrases and sentences. Evaluation results on a test set of 200 pairs of papers and slides collected on the web demonstrate that our proposed PPSGen system can generate slides with better quality. A user study is also illustrated to show that PPSGen has a few evident advantages over baseline methods.", "title": "" }, { "docid": "631f90dd7545286a527acaf6059eebc4", "text": "Arabic is usually written without short vowels and additional diacritics, which are nevertheless important for several applications. We present a novel algorithm for restoring these symbols, using a cascade of probabilistic finitestate transducers trained on the Arabic treebank, integrating a word-based language model, a letter-based language model, and an extremely simple morphological model. 
This combination of probabilistic methods and simple linguistic information yields high levels of accuracy.", "title": "" }, { "docid": "ec2a377d643326c5e7f64f6f01f80a04", "text": "Cultural competency has become a fashionable term for clinicians and researchers. Yet no one can define this term precisely enough to operationalize it in clinical training and best practices. It is clear that culture does matter in the clinic. Cultural factors are crucial to diagnosis, treatment, and care. They shape health-related beliefs, behaviors, and values [1,2]. But the large claims about the value of cultural competence for the art of professional care-giving around the world are simply not supported by robust evaluation research showing that systematic attention to culture really improves clinical services. This lack of evidence is a failure of outcome research to take culture seriously enough to routinely assess the cost-effectiveness of culturally informed therapeutic practices, not a lack of effort to introduce culturally informed strategies into clinical settings [3].", "title": "" } ]
scidocsrr
2ea7a77f8eb02ce84c9672ad99939c50
A Comparison Framework and Review of Service Brokerage Solutions for Cloud Architectures
[ { "docid": "0cae4ea322daaaf33a42427b69e8ba9f", "text": "Background--By leveraging cloud services, organizations can deploy their software systems over a pool of resources. However, organizations heavily depend on their business-critical systems, which have been developed over long periods. These legacy applications are usually deployed on-premise. In recent years, research in cloud migration has been carried out. However, there is no secondary study to consolidate this research. Objective--This paper aims to identify, taxonomically classify, and systematically compare existing research on cloud migration. Method--We conducted a systematic literature review (SLR) of 23 selected studies, published from 2010 to 2013. We classified and compared the selected studies based on a characterization framework that we also introduce in this paper. Results--The research synthesis results in a knowledge base of current solutions for legacy-to-cloud migration. This review also identifies research gaps and directions for future research. Conclusion--This review reveals that cloud migration research is still in early stages of maturity, but is advancing. It identifies the needs for a migration framework to help improving the maturity level and consequently trust into cloud migration. This review shows a lack of tool support to automate migration tasks. This study also identifies needs for architectural adaptation and self-adaptive cloud-enabled systems.", "title": "" }, { "docid": "539d6afe431018b0ac62858ff59caa09", "text": "Cloud computing is a highly discussed topic in the technical and economic world, and many of the big players of the software industry have entered the development of cloud services. Several companies what to explore the possibilities and benefits of incorporating such cloud computing services in their business, as well as the possibilities to offer own cloud services. However, with the amount of cloud computing services increasing quickly, the need for a taxonomy framework rises. This paper examines the available cloud computing services and identifies and explains their main characteristics. Next, this paper organizes these characteristics and proposes a tree-structured taxonomy. This taxonomy allows quick classifications of the different cloud computing services and makes it easier to compare them. Based on existing taxonomies, this taxonomy provides more detailed characteristics and hierarchies. Additionally, the taxonomy offers a common terminology and baseline information for easy communication. Finally, the taxonomy is explained and verified using existing cloud services as examples.", "title": "" } ]
[ { "docid": "66b154f935e66a78895e17318921f36a", "text": "Metaheuristic algorithms have been a very important topic in computer science since the start of evolutionary computing the Genetic Algorithms 1950s. By now these metaheuristic algorithms have become a very large family with successful applications in industry. A challenge which is always pondered on, is finding the suitable metaheuristic algorithm for a certain problem. The choice sometimes may have to be made after trying through many experiments or by the experiences of human experts. As each of the algorithms have their own strengths in solving different kinds of problems, in this paper we propose a framework of metaheuristic brick-up system. The flexibility of brick-up (like Lego) offers users to pick a collection of fundamental functions of metaheuristic algorithms that were known to perform well in the past. In order to verify this brickup concept, in this paper we propose to use the Monte Carlo method with upper confidence bounds applied to a decision tree in selecting appropriate functional pieces. This paper validates the basic concept and discusses the further works.", "title": "" }, { "docid": "5b07f0ec2af3bec3f53f3cff17177490", "text": "In multi-database mining, there can be many local patterns (frequent itemsets or association rules) in each database. At the end of multi-database mining, it is necessary to analyze these local patterns to gain global patterns, when putting all the data from the databases into a single dataset can destroy important information that reflect the distribution of global patterns. This paper develops an algorithm for synthesizing local patterns in multi-database is proposed. This approach is particularly fit to find potentially useful exceptions. The proposed method has been evaluated experimentally. The experimental results have shown that this method is efficient and appropriate to identifying exceptional patterns.", "title": "" }, { "docid": "0af4eddf70691a7bff675d42a39f96ae", "text": "How do we know which grammatical error correction (GEC) system is best? A number of metrics have been proposed over the years, each motivated by weaknesses of previous metrics; however, the metrics themselves have not been compared to an empirical gold standard grounded in human judgments. We conducted the first human evaluation of GEC system outputs, and show that the rankings produced by metrics such as MaxMatch and I-measure do not correlate well with this ground truth. As a step towards better metrics, we also propose GLEU, a simple variant of BLEU, modified to account for both the source and the reference, and show that it hews much more closely to human judgments.", "title": "" }, { "docid": "eea39002b723aaa9617c63c1249ef9a6", "text": "Generative Adversarial Networks (GAN) [1] are an effective method for training generative models of complex data such as natural images. However, they are notoriously hard to train and can suffer from the problem of missing modes where the model is not able to produce examples in certain regions of the space. We propose an iterative procedure, called AdaGAN, where at every step we add a new component into a mixture model by running a GAN algorithm on a reweighted sample. This is inspired by boosting algorithms, where many potentially weak individual predictors are greedily aggregated to form a strong composite predictor. 
We prove that such an incremental procedure leads to convergence to the true distribution in a finite number of steps if each step is optimal, and convergence at an exponential rate otherwise. We also illustrate experimentally that this procedure addresses the problem of missing modes.", "title": "" }, { "docid": "d2a205f2a6c6deff5d9560af8cf8ff7f", "text": "MIDI files, when paired with corresponding audio recordings, can be used as ground truth for many music information retrieval tasks. We present a system which can efficiently match and align MIDI files to entries in a large corpus of audio content based solely on content, i.e., without using any metadata. The core of our approach is a convolutional network-based cross-modality hashing scheme which transforms feature matrices into sequences of vectors in a common Hamming space. Once represented in this way, we can efficiently perform large-scale dynamic time warping searches to match MIDI data to audio recordings. We evaluate our approach on the task of matching a huge corpus of MIDI files to the Million Song Dataset. 1. TRAINING DATA FOR MIR Central to the task of content-based Music Information Retrieval (MIR) is the curation of ground-truth data for tasks of interest (e.g. timestamped chord labels for automatic chord estimation, beat positions for beat tracking, prominent melody time series for melody extraction, etc.). The quantity and quality of this ground-truth is often instrumental in the success of MIR systems which utilize it as training data. Creating appropriate labels for a recording of a given song by hand typically requires person-hours on the order of the duration of the data, and so training data availability is a frequent bottleneck in content-based MIR tasks. MIDI files that are time-aligned to matching audio can provide ground-truth information [8,25] and can be utilized in score-informed source separation systems [9, 10]. A MIDI file can serve as a timed sequence of note annotations (a “piano roll”). It is much easier to estimate information such as beat locations, chord labels, or predominant melody from these representations than from an audio signal. A number of tools have been developed for inferring this kind of information from MIDI files [6, 7, 17, 19]. Halevy et al. [11] argue that some of the biggest successes in machine learning came about because “...a large training set of the input-output behavior that we seek to automate is available to us in the wild.” The motivation behind c Colin Raffel, Daniel P. W. Ellis. Licensed under a Creative Commons Attribution 4.0 International License (CC BY 4.0). Attribution: Colin Raffel, Daniel P. W. Ellis. “LargeScale Content-Based Matching of MIDI and Audio Files”, 16th International Society for Music Information Retrieval Conference, 2015. J/Jerseygi.mid", "title": "" }, { "docid": "1fcdfd02a6ecb12dec5799d6580c67d4", "text": "One of the major problems in developing countries is maintenance of roads. Well maintained roads contribute a major portion to the country's economy. Identification of pavement distress such as potholes and humps not only helps drivers to avoid accidents or vehicle damages, but also helps authorities to maintain roads. This paper discusses previous pothole detection methods that have been developed and proposes a cost-effective solution to identify the potholes and humps on roads and provide timely alerts to drivers to avoid accidents or vehicle damages. 
Ultrasonic sensors are used to identify the potholes and humps and also to measure their depth and height, respectively. The proposed system captures the geographical location coordinates of the potholes and humps using a global positioning system receiver. The sensed-data includes pothole depth, height of hump, and geographic location, which is stored in the database (cloud). This serves as a valuable source of information to the government authorities and vehicle drivers. An android application is used to alert drivers so that precautionary measures can be taken to evade accidents. Alerts are given in the form of a flash messages with an audio beep.", "title": "" }, { "docid": "f562bd72463945bd35d42894e4815543", "text": "Sound levels in animal shelters regularly exceed 100 dB. Noise is a physical stressor on animals that can lead to behavioral, physiological, and anatomical responses. There are currently no policies regulating noise levels in dog kennels. The objective of this study was to evaluate the noise levels dogs are exposed to in an animal shelter on a continuous basis and to determine the need, if any, for noise regulations. Noise levels at a newly constructed animal shelter were measured using a noise dosimeter in all indoor dog-holding areas. These holding areas included large dog adoptable, large dog stray, small dog adoptable, small dog stray, and front intake. The noise level was highest in the large adoptable area. Sound from the large adoptable area affected some of the noise measurements for the other rooms. Peak noise levels regularly exceeded the measuring capability of the dosimeter (118.9 dBA). Often, in new facility design, there is little attention paid to noise abatement, despite the evidence that noise causes physical and psychological stress on dogs. To meet their behavioral and physical needs, kennel design should also address optimal sound range.", "title": "" }, { "docid": "758692d2c0f1c2232a4c705b0a14c19f", "text": "Process-driven spreadsheet queuing simulation is a better vehicle for understanding queue behavior than queuing theory or dedicated simulation software. Spreadsheet queuing simulation has many pedagogical benefits in a business school end-user modeling course, including developing students' intuition , giving them experience with active modeling skills, and providing access to tools. Spreadsheet queuing simulations are surprisingly easy to program, even for queues with balking and reneging. The ease of prototyping in spreadsheets invites thoughtless design, so careful spreadsheet programming practice is important. Spreadsheet queuing simulation is inferior to dedicated simulation software for analyzing queues but is more likely to be available to managers and students. Q ueuing theory has always been a staple in survey courses on management science. Although it is a powerful tool for computing certain steady-state performance measures, queuing theory is a poor vehicle for teaching students about what transpires in queues. Process-driven spreadsheet queuing simulation is a much better vehicle. Although Evans and Olson [1998, p. 170] state that \" a serious limitation of spreadsheets for waiting-line models is that it is not possible to include behavior such as balking \" and Liberatore and Ny-dick [forthcoming] indicate that a limitation of spreadsheet simulation is the in", "title": "" }, { "docid": "fcd349147673758eedb6dba0cd7af850", "text": "We present VideoLSTM for end-to-end sequence learning of actions in video. 
Rather than adapting the video to the peculiarities of established recurrent or convolutional architectures, we adapt the architecture to fit the requirements of the video medium. Starting from the soft-Attention LSTM, VideoLSTM makes three novel contributions. First, video has a spatial layout. To exploit the spatial correlation we hardwire convolutions in the soft-Attention LSTM architecture. Second, motion not only informs us about the action content, but also guides better the attention towards the relevant spatio-temporal locations. We introduce motion-based attention. And finally, we demonstrate how the attention from VideoLSTM can be exploited for action localization by relying on the action class label and temporal attention smoothing. Experiments on UCF101, HMDB51 and THUMOS13 reveal the benefit of the video-specific adaptations of VideoLSTM in isolation as well as when integrated in a combined architecture. It compares favorably against other LSTM architectures for action classification and especially action localization.", "title": "" }, { "docid": "f76df1d15ac1171567dc3c107c9fc258", "text": "In this paper, we study user modeling on Twitter. We investigate different strategies for mining user interest profiles from microblogging activities ranging from strategies that analyze the semantic meaning of Twitter messages to strategies that adapt to temporal patterns that can be observed in the microblogging behavior. We evaluate the quality of the user modeling methods in the context of a personalized news recommendation system. Our results reveals that an understanding of the semantic meaning of microposts is key for generating high-quality user profiles.", "title": "" }, { "docid": "fed53c1ca3045afaee7471db301ad7d2", "text": "A fully integrated 0.18 mum DC-DC buck converter using a low-swing ldquostacked driverrdquo configuration is reported in this paper. A high switching frequency of 660 MHz reduces filter components to fit on chip, but this suffers from high switching losses. These losses are reduced using: 1) low-swing drivers; 2) supply stacking; and 3) introducing a charge transfer path to deliver excess charge from the positive metal-oxide semiconductor drive chain to the load, thereby recycling the charge. The working prototype circuit converts 2.2 to 0.75-1.0 V at 40-55 mA. Design and simulation of an improved circuit is also included that further improves the efficiency by enhancing the charge recycling path, providing automated zero voltage switching (ZVS) operation, and synchronizing the half-swing gating signals.", "title": "" }, { "docid": "511486e1b6e87efc1aeec646bb5af52b", "text": "The present study examined the associations between pathological forms of narcissism and responses to scenarios describing private or public negative events. This was accomplished using a randomized twowave experimental design with 600 community participants. The grandiose form of pathological narcissism was associated with increased negative affect and less forgiveness for public offenses, whereas the vulnerable form of pathological narcissism was associated with increased negative affect following private negative events. Concerns about humiliation mediated the association of pathological narcissism with increased negative affect but not the association between grandiose narcissism and lack of forgiveness for public offenses. 
These findings suggest that pathological narcissism may promote maladaptive responses to negative events that occur in private (vulnerable narcissism) or public (gran-", "title": "" }, { "docid": "9beeee852ce0d077720c212cf17be036", "text": "Spoofing speech detection aims to differentiate spoofing speech from natural speech. Frame-based features are usually used in most of previous works. Although multiple frames or dynamic features are used to form a super-vector to represent the temporal information, the time span covered by these features are not sufficient. Most of the systems failed to detect the non-vocoder or unit selection based spoofing attacks. In this work, we propose to use a temporal convolutional neural network (CNN) based classifier for spoofing speech detection. The temporal CNN first convolves the feature trajectories with a set of filters, then extract the maximum responses of these filters within a time window using a max-pooling layer. Due to the use of max-pooling, we can extract useful information from a long temporal span without concatenating a large number of neighbouring frames, as in feedforward deep neural network (DNN). Five types of feature are employed to access the performance of proposed classifier. Experimental results on ASVspoof 2015 corpus show that the temporal CNN based classifier is effective for synthetic speech detection. Specifically, the proposed method brings a significant performance boost for the unit selection based spoofing speech detection.", "title": "" }, { "docid": "00d14c0c07d04c9bd6995ff0ee065ab9", "text": "The pathways for olfactory learning in the fruitfly Drosophila have been extensively investigated, with mounting evidence that that the mushroom body is the site of the olfactory associative memory trace (Heisenberg, Nature 4:266–275, 2003; Gerber et al., Curr Opin Neurobiol 14:737–744, 2004). Heisenberg’s description of the mushroom body as an associative learning device is a testable hypothesis that relates the mushroom body’s function to its neural structure and input and output pathways. Here, we formalise a relatively complete computational model of the network interactions in the neural circuitry of the insect antennal lobe and mushroom body, to investigate their role in olfactory learning, and specifically, how this might support learning of complex (non-elemental; Giurfa, Curr Opin Neuroethol 13:726–735, 2003) discriminations involving compound stimuli. We find that the circuit is able to learn all tested non-elemental paradigms. This does not crucially depend on the number of Kenyon cells but rather on the connection strength of projection neurons to Kenyon cells, such that the Kenyon cells require a certain number of coincident inputs to fire. As a consequence, the encoding in the mushroom body resembles a unique cue or configural representation of compound stimuli (Pearce, Psychol Rev 101:587–607, 1994). Learning of some conditions, particularly negative patterning, is strongly affected by the assumption of normalisation effects occurring at the level of the antennal lobe. Surprisingly, the learning capacity of this circuit, which is a simplification of the actual circuitry in the fly, seems to be greater than the capacity expressed by the fly in shock-odour association experiments (Young et al. 2010).", "title": "" }, { "docid": "efba71635ca38b4588d3e4200d655fee", "text": "BACKGROUND\nCircumcisions and cesarian sections are common procedures. 
Although complications to the newborn child fortunately are rare, it is important to emphasize the potential significance of this problem and its frequent iatrogenic etiology. The authors present 7 cases of genitourinary trauma in newborns, including surgical management and follow-up.\n\n\nMETHODS\nThe authors relate 7 recent cases of genitourinary trauma in newborns from a children's hospital in a major metropolitan area.\n\n\nRESULTS\nCase 1 and 2: Two infants suffered degloving injuries to both the prepuce and penile shaft from a Gomco clamp. Successful full-thickness skin grafting using the previously excised foreskin was used in 1 child. Case 3, 4, and 5: A Mogen clamp caused glans injuries in 3 infants. In 2, hemorrhage from the severed glans was controlled with topical epinephrine; the glans healed with a flattened appearance. Another infant sustained a laceration ventrally, requiring a delayed modified meatal advancement glanoplasty to correct the injury. Case 6: A male infant suffered a ventral slit and division of the ventral urethra before placement of a Gomco clamp. Formal hypospadias repair was required. Case 7: An emergent cesarean section resulted in a grade 4-perineal laceration in a female infant. The vaginal tear caused by the surgeon's finger, extended up to the posterior insertion of the cervix and into the rectum. The infant successfully underwent an emergent multilayered repair.\n\n\nCONCLUSIONS\nGenitourinary trauma in the newborn is rare but often necessitates significant surgical intervention. Circumcision often is the causative event. There has been only 1 prior report of a perineal injury similar to case 7, with a fatal outcome.", "title": "" }, { "docid": "2231a663a7985c46a88e65903d7b3fe6", "text": "A novel segmentation algorithm for MRI Brain tumor images is proposed. The proposed algorithm is compared with Thresholding and Region Grow methods. Testing was performed by generating two datasets of real MRI images of brain tumors. Criteria for assessment of the quality of the segmentation results were: the Dice score, sensitivity, specificity and accuracy. Analysis of results obtained using this algorithm to solve the brain tumor MRI image segmentation task showed levels of sensitivity and specificity of 91% to 99%, which is evidence that assessment of the position and boundaries of brain pathology is highly effective.", "title": "" }, { "docid": "03bd81d3c50b81c6cfbae847aa5611f6", "text": "We present a fast, automatic method for accurately capturing full-body motion data using a single depth camera. At the core of our system lies a realtime registration process that accurately reconstructs 3D human poses from single monocular depth images, even in the case of significant occlusions. The idea is to formulate the registration problem in a Maximum A Posteriori (MAP) framework and iteratively register a 3D articulated human body model with monocular depth cues via linear system solvers. We integrate depth data, silhouette information, full-body geometry, temporal pose priors, and occlusion reasoning into a unified MAP estimation framework. Our 3D tracking process, however, requires manual initialization and recovery from failures. We address this challenge by combining 3D tracking with 3D pose detection. This combination not only automates the whole process but also significantly improves the robustness and accuracy of the system. Our whole algorithm is highly parallel and is therefore easily implemented on a GPU. 
We demonstrate the power of our approach by capturing a wide range of human movements in real time and achieve state-of-the-art accuracy in our comparison against alternative systems such as Kinect [2012].", "title": "" }, { "docid": "e4f31c3e7da3ad547db5fed522774f0e", "text": "Surface reconstruction from oriented points can be cast as a spatial Poisson problem. This Poisson formulation considers all the points at once, without resorting to heuristic spatial partitioning or blending, and is therefore highly resilient to data noise. Unlike radial basis function schemes, the Poisson approach allows a hierarchy of locally supported basis functions, and therefore the solution reduces to a well conditioned sparse linear system. To reconstruct detailed models in limited memory, we solve this Poisson formulation efficiently using a streaming framework. Specifically, we introduce a multilevel streaming representation, which enables efficient traversal of a sparse octree by concurrently advancing through multiple streams, one per octree level. Remarkably, for our reconstruction application, a sufficiently accurate solution to the global linear system is obtained using a single iteration of cascadic multigrid, which can be evaluated within a single multi-stream pass. Finally, we explore the application of Poisson reconstruction to the setting of multi-view stereo, to reconstruct detailed 3D models of outdoor scenes from collections of Internet images.\n This is joint work with Michael Kazhdan, Matthew Bolitho, and Randal Burns (Johns Hopkins University), and Michael Goesele, Noah Snavely, Brian Curless, and Steve Seitz (University of Washington).", "title": "" }, { "docid": "bf784b515ffbf7a9df1217236efe3228", "text": "This paper focuses on the open-center multi-way valve used in loader buckets. To solve the problem of excessive flow force that leads to spool clamping in the reset process, joint simulations adopting MATLAB, AMESim, and FLUENT were carried out. Boundary conditions play a decisive role in the results of computational fluid dynamics (CFD) simulation. However, the boundary conditions of valve ports depend on the hydraulic system’s working condition and are significantly impacted by the port area, which has always been neglected. This paper starts with the port area calculation method, then the port area curves are input into the simulation hydraulic system, obtaining the flow curves of valve port as output, which are then applied as the boundary conditions of the spool valve CFD simulation. Therefore, the steady-state flow force of the spool valve is accurately calculated, and the result verifies the hypothesis that excess flow force causes spool clamping. Based on this, four kinds of structures were introduced in an attempt to improve the situation, and simulating calculation and theoretical analysis were adopted to verify the effects of improvement. Results show that the four structures could reduce the peak value of flow force by 17.8%, 60.6%, 61.6%, and 55.7%, respectively. Of the four, structures II, III, and IV can reduce the peak value of flow force to below reset spring force value, thus successfully avoiding the spool clamping caused by flow force.", "title": "" }, { "docid": "882f2fa1782d530bbc2cbccdd5a194bd", "text": "Shape prior plays an important role in accurate and robust liver segmentation. However, liver shapes have complex variations and accurate modeling of liver shapes is challenging. 
Using large-scale training data can improve the accuracy but it limits the computational efficiency. In order to obtain accurate liver shape priors without sacrificing the efficiency when dealing with large-scale training data, we investigate effective and scalable shape prior modeling method that is more applicable in clinical liver surgical planning system. We employed the Sparse Shape Composition (SSC) to represent liver shapes by an optimized sparse combination of shapes in the repository, without any assumptions on parametric distributions of liver shapes. To leverage large-scale training data and improve the computational efficiency of SSC, we also introduced a homotopy-based method to quickly solve the L1-norm optimization problem in SSC. This method takes advantage of the sparsity of shape modeling, and solves the original optimization problem in SSC by continuously transforming it into a series of simplified problems whose solution is fast to compute. When new training shapes arrive gradually, the homotopy strategy updates the optimal solution on the fly and avoids re-computing it from scratch. Experiments showed that SSC had a high accuracy and efficiency in dealing with complex liver shape variations, excluding gross errors and preserving local details on the input liver shape. The homotopy-based SSC had a high computational efficiency, and its runtime increased very slowly when repository's capacity and vertex number rose to a large degree. When repository's capacity was 10,000, with 2000 vertices on each shape, homotopy method cost merely about 11.29 s to solve the optimization problem in SSC, nearly 2000 times faster than interior point method. The dice similarity coefficient (DSC), average symmetric surface distance (ASD), and maximum symmetric surface distance measurement was 94.31 ± 3.04%, 1.12 ± 0.69 mm and 3.65 ± 1.40 mm respectively.", "title": "" } ]
scidocsrr
55094537ef66fe4d24cf64585fe9e854
Examining the Role of Social Media in Disaster Management from an Attribution Theory Perspective
[ { "docid": "7641f8f3ed2afd0c16665b44c1216e79", "text": "In this article we explore the behavior of Twitter users under an emergency situation. In particular, we analyze the activity related to the 2010 earthquake in Chile and characterize Twitter in the hours and days following this disaster. Furthermore, we perform a preliminary study of certain social phenomena, such as the dissemination of false rumors and confirmed news. We analyze how this information propagated through the Twitter network, with the purpose of assessing the reliability of Twitter as an information source under extreme circumstances. Our analysis shows that the propagation of tweets that correspond to rumors differs from tweets that spread news because rumors tend to be questioned more than news by the Twitter community. This result shows that it is possible to detect rumors by using aggregate analysis on tweets.", "title": "" } ]
[ { "docid": "ec5dc7aaa399af3a3db080588df1376f", "text": "Dimensionality reduction plays an important role in many data mining applications involving high-dimensional data. Many existing dimensionality reduction techniques can be formulated as a generalized eigenvalue problem, which does not scale to large-size problems. Prior work transforms the generalized eigenvalue problem into an equivalent least squares formulation, which can then be solved efficiently. However, the equivalence relationship only holds under certain assumptions without regularization, which severely limits their applicability in practice. In this paper, an efficient two-stage approach is proposed to solve a class of dimensionality reduction techniques, including Canonical Correlation Analysis, Orthonormal Partial Least Squares, linear Discriminant Analysis, and Hypergraph Spectral Learning. The proposed two-stage approach scales linearly in terms of both the sample size and data dimensionality. The main contributions of this paper include (1) we rigorously establish the equivalence relationship between the proposed two-stage approach and the original formulation without any assumption; and (2) we show that the equivalence relationship still holds in the regularization setting. We have conducted extensive experiments using both synthetic and real-world data sets. Our experimental results confirm the equivalence relationship established in this paper. Results also demonstrate the scalability of the proposed two-stage approach.", "title": "" }, { "docid": "00b2befc6cfa60d0d7799673de232461", "text": "During the last decade, various machine learning and data mining techniques have been applied to Intrusion Detection Systems (IDSs) which have played an important role in defending critical computer systems and networks from cyber attacks. Unsupervised anomaly detection techniques have received a particularly great amount of attention because they enable construction of intrusion detection models without using labeled training data (i.e., with instances preclassified as being or not being an attack) in an automated manner and offer intrinsic ability to detect unknown attacks; i.e., 0-day attacks. Despite the advantages, it is still not easy to deploy them into a real network environment because they require several parameters during their building process, and thus IDS operators and managers suffer from tuning and optimizing the required parameters based on changes of their network characteristics. In this paper, we propose a new anomaly detection method by which we can automatically tune and optimize the values of parameters without predefining them. We evaluated the proposed method over real traffic data obtained from Kyoto University honeypots. The experimental results show that the performance of the proposed method is superior to that of the previous one. 2011 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "2c2942905010e71cda5f8b0f41cf2dd0", "text": "1 Focus and anaphoric destressing Consider a pronunciation of (1) with prominence on the capitalized noun phrases. In terms of a relational notion of prominence, the subject NP she] is prominent within the clause S she beats me], and NP Sue] is prominent within the clause S Sue beats me]. This prosody seems to have the pragmatic function of putting the two clauses into opposition, with prominences indicating where they diier, and prosodic reduction of the remaining parts indicating where the clauses are invariant. 
(1) She beats me more often than Sue beats me. [Car84], [Roc86] and [Roo92] propose theories of focus interpretation which formalize the idea just outlined. Under my assumptions, the prominences are the correlates of syntactic focus features on the two prominent NPs, written as F subscripts. Further, the grammatical representation of (1) includes operators which interpret the focus features at the level of the minimal dominating S nodes. In the logical form below, each focus feature is interpreted by an operator written .", "title": "" }, { "docid": "9d45c1deaf429be2a5c33cd44b04290e", "text": "In this paper, a new omni-directional driving system with one spherical wheel is proposed. This system is able to overcome the existing driving systems with structural limitations in vertical, horizontal and diagonal movement. This driving system was composed of two stepping motors, a spherical wheel covered by a ball bearing, a weight balancer for the elimination of eccentricity, and ball plungers for balance. All parts of this structure are located at the same distance from the center because the center of gravity of this system must be placed at the center of the system. An own ball bearing was designed for settled rotation and smooth direction change of a spherical wheel. The principle of an own ball bearing is the reversal of the ball mouse. Steel as the material of the ball in the own ball bearing was used for the prevention of slip with the ground. One of the stepping motors is used for driving the spherical wheel. This spherical wheel is stable because of the support of the ball bearing. The other enables the system to move in a wanted direction while it rotates based on the central axis. The ATmega128 chip is used for the control of the two stepping motors. To verify the proposed system, driving experiments were executed in a variety of environments. Finally, the performance and the validity of the omni-directional driving system were confirmed.", "title": "" }, { "docid": "8df4f7122fbfda9a73e0d124b976b52b", "text": "Reputation systems have been popular in estimating the trustworthiness and predicting the future behavior of nodes in a large-scale distributed system where nodes may transact with one another without prior knowledge or experience. One of the fundamental challenges in distributed reputation management is to understand vulnerabilities and develop mechanisms that can minimize the potential damages to a system by malicious nodes. In this paper, we identify three vulnerabilities that are detrimental to decentralized reputation management and propose TrustGuard - a safeguard framework for providing a highly dependable and yet efficient reputation system. First, we provide a dependable trust model and a set of formal methods to handle strategic malicious nodes that continuously change their behavior to gain unfair advantages in the system. Second, a transaction based reputation system must cope with the vulnerability that malicious nodes may misuse the system by flooding feedbacks with fake transactions. Third, but not least, we identify the importance of filtering out dishonest feedbacks when computing reputation-based trust of a node, including the feedbacks filed by malicious nodes through collusion. 
Our experiments show that, comparing with existing reputation systems, our framework is highly dependable and effective in countering malicious nodes regarding strategic oscillating behavior, flooding malevolent feedbacks with fake transactions, and dishonest feedbacks.", "title": "" }, { "docid": "7b6d2d261675aa83f53c4e3c5523a81b", "text": "(IV) Intravenous therapy is one of the most commonly performed procedures in hospitalized patients yet phlebitis affects 27% to 70% of all patients receiving IV therapy. The incidence of phlebitis has proved to be a menace in effective care of surgical patients, delaying their recovery and increasing duration of hospital stay and cost. The recommendations for reducing its incidence and severity have been varied and of questionable efficacy. The current study was undertaken to evaluate whether elective change of IV cannula at fixed intervals can have any impact on incidence or severity of phlebitis in surgical patients. All patients admitted to the Department of Surgery, SMIMS undergoing IV cannula insertion, fulfilling the selection criteria and willing to participate in the study, were segregated into two random groups prospectively: Group A wherein cannula was changed electively after 24 hours into a fresh vein preferably on the other upper limb and Group B wherein IV cannula was changed only on development of phlebitis or leak i.e. need-based change. The material/brand and protocol for insertion of IV cannula were standardised for all patients, including skin preparation, insertion, fixation and removal. After cannulation, assessment was made after 6 hours, 12 hours and every 24 hours thereafter at all venepuncture sites. VIP and VAS scales were used to record phlebitis and pain respectively. Upon analysis, though there was a lower VIP score in group A compared to group B (0.89 vs. 1.32), this difference was not statistically significant (p-value = 0.277). Furthermore, the differences in pain, as assessed by VAS, at the site of puncture and along the vein were statistically insignificant (p-value > 0.05). Our results are in contradiction to few other studies which recommend a policy of routine change of cannula. Further we advocate a close and thorough monitoring of the venepuncture site and the length of vein immediately distal to the puncture site, as well as a meticulous standardized protocol for IV access.", "title": "" }, { "docid": "74206eb5f85fd6ab0891c2a7fe9ffef8", "text": "This paper introduces the ArguMed-system. It is an example of a system for computer-mediated defeasible argumentation, a new trend in the field of defeasible argumentation. In this research, computer systems are developed that mediate the process of argumentation by one or more users. Argument-mediation systems should be contrasted with systems for automated reasoning: the latter perform reasoning tasks for users, while the former play the more passive role of a mediator. E.g., mediation systems keep track of the arguments raised and of the justification status of statements. The argumentation theory of the ArguMed-system is an adaptation of Verheij's CumulA-model, a procedural model of argumentation with arguments and counterarguments. In the CumulA-model, the defeat of arguments is determined by the structure of arguments and the attack relation between arguments. It is completely independent of the underlying language. 
The process-model is free, in the sense that it allows not only inference (i.e., 'forward' argumentation, drawing conclusions form premises), but also justification (i.e., 'backward' argumentation, adducing reasons for issues). The ArguMed-system has been designed in an attempt to enhance the familiarity of the interface and the transparency of the underlying argumentation theory of its precursor, the Argue!-system. The ArguMed-system's user interface is template-based, as is currently common in window-style user interfaces. The user gradually constructs arguments, by fill ing in templates that correspond to common argument patterns. An innovation of the ArguMed-system is that it uses dedicated templates for different types of argument moves. Whereas existing mediation systems are issue-based (in the style of Rittel's well-known Issue-Based Information System), the ArguMed-system allows free argumentation, as in the CumulA-model. In contrast with the CumulA-model, which has a very general notion of defeat, defeat in the ArguMed-system is only of Pollock's undercutter-type. The system allows three types of argument moves, viz. making a statement, adding a reason and its conclusion, and providing an (undercutter-type) exception blocking the connection between a reason and a conclusion. To put the ArguMed-system in context, it is compared with selected existing systems for argument mediation. The differences between the underlying argumentation theories and user interfaces are striking, which is suggested to be a symptom of the early stages of development of argument mediation systems. Given the lack of system evaluation by users in the field, the paper concludes with a discussion of the relevance of current research on computer-mediated defeasible argumentation. It is claimed that the shift of argument mediation systems from theoretical to practical tools is feasible, but can as yet not be made by system developers alone: a strong input from the research community is required.", "title": "" }, { "docid": "c8ec9829957991bfacc4f9faaf0566b9", "text": "Cross lingual projection of linguistic annotation suffers from many sources of bias and noise, leading to unreliable annotations that cannot be used directly. In this paper, we introduce a novel approach to sequence tagging that learns to correct the errors from cross-lingual projection using an explicit debiasing layer. This is framed as joint learning over two corpora, one tagged with gold standard and the other with projected tags. We evaluated with only 1,000 tokens tagged with gold standard tags, along with more plentiful parallel data. Our system equals or exceeds the state-of-the-art on eight simulated lowresource settings, as well as two real lowresource languages, Malagasy and Kinyarwanda.", "title": "" }, { "docid": "f5002cfd1b7b7b547e210d62b8655074", "text": "In this work, various layout options for ESD diodes' PN junction geometry and metal routing are investigated. The current compression point (ICP) is introduced to define the maximum current handling capability of ESD protection devices. The figures-of-merit ICP/C and RON*C are used to compare the performance of the structures investigated herein.", "title": "" }, { "docid": "e43cb8fefc7735aeab0fa40ad44a2e15", "text": "Support vector machine (SVM) is an optimal margin based classification technique in machine learning. 
SVM is a binary linear classifier which has been extended to non-linear data using Kernels and multi-class data using various techniques like one-versus-one, one-versus-rest, Crammer Singer SVM, Weston Watkins SVM and directed acyclic graph SVM (DAGSVM) etc. SVM with a linear Kernel is called linear SVM and one with a non-linear Kernel is called non-linear SVM. Linear SVM is an efficient technique for high dimensional data applications like document classification, word-sense disambiguation, drug design etc. because under such data applications, test accuracy of linear SVM is closer to non-linear SVM while its training is much faster than non-linear SVM. SVM is continuously evolving since its inception and researchers have proposed many problem formulations, solvers and strategies for solving SVM. Moreover, due to advancements in the technology, data has taken the form of ‘Big Data’ which have posed a challenge for Machine Learning to train a classifier on this large-scale data. In this paper, we have presented a review on evolution of linear support vector machine classification, its solvers, strategies to improve solvers, experimental results, current challenges and research directions.", "title": "" }, { "docid": "bffc44d02edaa8a699c698185e143d22", "text": "Photoplethysmography (PPG) technology has been used to develop small, wearable, pulse rate sensors. These devices, consisting of infrared light-emitting diodes (LEDs) and photodetectors, offer a simple, reliable, low-cost means of monitoring the pulse rate noninvasively. Recent advances in optical technology have facilitated the use of high-intensity green LEDs for PPG, increasing the adoption of this measurement technique. In this review, we briefly present the history of PPG and recent developments in wearable pulse rate sensors with green LEDs. The application of wearable pulse rate monitors is discussed.", "title": "" }, { "docid": "858557b9e2efa6ea18a7094294bedb4f", "text": "Recent advances in technology have made our work easier compare to earlier times. Computer network is growing day by day but while discussing about the security of computers and networks it has always been a major concerns for organizations varying from smaller to larger enterprises. It is true that organizations are aware of the possible threats and attacks so they always prepare for the safer side but due to some loopholes attackers are able to make attacks. Intrusion detection is one of the major fields of research and researchers are trying to find new algorithms for detecting intrusions. Clustering techniques of data mining is an interested area of research for detecting possible intrusions and attacks. This paper presents a new clustering approach for anomaly intrusion detection by using the approach of K-medoids method of clustering and its certain modifications. The proposed algorithm is able to achieve high detection rate and overcomes the disadvantages of K-means algorithm.", "title": "" }, { "docid": "a6fbd3f79105fd5c9edfc4a0292a3729", "text": "The widespread use of templates on the Web is considered harmful for two main reasons. Not only do they compromise the relevance judgment of many web IR and web mining methods such as clustering and classification, but they also negatively impact the performance and resource usage of tools that process web pages. In this paper we present a new method that efficiently and accurately removes templates found in collections of web pages. Our method works in two steps. 
First, the costly process of template detection is performed over a small set of sample pages. Then, the derived template is removed from the remaining pages in the collection. This leads to substantial performance gains when compared to previous approaches that combine template detection and removal. We show, through an experimental evaluation, that our approach is effective for identifying terms occurring in templates - obtaining F-measure values around 0.9, and that it also boosts the accuracy of web page clustering and classification methods.", "title": "" }, { "docid": "6d091c4be0c954c98caa58b3e9fd7408", "text": "This paper focuses on a field test that locates roof areas with a high solar potential and predicts the solar “harvest” per m. The test analyzes 2.5D LIDAR data provided by official surveying and mapping sources. The primary LIDAR data is prepared by masking the roofs’ contours and afterwards filtering the point cloud by a threshold value. The remaining LIDAR data, which represents the buildings’ roofs, is analyzed according to the slope, the azimuthal exposition and shaded roof areas. The quality assessment of the derived roof areas is carried out by means of a 3D dataset which is semiautomatically acquired from panchromatic stereophotogrammetric aerial photographs.", "title": "" }, { "docid": "ac9f39cedf04028237bb5a5f0cc4fe7a", "text": "A high-efficiency high step-up DC-DC converter is proposed for fuel cell power systems. The proposed system consists of an input-current doubler, an output-voltage doubler, and an active-clamp circuit. The input-current doubler and the output-voltage doubler provide a much higher voltage conversion ratio without using a high turns ratio in the transformer and increase the overall efficiency. A series-resonant circuit of the output-voltage doubler removes the reverse-recovery problem of the rectifying diodes. The active-clamp circuit clamps the surge voltage of switches and recycles the energy stored in the leakage inductance of the transformer. The operation principle of the converter is analyzed and verified. A 1 kW prototype is implemented to show the performance of the proposed converter. The prototype achieved a European efficiency of 96% at an input voltage of 30 V.", "title": "" }, { "docid": "4f848f750cfe4543df43457235ff203a", "text": "The U.S. National Security Agency (NSA) developed the Simon and Speck families of lightweight block ciphers as an aid for securing applications in very constrained environments where AES may not be suitable. This paper summarizes the algorithms, their design rationale, along with current cryptanalysis and implementation results.", "title": "" }, { "docid": "7bfbcf62f9ff94e80913c73e069ace26", "text": "This paper presents an online highly accurate system for automatic number plate recognition (ANPR) that can be used as a basis for many real-world ITS applications. The system is designed to deal with unclear vehicle plates, variations in weather and lighting conditions, different traffic situations, and high-speed vehicles. This paper addresses various issues by presenting proper hardware platforms along with real-time, robust, and innovative algorithms. We have collected huge and highly inclusive data sets of Persian license plates for evaluations, comparisons, and improvement of various involved algorithms. The data sets include images that were captured from crossroads, streets, and highways, in day and night, various weather conditions, and different plate clarities. 
Over these data sets, our system achieves 98.7%, 99.2%, and 97.6% accuracies for plate detection, character segmentation, and plate recognition, respectively. The false alarm rate in plate detection is less than 0.5%. The overall accuracy on the dirty plates portion of our data sets is 91.4%. Our ANPR system has been installed in several locations and has been tested extensively for more than a year. The proposed algorithms for each part of the system are highly robust to lighting changes, size variations, plate clarity, and plate skewness. The system is also independent of the number of plates in captured images. This system has been also tested on three other Iranian data sets and has achieved 100% accuracy in both detection and recognition parts. To show that our ANPR is not language dependent, we have tested our system on available English plates data set and achieved 97% overall accuracy.", "title": "" }, { "docid": "5b507508fd3b3808d61e822d2a91eab9", "text": "In this brief, we propose a stand-alone system-on-a-programmable-chip (SOPC)-based cloud system to accelerate massive electrocardiogram (ECG) data analysis. The proposed system tightly couples network I/O handling hardware to data processing pipelines in a single field-programmable gate array (FPGA), offloading both networking operations and ECG data analysis. In this system, we first propose a massive-sessions optimized TCP/IP hardware stack using a macropipeline architecture to accelerate network packet processing. Second, we propose a streaming architecture to accelerate ECG signal processing, including QRS detection, feature extraction, and classification. We verify our design on XC6VLX550T FPGA using real ECG data. Compared to commercial servers, our system shows up to 38× improvement in performance and 142× improvement in energy efficiency.", "title": "" }, { "docid": "d272cf01340c8dcc3c24651eaf876926", "text": "We propose a new method for learning from a single demonstration to solve hard exploration tasks like the Atari game Montezuma’s Revenge. Instead of imitating human demonstrations, as proposed in other recent works, our approach is to maximize rewards directly. Our agent is trained using off-the-shelf reinforcement learning, but starts every episode by resetting to a state from a demonstration. By starting from such demonstration states, the agent requires much less exploration to learn a game compared to when it starts from the beginning of the game at every episode. We analyze reinforcement learning for tasks with sparse rewards in a simple toy environment, where we show that the run-time of standard RL methods scales exponentially in the number of states between rewards. Our method reduces this to quadratic scaling, opening up many tasks that were previously infeasible. We then apply our method to Montezuma’s Revenge, for which we present a trained agent achieving a high-score of 74,500, better than any previously published result.", "title": "" }, { "docid": "6524efda795834105bae7d65caf15c53", "text": "PURPOSE\nThis paper examines respondents' relationship with work following a stroke and explores their experiences including the perceived barriers to and facilitators of a return to employment.\n\n\nMETHOD\nOur qualitative study explored the experiences and recovery of 43 individuals under 60 years who had survived a stroke. 
Participants, who had experienced a first stroke less than three months before and who could engage in in-depth interviews, were recruited through three stroke services in South East England. Each participant was invited to take part in four interviews over an 18-month period and to complete a diary for one week each month during this period.\n\n\nRESULTS\nAt the time of their stroke a minority of our sample (12, 28% of the original sample) were not actively involved in the labour market and did not return to the work during the period that they were involved in the study. Of the 31 participants working at the time of the stroke, 13 had not returned to work during the period that they were involved in the study, six returned to work after three months and nine returned in under three months and in some cases virtually immediately after their stroke. The participants in our study all valued work and felt that working, especially in paid employment, was more desirable than not working. The participants who were not working at the time of their stroke or who had not returned to work during the period of the study also endorsed these views. However they felt that there were a variety of barriers and practical problems that prevented them working and in some cases had adjusted to a life without paid employment. Participants' relationship with work was influenced by barriers and facilitators. The positive valuations of work were modified by the specific context of stroke, for some participants work was a cause of stress and therefore potentially risky, for others it was a way of demonstrating recovery from stroke. The value and meaning varied between participants and this variation was related to past experience and biography. Participants who wanted to work indicated that their ability to work was influenced by the nature and extent of their residual disabilities. A small group of participants had such severe residual disabilities that managing everyday life was a challenge and that working was not a realistic prospect unless their situation changed radically. The remaining participants all reported residual disabilities. The extent to which these disabilities formed a barrier to work depended on an additional range of factors that acted as either barriers or facilitator to return to work. A flexible working environment and supportive social networks were cited as facilitators of return to paid employment.\n\n\nCONCLUSION\nParticipants in our study viewed return to work as an important indicator of recovery following a stroke. Individuals who had not returned to work felt that paid employment was desirable but they could not overcome the barriers. Individuals who returned to work recognized the barriers but had found ways of managing them.", "title": "" } ]
scidocsrr
2f1a5e3459587e0c087e498679e2b507
How to Combine Homomorphic Encryption and Garbled Circuits Improved Circuits and Computing the Minimum Distance Efficiently
[ { "docid": "3afa5356d956e2a525836b873442aa6b", "text": "The problem of secure data processing by means of a neural network (NN) is addressed. Secure processing refers to the possibility that the NN owner does not get any knowledge about the processed data since they are provided to him in encrypted format. At the same time, the NN itself is protected, given that its owner may not be willing to disclose the knowledge embedded within it. The considered level of protection ensures that the data provided to the network and the network weights and activation functions are kept secret. Particular attention is given to prevent any disclosure of information that could bring a malevolent user to get access to the NN secrets by properly inputting fake data to any point of the proposed protocol. With respect to previous works in this field, the interaction between the user and the NN owner is kept to a minimum with no resort to multiparty computation protocols.", "title": "" } ]
[ { "docid": "2c56891c1c9f128553bab35d061049b8", "text": "RISC vs. CISC wars raged in the 1980s when chip area and processor design complexity were the primary constraints and desktops and servers exclusively dominated the computing landscape. Today, energy and power are the primary design constraints and the computing landscape is significantly different: growth in tablets and smartphones running ARM (a RISC ISA) is surpassing that of desktops and laptops running x86 (a CISC ISA). Further, the traditionally low-power ARM ISA is entering the high-performance server market, while the traditionally high-performance x86 ISA is entering the mobile low-power device market. Thus, the question of whether ISA plays an intrinsic role in performance or energy efficiency is becoming important, and we seek to answer this question through a detailed measurement based study on real hardware running real applications. We analyze measurements on the ARM Cortex-A8 and Cortex-A9 and Intel Atom and Sandybridge i7 microprocessors over workloads spanning mobile, desktop, and server computing. Our methodical investigation demonstrates the role of ISA in modern microprocessors' performance and energy efficiency. We find that ARM and x86 processors are simply engineering design points optimized for different levels of performance, and there is nothing fundamentally more energy efficient in one ISA class or the other. The ISA being RISC or CISC seems irrelevant.", "title": "" }, { "docid": "362b1a5119733eba058d1faab2d23ebf", "text": "§ Mission and structure of the project. § Overview of the Stone Man version of the Guide to the SWEBOK. § Status and development process of the Guide. § Applications of the Guide in the fields of education, human resource management, professional development and licensing and certification. § Class exercise in applying the Guide to defining the competencies needed to support software life cycle process deployment. § Strategy for uptake and promotion of the Guide. § Discussion of promotion, trial usage and experimentation. Workshop Leaders:", "title": "" }, { "docid": "8e3366b6102ad6420972d4daee40d2a8", "text": "Containers are increasingly gaining popularity and becoming one of the major deployment models in cloud environments. To evaluate the performance of scheduling and allocation policies in containerized cloud data centers, there is a need for evaluation environments that support scalable and repeatable experiments. Simulation techniques provide repeatable and controllable environments, and hence, they serve as a powerful tool for such purpose. This paper introduces ContainerCloudSim, which provides support for modeling and simulation of containerized cloud computing environments. We developed a simulation architecture for containerized clouds and implemented it as an extension of CloudSim. We described a number of use cases to demonstrate how one can plug in and compare their container scheduling and provisioning policies in terms of energy efficiency and SLA compliance. Our system is highly scalable as it supports simulation of large number of containers, given that there are more containers than virtual machines in a data center. Copyright © 2016 John Wiley & Sons, Ltd.", "title": "" }, { "docid": "ed77ce10f448cb58568a63089903a4a8", "text": "Sentence representation at the semantic level is a challenging task for Natural Language Processing and Artificial Intelligence. Despite the advances in word embeddings (i.e. 
word vector representations), capturing sentence meaning is an open question due to complexities of semantic interactions among words. In this paper, we present an embedding method, which is aimed at learning unsupervised sentence representations from unlabeled text. We propose an unsupervised method that models a sentence as a weighted series of word embeddings. The weights of the word embeddings are fitted by using Shannon’s word entropies provided by the Term Frequency–Inverse Document Frequency (TF–IDF) transform. The hyperparameters of the model can be selected according to the properties of data (e.g. sentence length and textual gender). Hyperparameter selection involves word embedding methods and dimensionalities, as well as weighting schemata. Our method offers advantages over existing methods: identifiable modules, short-term training, online inference of (unseen) sentence representations, as well as independence from domain, external knowledge and language resources. Results showed that our model outperformed the state of the art in well-known Semantic Textual Similarity (STS) benchmarks. Moreover, our model reached state-of-the-art performance when compared to supervised and knowledge-based STS systems.", "title": "" }, { "docid": "6f4e5448f956017c39c1727e0eb5de7b", "text": "Recently, community search over graphs has attracted significant attention and many algorithms have been developed for finding dense subgraphs from large graphs that contain given query nodes. In applications such as analysis of protein protein interaction (PPI) networks, citation graphs, and collaboration networks, nodes tend to have attributes. Unfortunately, most previously developed community search algorithms ignore these attributes and result in communities with poor cohesion w.r.t. their node attributes. In this paper, we study the problem of attribute-driven community search, that is, given an undirected graph G where nodes are associated with attributes, and an input query Q consisting of nodes Vq and attributes Wq , find the communities containing Vq , in which most community members are densely inter-connected and have similar attributes. We formulate our problem of finding attributed truss communities (ATC), as finding all connected and close k-truss subgraphs containing Vq, that are locally maximal and have the largest attribute relevance score among such subgraphs. We design a novel attribute relevance score function and establish its desirable properties. The problem is shown to be NP-hard. However, we develop an efficient greedy algorithmic framework, which finds a maximal k-truss containing Vq, and then iteratively removes the nodes with the least popular attributes and shrinks the graph so as to satisfy community constraints. 
We also build an elegant index to maintain the known k-truss structure and attribute information, and propose efficient query processing algorithms. Extensive experiments on large real-world networks with ground-truth communities shows the efficiency and effectiveness of our proposed methods.", "title": "" }, { "docid": "a973ed3011d9c07ddab4c15ef82fe408", "text": "OBJECTIVES\nTo assess the efficacy of a 6-week interdisciplinary treatment that combines coordinated psychological, medical, educational, and physiotherapeutic components (PSYMEPHY) over time compared to standard pharmacologic care.\n\n\nMETHODS\nRandomised controlled trial with follow-up at 6 months for the PSYMEPHY and control groups and 12 months for the PSYMEPHY group. Participants were 153 outpatients with FM recruited from a hospital pain management unit. Patients randomly allocated to the control group (CG) received standard pharmacologic therapy. The experimental group (EG) received an interdisciplinary treatment (12 sessions). The main outcome was changes in quality of life, and secondary outcomes were pain, physical function, anxiety, depression, use of pain coping strategies, and satisfaction with treatment as measured by the Fibromyalgia Impact Questionnaire, the Hospital Anxiety and Depression Scale, the Coping with Chronic Pain Questionnaire, and a question regarding satisfaction with the treatment.\n\n\nRESULTS\nSix months after the intervention, significant improvements in quality of life (p=0.04), physical function (p=0.01), and pain (p=0.03) were seen in the PSYMEPHY group (n=54) compared with controls (n=56). Patients receiving the intervention reported greater satisfaction with treatment. Twelve months after the intervention, patients in the PSYMEPHY group (n=58) maintained statistically significant improvements in quality of life, physical functioning, pain, and symptoms of anxiety and depression, and were less likely to use maladaptive passive coping strategies compared to baseline.\n\n\nCONCLUSIONS\nAn interdisciplinary treatment for FM was associated with improvements in quality of life, pain, physical function, anxiety and depression, and pain coping strategies up to 12 months after the intervention.", "title": "" }, { "docid": "e2817500683f4eea7e4ed9e0484b303a", "text": "This paper presents the Transport Disruption ontology, a formal framework for modelling travel and transport related events that have a disruptive impact on traveller’s journeys. We discuss related models, describe how transport events and their impacts are captured, and outline use of the ontology within an interlinked repository of the travel information to support intelligent transport systems.", "title": "" }, { "docid": "e7d955c48e5bdd86ae21a61fcd130ae2", "text": "We present Deep Graph Infomax (DGI), a general approach for learning node representations within graph-structured data in an unsupervised manner. DGI relies on maximizing mutual information between patch representations and corresponding high-level summaries of graphs—both derived using established graph convolutional network architectures. The learnt patch representations summarize subgraphs centered around nodes of interest, and can thus be reused for downstream node-wise learning tasks. In contrast to most prior approaches to unsupervised learning with GCNs, DGI does not rely on random walk objectives, and is readily applicable to both transductive and inductive learning setups. 
We demonstrate competitive performance on a variety of node classification benchmarks, which at times even exceeds the performance of supervised learning.", "title": "" }, { "docid": "9d5ba6f0beb2c9f03ea29f8fc35d51bb", "text": "Independent component analysis (ICA) is a promising analysis method that is being increasingly applied to fMRI data. A principal advantage of this approach is its applicability to cognitive paradigms for which detailed models of brain activity are not available. Independent component analysis has been successfully utilized to analyze single-subject fMRI data sets, and an extension of this work would be to provide for group inferences. However, unlike univariate methods (e.g., regression analysis, Kolmogorov-Smirnov statistics), ICA does not naturally generalize to a method suitable for drawing inferences about groups of subjects. We introduce a novel approach for drawing group inferences using ICA of fMRI data, and present its application to a simple visual paradigm that alternately stimulates the left or right visual field. Our group ICA analysis revealed task-related components in left and right visual cortex, a transiently task-related component in bilateral occipital/parietal cortex, and a non-task-related component in bilateral visual association cortex. We address issues involved in the use of ICA as an fMRI analysis method such as: (1) How many components should be calculated? (2) How are these components to be combined across subjects? (3) How should the final results be thresholded and/or presented? We show that the methodology we present provides answers to these questions and lay out a process for making group inferences from fMRI data using independent component analysis.", "title": "" }, { "docid": "868f6c927cf500aed70cfb921b0564b2", "text": "The battery management system (BMS) is a critical component of electric and hybrid electric vehicles. The purpose of the BMS is to guarantee safe and reliable battery operation. To maintain the safety and reliability of the battery, state monitoring and evaluation, charge control, and cell balancing are functionalities that have been implemented in BMS. As an electrochemical product, a battery acts differently under different operational and environmental conditions. The uncertainty of a battery’s performance poses a challenge to the implementation of these functions. This paper addresses concerns for current BMSs. State evaluation of a battery, including state of charge, state of health, and state of life, is a critical task for a BMS. Through reviewing the latest methodologies for the state evaluation of batteries, the future challenges for BMSs are presented and possible solutions are proposed as well.", "title": "" }, { "docid": "c64cfef80a4d49870894cd5f910896b6", "text": "Digital music has become prolific in the web in recent decades. Automated recommendation systems are essential for users to discover music they love and for artists to reach appropriate audience. When manual annotations and user preference data is lacking (e.g. for new artists) these systems must rely on content based methods. Besides powerful machine learning tools for classification and retrieval, a key component for successful recommendation is the audio content representation. Good representations should capture informative musical patterns in the audio signal of songs. 
These representations should be concise, to enable efficient (low storage, easy indexing, fast search) management of huge music repositories, and should also be easy and fast to compute, to enable real-time interaction with a user supplying new songs to the system. Before designing new audio features, we explore the usage of traditional local features, while adding a stage of encoding with a pre-computed codebook and a stage of pooling to get compact vectorial representations. We experiment with different encoding methods, namely the LASSO, vector quantization (VQ) and cosine similarity (CS). We evaluate the representations' quality in two music information retrieval applications: query-by-tag and query-by-example. Our results show that concise representations can be used for successful performance in both applications. We recommend using top-τ VQ encoding, which consistently performs well in both applications, and requires much less computation time than the LASSO.", "title": "" }, { "docid": "6b6790a92cb4dafb816648cdd5f51aa1", "text": "An algebraic nonlinear analysis of the switched reluctance drive system is described. The analysis is intended to provide an understanding of the factors that determine the kVA requirements of the electronic power converter and to determine the fundamental nature of the torque/speed characteristics. The effect of saturation is given special attention. It is shown that saturation has the two main effects of increasing the motor size required for a given torque, and at the same time decreasing the kVA per horsepower (i.e., increasing the effective power factor by analogy with an ac machine). The kVA per horsepower is lower than predicted by simple linear analysis that neglects saturation. Necessary conditions are also developed for a flat-topped current waveform by correctly determining the motor back-EMF. The reason why it is desirable to allow the phase current to continue (though with much reduced magnitude) even after the poles have passed the aligned position is explained. The theory provides a formula for determining the required commutation angle for the phase current. The basis is provided for an estimation of the kVA requirements of the switched reluctance (SR) drive. These requirements have been measured and also calculated by a computer simulation program.", "title": "" }, { "docid": "0df2ca944dcdf79369ef5a7424bf3ffe", "text": "This article first presents two theories representing distinct approaches to the field of stress research: Selye's theory of `systemic stress' based in physiology and psychobiology, and the `psychological stress' model developed by Lazarus. In the second part, the concept of coping is described. Coping theories may be classified according to two independent parameters: traitoriented versus state-oriented, and microanalytic versus macroanalytic approaches. The multitude of theoretical conceptions is based on the macroanalytic, trait-oriented approach. Examples of this approach that are presented in this article are `repression–sensitization,' `monitoringblunting,' and the `model of coping modes.' The article closes with a brief outline of future perspectives in stress and coping research.", "title": "" }, { "docid": "043b51b50f17840508b0dfb92c895fc9", "text": "Over the years, several security measures have been employed to combat the menace of insecurity of lives and property. 
This is done by preventing unauthorized entrance into buildings through entrance doors using conventional and electronic locks, discrete access code, and biometric methods such as the finger prints, thumb prints, the iris and facial recognition. In this paper, a prototyped door security system is designed to allow a privileged user to access a secure keyless door where valid smart card authentication guarantees an entry. The model consists of hardware module and software which provides a functionality to allow the door to be controlled through the authentication of smart card by the microcontroller unit. (", "title": "" }, { "docid": "f60426bdd66154a7d2cb6415abd8f233", "text": "In the rapidly expanding field of parallel processing, job schedulers are the “operating systems” of modern big data architectures and supercomputing systems. Job schedulers allocate computing resources and control the execution of processes on those resources. Historically, job schedulers were the domain of supercomputers, and job schedulers were designed to run massive, long-running computations over days and weeks. More recently, big data workloads have created a need for a new class of computations consisting of many short computations taking seconds or minutes that process enormous quantities of data. For both supercomputers and big data systems, the efficiency of the job scheduler represents a fundamental limit on the efficiency of the system. Detailed measurement and modeling of the performance of schedulers are critical for maximizing the performance of a large-scale computing system. This paper presents a detailed feature analysis of 15 supercomputing and big data schedulers. For big data workloads, the scheduler latency is the most important performance characteristic of the scheduler. A theoretical model of the latency of these schedulers is developed and used to design experiments targeted at measuring scheduler latency. Detailed benchmarking of four of the most popular schedulers (Slurm, Son of Grid Engine, Mesos, and Hadoop YARN) are conducted. The theoretical model is compared with data and demonstrates that scheduler performance can be characterized by two key parameters: the marginal latency of the scheduler ts and a nonlinear exponent αs. For all four schedulers, the utilization of the computing system decreases to <10% for computations lasting only a few seconds. Multi-level schedulers (such as LLMapReduce) that transparently aggregate short computations can improve utilization for these short computations to >90% for all four of the schedulers that were tested.", "title": "" }, { "docid": "75246f1ef21d4ce739e8b27753c52ee1", "text": "The ship control system for the U.S. Navy's newest attack submarine, Seawolf; incorporates hardware modular redundancy both in its core processing and its input/output system. This paper provides a practical experience report on the redundancy management software services developed for this system. Introductory material is presented to provide contextual information regarding the overall ship control system. An overview of the system's processing platform is presented in sufficient detail to define the problems associated with redundancy management and to describe hardware functionality which supports the software services. 
Policies and procedures for detection and isolation of faults are discussed as are reconfiguration responses to faults.", "title": "" }, { "docid": "e7d36dc01a3e20c3fb6d2b5245e46705", "text": "A gender gap in mathematics achievement persists in some nations but not in others. In light of the underrepresentation of women in careers in science, technology, mathematics, and engineering, increasing research attention is being devoted to understanding gender differences in mathematics achievement, attitudes, and affect. The gender stratification hypothesis maintains that such gender differences are closely related to cultural variations in opportunity structures for girls and women. We meta-analyzed 2 major international data sets, the 2003 Trends in International Mathematics and Science Study and the Programme for International Student Assessment, representing 493,495 students 14-16 years of age, to estimate the magnitude of gender differences in mathematics achievement, attitudes, and affect across 69 nations throughout the world. Consistent with the gender similarities hypothesis, all of the mean effect sizes in mathematics achievement were very small (d < 0.15); however, national effect sizes showed considerable variability (ds = -0.42 to 0.40). Despite gender similarities in achievement, boys reported more positive math attitudes and affect (ds = 0.10 to 0.33); national effect sizes ranged from d = -0.61 to 0.89. In contrast to those of previous tests of the gender stratification hypothesis, our results point to specific domains of gender equity responsible for gender gaps in math. Gender equity in school enrollment, women's share of research jobs, and women's parliamentary representation were the most powerful predictors of cross-national variability in gender gaps in math. Results are situated within the context of existing research demonstrating apparently paradoxical effects of societal gender equity and highlight the significance of increasing girls' and women's agency cross-nationally.", "title": "" }, { "docid": "b249fe89bcfc985fcb4f9128d12c28b3", "text": "Prevalent matrix completion methods capture only the low-rank property which gives merely a constraint that the data points lie on some low-dimensional subspace, but generally ignore the extra structures (beyond low-rank) that specify in more detail how the data points lie on the subspace. Whenever the data points are not uniformly distributed on the low-dimensional subspace, the row-coherence of the target matrix to recover could be considerably high and, accordingly, prevalent methods might fail even if the target matrix is fairly low-rank. To relieve this challenge, we suggest to consider a model termed low-rank factor decomposition (LRFD), which imposes an additional restriction that the data points must be represented as linear, compressive combinations of the bases in a given dictionary. We show that LRFD can effectively mitigate the challenges of high row-coherence, provided that its dictionary is configured properly. Namely, it is mathematically proven that if the dictionary is well-conditioned and low-rank, then LRFD can weaken the dependence on the row-coherence. In particular, if the dictionary itself is low-rank, then the dependence on the row-coherence can be entirely removed. 
Subsequently, we devise two practical algorithms to obtain proper dictionaries in unsupervised environments: one uses the existing matrix completion methods to construct the dictionary in LRFD, and the other tries to learn a proper dictionary from the data given. Experiments on randomly generated matrices and motion datasets show superior performance of our proposed algorithms.", "title": "" }, { "docid": "ed351364658a99d4d9c10dd2b9be3c92", "text": "Information technology continues to provide opportunities to alter the decision-making behavior of individuals, groups and organizations. Two related changes that are emerging are social media and Web 2.0 technologies. These technologies can positively and negatively impact the rationality and effectiveness of decision-making. For example, changes that help marketing managers alter consumer decision behavior may result in poorer decisions by consumers. Also, managers who heavily rely on a social network rather than expert opinion and facts may make biased decisions. A number of theories can help explain how social media may impact decision-making and the consequences.", "title": "" }, { "docid": "16f5686c1675d0cf2025cf812247ab45", "text": "This paper presents the system analysis and implementation of a soft switching Sepic-Cuk converter to achieve zero voltage switching (ZVS). In the proposed converter, the Sepic and Cuk topologies are combined together in the output side. The features of the proposed converter are to reduce the circuit components (share the power components in the transformer primary side) and to share the load current. Active snubber is connected in parallel with the primary side of the transformer to release the energy stored in the leakage inductor of the transformer and to limit the peak voltage stress of switching devices when the main switch is turned off. The active snubber can achieve ZVS turn-on for power switches. Experimental results, taken from a laboratory prototype rated at 300W, are presented to verify the effectiveness of the proposed converter.", "title": "" } ]
scidocsrr
7af09346ef65e84023a5305304485eec
An actuator with physically variable stiffness for highly dynamic legged locomotion
[ { "docid": "cc4c028027c1761428d5f80e07b1b614", "text": "When humans and other mammals run, the body's complex system of muscle, tendon and ligament springs behaves like a single linear spring ('leg spring'). A simple spring-mass model, consisting of a single linear leg spring and a mass equivalent to the animal's mass, has been shown to describe the mechanics of running remarkably well. Force platform measurements from running animals, including humans, have shown that the stiffness of the leg spring remains nearly the same at all speeds and that the spring-mass system is adjusted for higher speeds by increasing the angle swept by the leg spring. The goal of the present study is to determine the relative importance of changes to the leg spring stiffness and the angle swept by the leg spring when humans alter their stride frequency at a given running speed. Human subjects ran on treadmill-mounted force platform at 2.5ms-1 while using a range of stride frequencies from 26% below to 36% above the preferred stride frequency. Force platform measurements revealed that the stiffness of the leg spring increased by 2.3-fold from 7.0 to 16.3 kNm-1 between the lowest and highest stride frequencies. The angle swept by the leg spring decreased at higher stride frequencies, partially offsetting the effect of the increased leg spring stiffness on the mechanical behavior of the spring-mass system. We conclude that the most important adjustment to the body's spring system to accommodate higher stride frequencies is that leg spring becomes stiffer.", "title": "" } ]
[ { "docid": "b0ca102c19bde55bddff695222a64423", "text": "We present a testbed for exploring novel smart refrigerator interactions, and identify three key adoption-limiting interaction shortcomings of state-of-the-art smart fridges: lack of 1) user experience focus, 2) low-intrusion object recognition and 2) automatic item position detection. Our testbed system addresses these limitations by a combination of sensors, software filters, architectural components and a RESTful API to track interaction events in real-time, and retrieve current state and historical data to learn patterns and recommend user actions. We evaluate the accuracy and overhead of our system in a realistic interaction flow. The accuracy was measured to 8388% and the overhead compared to a representative state-ofthe-art barcode scanner improved by 27%. We also showcase two applications built on top of our testbed, one for finding expired items and ingredients of dishes; and one to monitor your health. The pattern that these applications have in common is that they cast the interaction as an item-recommendation problem triggered when the user takes something out. Our testbed could help reveal further user-experience centric interaction patterns and new classes of applications for smart fridges that inherently, by relying on our testbed primitives, mitigate the issues with existing approaches.", "title": "" }, { "docid": "d55aae728991060ed4ba1f9a6b59e2fe", "text": "Evolutionary algorithms have become robust tool in data processing and modeling of dynamic, complex and non-linear processes due to their flexible mathematical structure to yield optimal results even with imprecise, ambiguity and noise at its input. The study investigates evolutionary algorithms for solving Sudoku task. Various hybrids are presented here as veritable algorithm for computing dynamic and discrete states in multipoint search in CSPs optimization with application areas to include image and video analysis, communication and network design/reconstruction, control, OS resource allocation and scheduling, multiprocessor load balancing, parallel processing, medicine, finance, security and military, fault diagnosis/recovery, cloud and clustering computing to mention a few. Solution space representation and fitness functions (as common to all algorithms) were discussed. For support and confidence model adopted π1=0.2 and π2=0.8 respectively yields better convergence rates – as other suggested value combinations led to either a slower or non-convergence. CGA found an optimal solution in 32 seconds after 188 iterations in 25runs; while GSAGA found its optimal solution in 18seconds after 402 iterations with a fitness progression achieved in 25runs and consequently, GASA found an optimal solution 2.112seconds after 391 iterations with fitness progression after 25runs respectively.", "title": "" }, { "docid": "4b7ffae0dfa7e43b5456ec08fbd0824e", "text": "METHODS\nIn this study of patients who underwent internal fixation without fusion for a burst thoracolumbar or lumbar fracture, we compared the serial changes in the injured disc height (DH), and the fractured vertebral body height (VBH) and kyphotic angle between patients in whom the implants were removed and those in whom they were not. Radiological parameters such as injured DH, fractured VBH and kyphotic angle were measured. 
Functional outcomes were evaluated using the Greenough low back outcome scale and a VAS scale for pain.\n\n\nRESULTS\nBetween June 1996 and May 2012, 69 patients were analysed retrospectively; 47 were included in the implant removal group and 22 in the implant retention group. After a mean follow-up of 66 months (48 to 107), eight patients (36.3%) in the implant retention group had screw breakage. There was no screw breakage in the implant removal group. All radiological and functional outcomes were similar between these two groups. Although solid union of the fractured vertebrae was achieved, the kyphotic angle and the anterior third of the injured DH changed significantly with time (p < 0.05).\n\n\nDISCUSSION\nThe radiological and functional outcomes of both implant removal and retention were similar. Although screw breakage may occur, the implants may not need to be removed.\n\n\nTAKE HOME MESSAGE\nImplant removal may not be needed for patients with burst fractures of the thoracolumbar and lumbar spine after fixation without fusion. However, information should be provided beforehand regarding the possibility of screw breakage.", "title": "" }, { "docid": "6b5599f9041ca5dab06620ce9ee9e2fb", "text": "Poor nutrition can lead to reduced immunity, increased susceptibility to disease, impaired physical and mental development, and reduced productivity. A conversational agent can support people as a virtual coach; however, building such systems still has its associated challenges and limitations. This paper describes the background and motivation for chatbot systems in the context of healthy nutrition recommendation. We discuss current challenges associated with chatbot application, tackling technical, theoretical, behavioural, and social aspects of the challenges. We then propose a pipeline to be used as guidelines by developers to implement theoretically and technically robust chatbot systems. Keywords-Health, Conversational agent, Recommender systems, HCI, Behaviour Change, Artificial intelligence", "title": "" }, { "docid": "6ac996c20f036308f36c7b667babe876", "text": "Patents are a very useful source of technical information. The public availability of patents over the Internet, with, for some databases (e.g. Espacenet), the assurance of a constant format, allows the development of high value added products using this information source and provides an easy way to analyze patent information. This simple and powerful tool facilitates the use of patents in academic research, in SMEs and in developing countries providing a way to use patents as an ideas resource thus improving technological innovation.", "title": "" }, { "docid": "ed16247afd56d561aabe8bb8f3e0c6fe", "text": "By combining a horizontal planar dipole and a vertically oriented folded shorted patch antenna, a new low-profile magneto-electric dipole antenna is presented. The antenna is simply excited by a coaxial feed that works as a balun. A prototype was fabricated and measured. Simulated and measured results agree well. An impedance bandwidth of 45.6% for ${\\rm SWR}\\leq 1.5$ from 1.86 to 2.96 GHz was achieved. Stable radiation pattern with low cross polarization, low back radiation, and an antenna gain of 8.1 $\\pm$ 0.8 dBi was found over the operating frequencies. 
The height of the antenna is only 0.169λ (where λ is the free-space wavelength at the center frequency). In addition, the antenna is dc grounded, which satisfies the requirement of many outdoor antennas.", "title": "" }, { "docid": "69f3c2dbffe44c7da113798a1f528d72", "text": "Behavior modification in health is difficult, as habitual behaviors are extremely well-learned, by definition. This research is focused on building a persuasive system for behavior modification around emotional eating. In this paper, we make strides towards building a just-in-time support system for emotional eating in three user studies. The first two studies involved participants using a custom mobile phone application for tracking emotions, food, and receiving interventions. We found lots of individual differences in emotional eating behaviors and that most participants wanted personalized interventions, rather than a pre-determined intervention. Finally, we also designed a novel, wearable sensor system for detecting emotions using a machine learning approach. This system consisted of physiological sensors which were placed into women's brassieres. We tested the sensing system and found positive results for emotion detection in this mobile, wearable system.", "title": "" }, { "docid": "f0d8d6d1adaa765153f2ec93266889a3", "text": "We present a new approach to localize extensive facial landmarks with a coarse-to-fine convolutional network cascade. Deep convolutional neural networks (DCNN) have been successfully utilized in facial landmark localization for two-fold advantages: 1) geometric constraints among facial points are implicitly utilized, 2) huge amount of training data can be leveraged. However, in the task of extensive facial landmark localization, a large number of facial landmarks (more than 50 points) are required to be located in a unified system, which poses great difficulty in the structure design and training process of traditional convolutional networks. In this paper, we design a four-level convolutional network cascade, which tackles the problem in a coarse-to-fine manner. In our system, each network level is trained to locally refine a subset of facial landmarks generated by previous network levels. In addition, each level predicts explicit geometric constraints (the position and rotation angles of a specific facial component) to rectify the inputs of the current network level. The combination of coarse-to-fine cascade and geometric refinement enables our system to locate extensive facial landmarks (68 points) accurately in the 300-W facial landmark localization challenge.", "title": "" }, { "docid": "a74081f7108e62fadb48446255dd246b", "text": "Existing fuzzy neural networks (FNNs) are mostly developed under a shallow network configuration having lower generalization power than those of deep structures. This paper proposes a novel self-organizing deep fuzzy neural network, namely deep evolving fuzzy neural networks (DEVFNN). Fuzzy rules can be automatically extracted from data streams or removed if they play little role during their lifespan. The structure of the network can be deepened on demand by stacking additional layers using a drift detection method which not only detects the covariate drift, variations of input space, but also accurately identifies the real drift, dynamic changes of both feature space and target space.
DEVFNN is developed under the stacked generalization principle via the feature augmentation concept, where a recently developed algorithm, namely the Generic Classifier (gClass), drives the hidden layer. It is equipped with an automatic feature selection method which controls activation and deactivation of input attributes to induce varying subsets of input features. A deep network simplification procedure is put forward using the concept of hidden layer merging to prevent uncontrollable growth of the input space dimension due to the nature of the feature augmentation approach in building a deep network structure. DEVFNN works in a sample-wise fashion and is compatible with data stream applications. The efficacy of DEVFNN has been thoroughly evaluated using six datasets with non-stationary properties under the prequential test-then-train protocol. It has been compared with four state-of-the-art data stream methods and its shallow counterpart, where DEVFNN demonstrates an improvement in classification accuracy. Moreover, it is also shown that the concept drift detection method is an effective tool to control the depth of the network structure, while the hidden layer merging scenario is capable of simplifying the network complexity of a deep network with negligible compromise of generalization performance.", "title": "" }, { "docid": "3072c5458a075e6643a7679ccceb1417", "text": "A novel interleaved flyback converter with leakage energy recycled is proposed. The proposed converter is based on a dual-switch dual-transformer flyback topology. Two clamping diodes are used to reduce the voltage stress on the power switches to the input voltage level and also to recycle leakage inductance energy to the input voltage and capacitor. Besides, interleaved control is implemented to reduce the output current ripple. In addition, the voltage on the primary windings is reduced to half of the input voltage, thus reducing the turns ratio of the transformers to improve efficiency. The operating principle and the steady-state analysis of the proposed converter are discussed in detail. Finally, an experimental prototype is implemented with a 400 V input voltage and a 24 V/300 W output to verify the feasibility of the proposed converter. The experimental results reveal that the highest efficiency of the proposed converter is 94.42%, the full load efficiency is 92.7%, and the 10% load efficiency is 92.61%.", "title": "" }, { "docid": "836a5f20cc1765e664e0d4386735efdb", "text": "Although a software application always executes within a particular environment, current testing methods have largely ignored these environmental factors. Many applications execute in an environment that contains a database. In this paper, we propose a family of test adequacy criteria that can be used to assess the quality of test suites for database-driven applications. Our test adequacy criteria use dataflow information that is associated with the entities in a relational database. Furthermore, we develop a unique representation of a database-driven application that facilitates the enumeration of database interaction associations. These associations can reflect an application's definition and use of database entities at multiple levels of granularity.
The usage of a tool to calculate intraprocedural database interaction associations for two case study applications indicates that our adequacy criteria can be computed with an acceptable time and space overhead.", "title": "" }, { "docid": "6381c10a963b709c4af88047f38cc08c", "text": "A great deal of research over the last forty years has focused on solving the job-shop problem (ΠJ), resulting in a wide variety of approaches. Recently, much effort has been concentrated on hybrid methods to solve ΠJ, as a single technique cannot solve this stubborn problem. As a result, attention has turned to techniques that combine myopic, problem-specific methods with a meta-strategy which guides the search out of local optima. These approaches currently provide the best results. Such hybrid techniques are known as iterated local search algorithms or meta-heuristics. In this paper we seek to assess the work done in the job-shop domain by providing a review of many of the techniques used. The impact of the major contributions is indicated by applying these techniques to a set of standard benchmark problems. It is established that methods such as Tabu Search, Genetic Algorithms and Simulated Annealing should be considered complementary rather than competitive. In addition, this work suggests guidelines on features that should be incorporated to create a good ΠJ system. Finally, possible directions for future work are highlighted so that current barriers within ΠJ may be surmounted as we approach the 21st Century.", "title": "" }, { "docid": "8b9143a6345b38fd8a15b86756f75a1f", "text": "A 6.78 MHz resonant wireless power transfer (WPT) system with a 5 W fully integrated power receiver is presented. A conventional low-dropout (LDO) linear regulator supplies power for operating the circuit in the power receiver. However, as the required operating current increases, the power consumption of the LDO regulator increases, which degrades the power efficiency. In order to increase the power efficiency of the receiver, this work proposes a power supply switching circuit (PSSC). When operation starts, the PSSC changes the power source from the low-efficiency LDO regulator to the high-efficiency step-down DC–DC converter. The LDO regulator operates only for initialization. This chip has been fabricated using 0.18 μm high-voltage bipolar–CMOS–DMOS (double-diffused metal–oxide–semiconductor) (BCD) technology with a die area of 2.5 mm × 2.5 mm. A maximum power transfer efficiency of 81% is measured.", "title": "" }, { "docid": "0830abcb23d763c1298bf4605f81eb72", "text": "A key technical challenge in performing 6D object pose estimation from an RGB-D image is to fully leverage the two complementary data sources. Prior works either extract information from the RGB image and depth separately or use costly post-processing steps, limiting their performance in highly cluttered scenes and real-time applications. In this work, we present DenseFusion, a generic framework for estimating the 6D pose of a set of known objects from RGB-D images. DenseFusion is a heterogeneous architecture that processes the two data sources individually and uses a novel dense fusion network to extract pixel-wise dense feature embeddings, from which the pose is estimated. Furthermore, we integrate an end-to-end iterative pose refinement procedure that further improves the pose estimation while achieving near real-time inference.
Our experiments show that our method outperforms state-of-the-art approaches on two datasets, YCB-Video and LineMOD. We also deploy our proposed method to a real robot to grasp and manipulate objects based on the estimated pose. Our code and video are available at https://sites.google.com/view/densefusion/.", "title": "" }, { "docid": "3a1b9a47a7fe51ab19f53ae6aaa18d6d", "text": "The overall context proposed in this paper is part of our long-standing goal to contribute to a group of the community that suffers from Autism Spectrum Disorder (ASD), a lifelong developmental disability. The objective of this paper is to present the development of our pilot experiment protocol where children with ASD will be exposed to the humanoid robot NAO. This fully programmable humanoid offers an ideal research platform for human-robot interaction (HRI). This study serves as the platform for a fundamental investigation to observe the initial response and behavior of the children in the said environment. The system utilizes external cameras, besides the robot's own visual system. The anticipated results are the real initial responses and reactions of ASD children during HRI with the humanoid robot. This shall lead to the adoption of new procedures in ASD therapy based on HRI, especially allowing a non-technical-expert person to be involved in the robotic intervention during the therapy session.", "title": "" }, { "docid": "a5f926bc15c7b3dd75b3e67c8537c3fb", "text": "Practical and theoretical issues are presented concerning the design, implementation, and use of a good, minimal standard random number generator that will port to virtually all systems.", "title": "" }, { "docid": "7a1f409eea5e0ff89b51fe0a26d6db8d", "text": "A multi-agent system consisting of N agents is considered. The problem of steering each agent from its initial position to a desired goal while avoiding collisions with obstacles and other agents is studied. This problem, referred to as the multi-agent collision avoidance problem, is formulated as a differential game. Dynamic feedback strategies that approximate the feedback Nash equilibrium solutions of the differential game are constructed and it is shown that, provided certain assumptions are satisfied, these guarantee that the agents reach their targets while avoiding collisions.", "title": "" }, { "docid": "5392e45840929b05b549a64a250774e5", "text": "Faces in natural images are often occluded by a variety of objects. We propose a fully automated, probabilistic and occlusion-aware 3D morphable face model adaptation framework following an analysis-by-synthesis setup. The key idea is to segment the image into regions explained by separate models. Our framework includes a 3D morphable face model, a prototype-based beard model and a simple model for occlusions and background regions. The segmentation and all the model parameters have to be inferred from the single target image. Face model adaptation and segmentation are solved jointly using an expectation–maximization-like procedure. During the E-step, we update the segmentation and in the M-step the face model parameters are updated. For face model adaptation we apply a stochastic sampling strategy based on the Metropolis–Hastings algorithm. For segmentation, we apply loopy belief propagation for inference in a Markov random field. Illumination estimation is critical for occlusion handling.
Our combined segmentation and model adaptation needs a proper initialization of the illumination parameters. We propose a RANSAC-based robust illumination estimation technique. By applying this method to a large face image database we obtain a first empirical distribution of real-world illumination conditions. The obtained empirical distribution is made publicly available and can be used as prior in probabilistic frameworks, for regularization or to synthesize data for deep learning methods.", "title": "" }, { "docid": "a9ee07074aabe3f30ca0b667ab7cf6ab", "text": "Several recently proposed stochastic optimization methods that have been successfully used in training deep networks such as RMSPROP, ADAM, ADADELTA, NADAM are based on using gradient updates scaled by square roots of exponential moving averages of squared past gradients. In many applications, e.g. learning with large output spaces, it has been empirically observed that these algorithms fail to converge to an optimal solution (or a critical point in nonconvex settings). We show that one cause for such failures is the exponential moving average used in the algorithms. We provide an explicit example of a simple convex optimization setting where ADAM does not converge to the optimal solution, and describe the precise problems with the previous analysis of ADAM algorithm. Our analysis suggests that the convergence issues can be fixed by endowing such algorithms with “long-term memory” of past gradients, and propose new variants of the ADAM algorithm which not only fix the convergence issues but often also lead to improved empirical performance.", "title": "" } ]
scidocsrr
93fa1fa51345165911d9d7f47acff2c6
A Framework for Detection of Video Spam on YouTube
[ { "docid": "7834f32e3d6259f92f5e0beb3a53cc04", "text": "An educational institution needs to have an approximate prior knowledge of enrolled students to predict their performance in future academics. This helps them to identify promising students and also provides them an opportunity to pay attention to and improve those who would probably get lower grades. As a solution, we have developed a system which can predict the performance of students from their previous performances using concepts of data mining techniques under Classification. We have analyzed the data set containing information about students, such as gender, marks scored in the board examinations of classes X and XII, marks and rank in entrance examinations and results in first year of the previous batch of students. By applying the ID3 (Iterative Dichotomiser 3) and C4.5 classification algorithms on this data, we have predicted the general and individual performance of freshly admitted students in future examinations.", "title": "" } ]
[ { "docid": "56a4b5052e4d745e7939e2799a40bfd8", "text": "The evolution of software defined networking (SDN) has played a significant role in the development of next-generation networks (NGN). SDN as a programmable network having “service provisioning on the fly” has induced a keen interest both in academic world and industry. In this article, a comprehensive survey is presented on SDN advancement over conventional network. The paper covers historical evolution in relation to SDN, functional architecture of the SDN and its related technologies, and OpenFlow standards/protocols, including the basic concept of interfacing of OpenFlow with network elements (NEs) such as optical switches. In addition a selective architecture survey has been conducted. Our proposed architecture on software defined heterogeneous network, points towards new technology enabling the opening of new vistas in the domain of network technology, which will facilitate in handling of huge internet traffic and helps infrastructure and service providers to customize their resources dynamically. Besides, current research projects and various activities as being carried out to standardize SDN as NGN by different standard development organizations (SODs) have been duly elaborated to judge how this technology moves towards standardization.", "title": "" }, { "docid": "90eb392765c01b6166daa2a7a62944d1", "text": "Recent studies have demonstrated the potential for reducing energy consumption in integrated circuits by allowing errors during computation. While most proposed techniques for achieving this rely on voltage overscaling (VOS), this paper shows that Imprecise Hardware (IHW) with design-time structural parameters can achieve orthogonal energy-quality tradeoffs. Two IHW adders are improved and two IHW multipliers are introduced in this paper. In addition, a simulation-free error estimation technique is proposed to rapidly and accurately estimate the impact of IHW on output quality. Finally, a quality-aware energy minimization methodology is presented. To validate this methodology, experiments are conducted on two computational kernels: DOT-PRODUCT and L2-NORM -- used in three applications -- Leukocyte Tracker, SVM classification and K-means clustering. Results show that the Hellinger distance between estimated and simulated error distribution is within 0.05 and that the methodology enables designers to explore energy-quality tradeoffs with significant reduction in simulation complexity.", "title": "" }, { "docid": "b4874b03c639ee105f76266d37540a54", "text": "We tested the validity and reliability of the BioSpace InBody 320, Omron and Bod-eComm body composition devices in men and women (n 254; 21-80 years) and boys and girls (n 117; 10-17 years). We analysed percentage body fat (%BF) and compared the results with dual-energy X-ray absorptiometry (DEXA) in adults and compared the results of the InBody with underwater weighing (UW) in children. All body composition devices were correlated (r 0.54-0.97; P< or =0.010) to DEXA except the Bod-eComm in women aged 71-80 years (r 0.54; P=0.106). In girls, the InBody %BF was correlated with UW (r 0.79; P< or =0.010); however, a more moderate correlation (r 0.69; P< or =0.010) existed in boys. Bland-Altman plots indicated that all body composition devices underestimated %BF in adults (1.0-4.8 %) and overestimated %BF in children (0.3-2.3 %). 
Lastly, independent t tests revealed that the mean %BF assessed by the Bod-eComm in women (aged 51-60 and 71-80 years) and in the Omron (age 18-35 years) were significantly different compared with DEXA (P< or =0.010). In men, the Omron (aged 18-35 years), and the InBody (aged 36-50 years) were significantly different compared with DEXA (P=0.025; P=0.040 respectively). In addition, independent t tests indicated that the InBody mean %BF in girls aged 10-17 years was significantly different from UW (P=0.001). Pearson's correlation analyses demonstrated that the Bod-eComm (men and women) and Omron (women) had significant mean differences compared with the reference criterion; therefore, the %BF output from these two devices should be interpreted with caution. The repeatability of each body composition device was supported by small CV (<3.0 %).", "title": "" }, { "docid": "3627ee0e7be9c6d664dea1912c0b91d4", "text": "Given a set of texts discussing a particular entity (e.g., customer reviews of a smartphone), aspect based sentiment analysis (ABSA) identifies prominent aspects of the entity (e.g., battery, screen) and an average sentiment score per aspect. We focus on aspect term extraction (ATE), one of the core processing stages of ABSA that extracts terms naming aspects. We make publicly available three new ATE datasets, arguing that they are better than previously available ones. We also introduce new evaluation measures for ATE, again arguing that they are better than previously used ones. Finally, we show how a popular unsupervised ATE method can be improved by using continuous space vector representations of words and phrases.", "title": "" }, { "docid": "fca00f3dc82a45357de1e2082138a589", "text": "Preservation of food and beverages resulting from fermentation has been an effective form of extending the shelf-life of foods for millennia. Traditionally, foods were preserved through naturally occurring fermentations, however, modern large scale production generally now exploits the use of defined strain starter systems to ensure consistency and quality in the final product. This review will mainly focus on the use of lactic acid bacteria (LAB) for food improvement, given their extensive application in a wide range of fermented foods. These microorganisms can produce a wide variety of antagonistic primary and secondary metabolites including organic acids, diacetyl, CO2 and even antibiotics such as reuterocyclin produced by Lactobacillus reuteri. In addition, members of the group can also produce a wide range of bacteriocins, some of which have activity against food pathogens such as Listeria monocytogenes and Clostridium botulinum. Indeed, the bacteriocin nisin has been used as an effective biopreservative in some dairy products for decades, while a number of more recently discovered bacteriocins, such as lacticin 3147, demonstrate increasing potential in a number of food applications. Both of these lactococcal bacteriocins belong to the lantibiotic family of posttranslationally modified bacteriocins that contain lanthionine, beta-methyllanthionine and dehydrated amino acids. The exploitation of such naturally produced antagonists holds tremendous potential for extension of shelf-life and improvement of safety of a variety of foods.", "title": "" }, { "docid": "5208762a8142de095c21824b0a395b52", "text": "Battery storage (BS) systems are static energy conversion units that convert the chemical energy directly into electrical energy. 
They exist in our cars, laptops, electronic appliances, micro electricity generation systems and in many other mobile to stationary power supply systems. The economic advantages, partial sustainability and portability of these units make them promising substitutes for backup power systems for hybrid vehicles and hybrid electricity generation systems. The dynamic behaviour of these systems can be analysed by using mathematical modeling and simulation software programs. Although many mathematical models have been presented in the literature and proved to be successful, dynamic simulation of these systems is still very exhaustive and time consuming, as they do not behave according to specific mathematical models or functions. The battery charging and discharging functions are a combination of exponential and non-linear behaviour. The aim of this research paper is to present a suitable, convenient dynamic battery model that can be used to model a general BS system. The proposed model is a new modified dynamic Lead-Acid battery model considering the effects of temperature and cyclic charging and discharging. Simulink has been used to study the characteristics of the system, and the proposed system has proved to be very successful as the simulation results have been very good. Keywords—Simulink Matlab, Battery Model, Simulation, BS Lead-Acid, Dynamic modeling, Temperature effect, Hybrid Vehicles.", "title": "" }, { "docid": "ff3b0b89e05c7e2cd50c0e29c0e557f7", "text": "This paper compares two types of physical unclonable function (PUF) circuits in terms of reliability: mismatch-based PUFs vs. physical-based PUFs. Most previous PUF circuits utilize device mismatches for generating random responses. Although they have sufficient random features, there is a reliability issue in that some portion of the bits changes over time during operation or under noisy environments. To overcome this issue, we previously proposed the differential amplifier PUF (DA-PUF), which improves the reliability by amplifying the small mismatches of the transistors and rejecting the power supply noise through differential operation. In this paper, we first report the experimental results with the fabricated chips in a 0.35 μm CMOS process. The DA-PUF shows 51.30% uniformity, 50.05% uniqueness, and 0.43% maximum BER. For 0% BER, we proposed the physical-based VIA-PUF, which is based on the probability of physical connection between the electrical layers. From the experimental results with the fabricated chips in a 0.18 μm CMOS process, we found the VIA-PUF has 51.12% uniformity and 49.64% uniqueness, and 0% BER throughout 1,000-time repeated measurements. Especially, we have no bit change after the stress test at 25 and 125 °C for 96 hours.", "title": "" }, { "docid": "e790824ac08ceb82000c3cda024dc329", "text": "Cellulolytic bacteria were isolated from manure wastes (cow dung) and degrading soil (municipal solid waste). Nine bacterial strains were screened for cellulolytic activities. Six strains showed clear zone formation on Berg’s medium. CMC (carboxymethyl cellulose) and cellulose were used as substrates for cellulase activities. Among the six strains, cd3 and mw7 were examined in quantitative measurements determined by the dinitrosalicylic acid (DNS) method. Maximum enzyme-producing activity was 1.702 mg/ml and 1.677 mg/ml for cd3 and mw7 with the 1% CMC substrate. On the other hand, it was 0.563 mg/ml and 0.415 mg/ml for the 1% cellulose substrate, respectively.
Cellulase enzyme-producing activity was also studied by optimizing kinetic growth parameters such as different carbon sources (including various concentrations of cellulose), incubation time, temperature, and pH. The starch substrate showed 0.909 mg/ml and 0.851 mg/ml in enzyme-producing activity. The optimum cellulose substrate concentration was 0.25% for cd3 but 1% for mw7, with reducing sugar formation of 0.628 mg/ml and 0.669 mg/ml. The optimum incubation parameters for cd3 were 84 hours, 40 °C and pH 6. Mw7 had optimum parameters of 60 hours, 40 °C and pH 6.", "title": "" }, { "docid": "861c2fed42d2e2ec53dec8a6e9812bc9", "text": "Materials and methods: An experimental in vivo study was conducted at a dermatology clinic in Riyadh in January 2016. The study included 23 female patients who ranged from 20 to 50 years and were treated with Botox injections due to excessive maxillary gingival display. Patients with short clinical crowns or a long maxilla, those who were pregnant or breastfeeding, and patients with neuromuscular disorders were excluded. Patients received Botox type I, injected 3 mm lateral to the alar-fascial groove at the level of the nostril opening at the insertion of the levator labii superioris alaeque nasi muscle. Photos were taken of the patient’s smile before and after the treatment and were then uploaded to the SketchUp program to calculate improvements in gingival display. The distance from the lower margin of the upper lip to the gingival margin was calculated pre- and post-treatment. The amount of improvement was calculated as ((pre-Botox treatment – post-Botox treatment)/pre-Botox treatment × 100). The mean percentage of the total improvement was analyzed.", "title": "" }, { "docid": "1d98b5bd0c7178b39b7da0e0f9586615", "text": "TDMA has been proposed as a MAC protocol for wireless sensor networks (WSNs) due to its efficiency under high WSN load. However, TDMA is plagued with shortcomings; we present modifications to TDMA that will allow for the same efficiency of TDMA, while allowing the network to conserve energy during times of low load (when there is no activity being detected). Recognizing that aggregation plays an essential role in WSNs, TDMA-ASAP adds to TDMA: (a) transmission parallelism based on a level-by-level localized graph-coloring, (b) appropriate sleeping between transmissions (\"napping\"), (c) judicious and controlled TDMA slot stealing to avoid slots going unused and (d) intelligent scheduling/ordering of transmissions. Our results show that TDMA-ASAP's unique combination of TDMA, slot-stealing, napping, and message aggregation significantly outperforms other hybrid WSN MAC algorithms and has a performance that is close to optimal in terms of energy consumption and overall delay.", "title": "" }, { "docid": "e5e3cbe942723ef8e3524baf56121bf5", "text": "Requirements prioritization is recognized as an important activity in product development. In this paper, we describe the current state of requirements prioritization practices in two case companies and present the practical challenges involved. Our study showed that requirements prioritization is an ambiguous concept and current practices in the companies are informal. Requirements prioritization requires complex context-specific decision-making and must be performed iteratively in many phases during development work.
Practitioners are seeking more systematic ways to prioritize requirements, but they find it difficult to pay attention to all the relevant factors that affect priorities and to draw different stakeholder views together explicitly. In addition, practitioners need more information about real customer preferences.", "title": "" }, { "docid": "2e8d81ba0b09bc657964d20eb17c976c", "text": "The “Internet of things” (IoT) concept is nowadays one of the hottest research trends in any given field, since IoT is about interactions between multiple devices, things, and objects. This interaction opens different directions of enhancement and development in many fields, such as architecture, dependencies, communications, protocols, security, applications and big data. The results will be outstanding and we will be able to reach the desired change and improvements we seek in the fields that affect our lives. The critical goal of the Internet of things (IoT) is to ensure effective communication between objects and build a sustained bond among them using different types of applications. The application layer is responsible for providing services and determines a set of protocols for message passing at the application level. This survey addresses a set of application layer protocols that are being used today for IoT, to affirm a reliable tie among objects and things.", "title": "" }, { "docid": "55c02b425633062f7d6dc6e3a5afff8e", "text": "This review argues for the development of a Positive Clinical Psychology, which has an integrated and equally weighted focus on both positive and negative functioning in all areas of research and practice. Positive characteristics (such as gratitude, flexibility, and positive emotions) can uniquely predict disorder beyond the predictive power of the presence of negative characteristics, and buffer the impact of negative life events, potentially preventing the development of disorder. Increased study of these characteristics can rapidly expand the knowledge base of clinical psychology and utilize the promising new interventions to treat disorder through promoting the positive. Further, positive and negative characteristics cannot logically be studied or changed in isolation as (a) they interact to predict clinical outcomes, (b) characteristics are neither \"positive\" nor \"negative\", with outcomes depending on the specific situation and concomitant goals and motivations, and (c) positive and negative well-being often exist on the same continuum. Responding to criticisms of the Positive Psychology movement, we do not suggest the study of positive functioning as a separate field of clinical psychology, but rather that clinical psychology itself changes to become a more integrative discipline. An agenda for research and practice is proposed including reconceptualizing well-being, forming stronger collaborations with allied disciplines, rigorously evaluating the new positive interventions, and considering a role for clinical psychologists in promoting well-being as well as treating distress.", "title": "" }, { "docid": "0f10aa71d58858ea1d8d7571a7cbfe22", "text": "We study hierarchical classification in the general case when an instance could belong to more than one class node in the underlying taxonomy. Experiments done in previous work showed that a simple hierarchy of Support Vector Machines (SVM) with a top-down evaluation scheme has a surprisingly good performance on this kind of task.
In this paper, we introduce a refined evaluation scheme which turns the hierarchical SVM classifier into an approximator of the Bayes optimal classifier with respect to a simple stochastic model for the labels. Experiments on synthetic datasets, generated according to this stochastic model, show that our refined algorithm outperforms the simple hierarchical SVM. On real-world data, however, the advantage brought by our approach is a bit less clear. We conjecture this is due to a higher noise rate for the training labels in the low levels of the taxonomy.", "title": "" }, { "docid": "8f4c4c2157623bb6e9ed91c84ef57618", "text": "Bitcoin’s innovative and distributedly maintained blockchain data structure hinges on the adequate degree of difficulty of so-called “proofs of work,” which miners have to produce in order for transactions to be inserted. Importantly, these proofs of work have to be hard enough so that miners have an opportunity to unify their views in the presence of an adversary who interferes but has bounded computational power, but easy enough to be solvable regularly and enable the miners to make progress. As such, as the miners’ population evolves over time, so should the difficulty of these proofs. Bitcoin provides this adjustment mechanism, with empirical evidence of a constant block generation rate against such population changes. In this paper we provide the first (to our knowledge) formal analysis of Bitcoin’s target (re)calculation function in the cryptographic setting, i.e., against all possible adversaries aiming to subvert the protocol’s properties. We extend the q-bounded synchronous model of the Bitcoin backbone protocol [Eurocrypt 2015], which posed the basic properties of Bitcoin’s underlying blockchain data structure and shows how a robust public transaction ledger can be built on top of them, to environments that may introduce or suspend parties in each round. We provide a set of necessary conditions with respect to the way the population evolves under which the “Bitcoin backbone with chains of variable difficulty” provides a robust transaction ledger in the presence of an actively malicious adversary controlling a fraction of the miners strictly below 50% at each instant of the execution. Our work introduces new analysis techniques and tools to the area of blockchain systems that may prove useful in analyzing other blockchain protocols. Part of this work was done while the authors were visiting the Simons Institute for the Theory of Computing, supported by the Simons Foundation and by the DIMACS/Simons Collaboration in Cryptography through NSF grant #CNS-1523467. Research partly supported by ERC project CODAMODA, No. 259152, and Horizon 2020 project PANORAMIX, No. 653497.", "title": "" }, { "docid": "151b3f80fe443b8f9b5f17c0531e0679", "text": "Pattern recognition methods using neuroimaging data for the diagnosis of Alzheimer’s disease have been the subject of extensive research in recent years. In this paper, we use deep learning methods, and in particular sparse autoencoders and 3D convolutional neural networks, to build an algorithm that can predict the disease status of a patient, based on an MRI scan of the brain. We report on experiments using the ADNI data set involving 2,265 historical scans. 
We demonstrate that 3D convolutional neural networks outperform several other classifiers reported in the literature and produce state-of-the-art results.", "title": "" }, { "docid": "44985e59d8b169b10a7c56fd31e8b199", "text": "Recently it became a hot topic to protect VMs from a compromised or even malicious hypervisor. However, most previous systems are vulnerable to rollback attacks, since these are hard to distinguish from the normal suspend/resume and migration operations that an IaaS platform usually offers. Some of the previous systems simply disable these features to defend against rollback attacks, while others need heavy user involvement. In this paper, we propose a new solution to strike a balance between security and functionality. By securely logging all the suspend/resume and migration operations inside a small trusted computing base, a user can audit the log to check for malicious rollback and constrain the operations on the VMs. The solution considers several practical issues including hardware limitations and minimizing user interaction, and has been implemented on a recent VM protection system.", "title": "" }, { "docid": "39539ad490065e2a81b6c07dd11643e5", "text": "Stock prices are formed based on short and/or long-term commercial and trading activities that reflect different frequencies of trading patterns. However, these patterns are often elusive as they are affected by many uncertain political-economic factors in the real world, such as corporate performances, government policies, and even breaking news circulated across markets. Moreover, time series of stock prices are non-stationary and non-linear, making the prediction of future price trends very challenging. To address these issues, we propose a novel State Frequency Memory (SFM) recurrent network to capture the multi-frequency trading patterns from past market data to make long and short term predictions over time. Inspired by the Discrete Fourier Transform (DFT), the SFM decomposes the hidden states of memory cells into multiple frequency components, each of which models a particular frequency of latent trading pattern underlying the fluctuation of stock prices. Then the future stock prices are predicted as a nonlinear mapping of the combination of these components in an Inverse Fourier Transform (IFT) fashion. Modeling multi-frequency trading patterns can enable more accurate predictions for various time ranges: while a short-term prediction usually depends on high frequency trading patterns, a long-term prediction should focus more on the low frequency trading patterns targeting long-term return. Unfortunately, no existing model in the literature explicitly distinguishes between various frequencies of trading patterns to make dynamic predictions. The experiments on real market data also demonstrate more competitive performance by the SFM as compared with the state-of-the-art methods.", "title": "" }, { "docid": "f9e857f9eac802b5874b583d0fcf32c0", "text": "This paper examines the enormous pressure Chinese students must bear at home and in school in order to obtain high academic achievement. The authors look at students' lives from their own perspective and study the impact of home and school pressures on students' intellectual, psychological, and physical development. Cultural, political, and economic factors are analyzed to provide an explanation of the situation. The paper raises questions as to what the purpose of education is and argues for the importance of balancing educational goals with other aspects of students' lives.
RÉSUMÉ Cet article s'intéresse aux pressions considérables dont font l'objet les étudiants chinois à la maison et à l'école en vue de réussir sur le plan scolaire. Les auteurs étudient la vie des étudiants à la lumière de leurs propres points de vue de même que l'impact des pressions familiales et scolaires sur leur développement intellectuel, psychologique et physique. Les facteurs culturels, politiques et économiques sont analysés en vue d'expliquer la situation. L'article soulève des questions sur le but de l'éducation et insiste sur l'importance d'équilibrer les objectifs pédagogiques avec les autres aspects de la vie des", "title": "" }, { "docid": "1bb7c5d71db582329ad8e721fdddb0b3", "text": "The sharing economy is spreading rapidly worldwide in a number of industries and markets. The disruptive nature of this phenomenon has drawn mixed responses ranging from active conflict to adoption and assimilation. Yet, in spite of the growing attention to the sharing economy, we still do not know much about it. With the abundant enthusiasm about the benefits that the sharing economy can unleash and the weekly reminders about its dark side, further examination is required to determine the potential of the sharing economy while mitigating its undesirable side effects. The panel will join the ongoing debate about the sharing economy and contribute to the discourse with insights about how digital technologies are critical in shaping this turbulent ecosystem. Furthermore, we will define an agenda for future research on the sharing economy as it becomes part of the mainstream society as well as part of the IS research", "title": "" } ]
scidocsrr
5e737b16de0ad8d1b04abe746ac3d658
A 22nm ±0.95V CMOS OTA-C front-end with 50/60 Hz notch for biomedical signal acquisition
[ { "docid": "73aa720bebc5f2fa1930930fb4185490", "text": "A CMOS OTA-C notch filter for 50Hz interference was presented in this paper. The OTAs were working in weak inversion region in order to achieve ultra low transconductance and power consumptions. The circuits were designed using SMIC mixed-signal 0.18nm 1P6M process. The post-annotated simulation indicated that an attenuation of 47.2dB for power line interference and a 120pW consumption. The design achieved a dynamic range of 75.8dB and a THD of 0.1%, whilst the input signal was a 1 Hz 20mVpp sine wave.", "title": "" } ]
[ { "docid": "ef62b0e14f835a36c3157c1ae0f858e5", "text": "Algorithms based on Convolutional Neural Network (CNN) have recently been applied to object detection applications, greatly improving their performance. However, many devices intended for these algorithms have limited computation resources and strict power consumption constraints, and are not suitable for algorithms designed for GPU workstations. This paper presents a novel method to optimise CNN-based object detection algorithms targeting embedded FPGA platforms. Given parameterised CNN hardware modules, an optimisation flow takes network architectures and resource constraints as input, and tunes hardware parameters with algorithm-specific information to explore the design space and achieve high performance. The evaluation shows that our design model accuracy is above 85% and, with optimised configuration, our design can achieve 49.6 times speed-up compared with software implementation.", "title": "" }, { "docid": "9a46e35fae0b3b7bdbb935b20ca9516b", "text": "Though quite challenging, leveraging large-scale unlabeled or partially labeled data in learning systems (e.g., model/classifier training) has attracted increasing attentions due to its fundamental importance. To address this problem, many active learning (AL) methods have been proposed that employ up-to-date detectors to retrieve representative minority samples according to predefined confidence or uncertainty thresholds. However, these AL methods cause the detectors to ignore the remaining majority samples (i.e., those with low uncertainty or high prediction confidence). In this paper, by developing a principled active sample mining (ASM) framework, we demonstrate that cost-effective mining samples from these unlabeled majority data are a key to train more powerful object detectors while minimizing user effort. Specifically, our ASM framework involves a switchable sample selection mechanism for determining whether an unlabeled sample should be manually annotated via AL or automatically pseudolabeled via a novel self-learning process. The proposed process can be compatible with mini-batch-based training (i.e., using a batch of unlabeled or partially labeled data as a one-time input) for object detection. In this process, the detector, such as a deep neural network, is first applied to the unlabeled samples (i.e., object proposals) to estimate their labels and output the corresponding prediction confidences. Then, our ASM framework is used to select a number of samples and assign pseudolabels to them. These labels are specific to each learning batch based on the confidence levels and additional constraints introduced by the AL process and will be discarded afterward. Then, these temporarily labeled samples are employed for network fine-tuning. In addition, a few samples with low-confidence predictions are selected and annotated via AL. Notably, our method is suitable for object categories that are not seen in the unlabeled data during the learning process. Extensive experiments on two public benchmarks (i.e., the PASCAL VOC 2007/2012 data sets) clearly demonstrate that our ASM framework can achieve performance comparable to that of the alternative methods but with significantly fewer annotations.", "title": "" }, { "docid": "4177fc3fa7c5abe25e4e144e6c079c1f", "text": "A wideband noise-cancelling low-noise amplifier (LNA) without the use of inductors is designed for low-voltage and low-power applications. 
Based on the common-gate-common-source (CG-CS) topology, a new approach employing local negative feedback is introduced between the parallel CG and CS stages. The moderate gain at the source of the cascode transistor in the CS stage is utilized to boost the transconductance of the CG transistor. This leads to an LNA with higher gain and lower noise figure (NF) compared with the conventional CG-CS LNA, particularly under low power and voltage constraints. By adjusting the local open-loop gain, the NF can be optimized by distributing the power consumption among transistors and resistors based on their contribution to the NF. The optimal value of the local open-loop gain can be obtained by taking into account the effect of phase shift at high frequency. The linearity is improved by employing two types of distortion-cancelling techniques. Fabricated in a 0.13-μm RF CMOS process, the LNA achieves a voltage gain of 19 dB and an NF of 2.8-3.4 dB over a 3-dB bandwidth of 0.2-3.8 GHz. It consumes 5.7 mA from a 1-V supply and occupies an active area of only 0.025 mm2.", "title": "" }, { "docid": "10b8aa3bc47a05d2e0eddc83f6922005", "text": "Bluetooth Low Energy (BLE), a low-power wireless protocol, is widely used in industrial automation for monitoring field devices. Although the BLE standard defines advanced security mechanisms, there are known security attacks for BLE and BLE-enabled field devices must be tested thoroughly against these attacks. This article identifies the possible attacks for BLE-enabled field devices relevant for industrial automation. It also presents a framework for defining and executing BLE security attacks and evaluates it on three BLE devices. All tested devices are vulnerable and this confirms that there is a need for better security testing tools as well as for additional defense mechanisms for BLE devices.", "title": "" }, { "docid": "0737e99613b83104bc9390a46fbc4aeb", "text": "Natural language text exhibits hierarchical structure in a variety of respects. Ideally, we could incorporate our prior knowledge of this hierarchical structure into unsupervised learning algorithms that work on text data. Recent work by Nickel and Kiela (2017) proposed using hyperbolic instead of Euclidean embedding spaces to represent hierarchical data and demonstrated encouraging results when embedding graphs. In this work, we extend their method with a re-parameterization technique that allows us to learn hyperbolic embeddings of arbitrarily parameterized objects. We apply this framework to learn word and sentence embeddings in hyperbolic space in an unsupervised manner from text corpora. The resulting embeddings seem to encode certain intuitive notions of hierarchy, such as wordcontext frequency and phrase constituency. However, the implicit continuous hierarchy in the learned hyperbolic space makes interrogating the model’s learned hierarchies more difficult than for models that learn explicit edges between items. The learned hyperbolic embeddings show improvements over Euclidean embeddings in some – but not all – downstream tasks, suggesting that hierarchical organization is more useful for some tasks than others.", "title": "" }, { "docid": "3d0a6b490a80e79690157a9ed690fdcc", "text": "In this paper we introduce a novel Depth-Aware Video Saliency approach to predict human focus of attention when viewing videos that contain a depth map (RGBD) on a 2D screen. 
Saliency estimation in this scenario is highly important since in the near future 3D video content will be easily acquired yet hard to display. Despite considerable progress in 3D display technologies, most are still expensive and require special glasses for viewing, so RGBD content is primarily viewed on 2D screens, removing the depth channel from the final viewing experience. We train a generative convolutional neural network that predicts the 2D viewing saliency map for a given frame using the RGBD pixel values and previous fixation estimates in the video. To evaluate the performance of our approach, we present a new comprehensive database of 2D viewing eye-fixation ground-truth for RGBD videos. Our experiments indicate that it is beneficial to integrate depth into video saliency estimates for content that is viewed on a 2D display. We demonstrate that our approach outperforms state-of-the-art methods for video saliency, achieving 15% relative improvement.", "title": "" }, { "docid": "4cae8749b6d12f38ddf8e4c26bb15b53", "text": "The developments in monitor technology have accelerated in recent years, acquiring a new dimension. The use of liquid crystal display (LCD) and light emitting diode (LED) monitors has rapidly reduced the use of cathode ray tube (CRT) technology in computers and televisions (TVs). As a result, such devices have accumulated as electronic waste and constitute a new problem. Large parts of electronic waste can be recycled for reuse. However, some types of waste, such as CRT TVs and computer monitors, form hazardous waste piles due to the toxic components (lead, barium, strontium) they contain. CRT monitors contain different types of glass constructions and they can therefore be recycled. However, the toxic substances they contain prevent them from being transformed into glass for everyday use. Furthermore, because CRT technology is obsolete, it is not profitable to use CRT as a raw material again. For this reason, poisonous components in glass ceramic structures found in CRT monitors can be confined and used in closed-loop recycling for various sectors.", "title": "" }, { "docid": "0e4722012aeed8dc356aa8c49da8c74f", "text": "The Android software stack for mobile devices defines and enforces its own security model for apps through its application-layer permissions model. However, at its foundation, Android relies upon the Linux kernel to protect the system from malicious or flawed apps and to isolate apps from one another. At present, Android leverages Linux discretionary access control (DAC) to enforce these guarantees, despite the known shortcomings of DAC. In this paper, we motivate and describe our work to bring flexible mandatory access control (MAC) to Android by enabling the effective use of Security Enhanced Linux (SELinux) for kernel-level MAC and by developing a set of middleware MAC extensions to the Android permissions model. We then demonstrate the benefits of our security enhancements for Android through a detailed analysis of how they mitigate a number of previously published exploits and vulnerabilities for Android. Finally, we evaluate the overheads imposed by our security enhancements.", "title": "" }, { "docid": "64bd2fc0d1b41574046340833144dabe", "text": "Probe-based confocal laser endomicroscopy (pCLE) provides high-resolution in vivo imaging for intraoperative tissue characterization. Maintaining a desired contact force between target tissue and the pCLE probe is important for image consistency, allowing large area surveillance to be performed. 
A hand-held instrument that can provide a predetermined contact force to obtain consistent images has been developed. The main components of the instrument include a linear voice coil actuator, a donut load-cell, and a pCLE probe. In this paper, detailed mechanical design of the instrument is presented and system level modeling of closed-loop force control of the actuator is provided. The performance of the instrument has been evaluated in bench tests as well as in hand-held experiments. Results demonstrate that the instrument ensures a consistent predetermined contact force between pCLE probe tip and tissue. Furthermore, it compensates for both simulated physiological movement of the tissue and involuntary movements of the operator's hand. Using pCLE video feature tracking of large colonic crypts within the mucosal surface, the steadiness of the tissue images obtained using the instrument force control is demonstrated by confirming minimal crypt translation.", "title": "" }, { "docid": "b987f831f4174ad5d06882040769b1ac", "text": "Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. 1 Summary Application trends, device technologies and the architecture of systems drive progress in information technologies. However,", "title": "" }, { "docid": "ff5700d97ad00fcfb908d90b56f6033f", "text": "How to design a secure steganography method is the problem that researchers have always been concerned about. Traditionally, the steganography method is designed in a heuristic way which does not take into account the detection side (steganalysis) fully and automatically. In this paper, we propose a new strategy that generates more suitable and secure covers for steganography with adversarial learning scheme, named SSGAN. The proposed architecture has one generative network called G, and two discriminative networks called D and S, among which the former evaluates the visual quality of the generated images for steganography and the latter assesses their suitableness for information hiding. Different from the existing work, we use WGAN instead of GAN for the sake of faster convergence speed, more stable training, and higher quality images, and also re-design the S net with more sophisticated steganalysis network. The experimental results prove the effectiveness of the proposed method.", "title": "" }, { "docid": "2de8df231b5af77cfd141e26fb7a3ace", "text": "A significant challenge for the practical application of reinforcement learning in the real world is the need to specify an oracle reward function that correctly defines a task. Inverse reinforcement learning (IRL) seeks to avoid this challenge by instead inferring a reward function from expert behavior. While appealing, it can be impractically expensive to collect datasets of demonstrations that cover the variation common in the real world (e.g. opening any type of door). Thus in practice, IRL must commonly be performed with only a limited set of demonstrations where it can be exceedingly difficult to unambiguously recover a reward function. In this work, we exploit the insight that demonstrations from other tasks can be used to constrain the set of possible reward functions by learning a “prior” that is specifically optimized for the ability to infer expressive reward functions from limited numbers of demonstrations. 
We demonstrate that our method can efficiently recover rewards from images for novel tasks and provide intuition as to how our approach is analogous to learning a prior.", "title": "" }, { "docid": "c4a74726ac56b0127e5920098e6f0258", "text": "BACKGROUND\nAttention Deficit Hyperactivity disorder (ADHD) is one of the most common and challenging childhood neurobehavioral disorders. ADHD is known to negatively impact children, their families, and their community. About one-third to one-half of patients with ADHD will have persistent symptoms into adulthood. The prevalence in the United States is estimated at 5-11%, representing 6.4 million children nationwide. The variability in the prevalence of ADHD worldwide and within the US may be due to the wide range of factors that affect accurate assessment of children and youth. Because of these obstacles to assessment, ADHD is under-diagnosed, misdiagnosed, and undertreated.\n\n\nOBJECTIVES\nWe examined factors associated with making and receiving the diagnosis of ADHD. We sought to review the consequences of a lack of diagnosis and treatment for ADHD on children's and adolescent's lives and how their families and the community may be involved in these consequences.\n\n\nMETHODS\nWe reviewed scientific articles looking for factors that impact the identification and diagnosis of ADHD and articles that demonstrate naturalistic outcomes of diagnosis and treatment. The data bases PubMed and Google scholar were searched from the year 1995 to 2015 using the search terms \"ADHD, diagnosis, outcomes.\" We then reviewed abstracts and reference lists within those articles to rule out or rule in these or other articles.\n\n\nRESULTS\nMultiple factors have significant impact in the identification and diagnosis of ADHD including parents, healthcare providers, teachers, and aspects of the environment. Only a few studies detailed the impact of not diagnosing ADHD, with unclear consequences independent of treatment. A more significant number of studies have examined the impact of untreated ADHD. The experience around receiving a diagnosis described by individuals with ADHD provides some additional insights.\n\n\nCONCLUSION\nADHD diagnosis is influenced by perceptions of many different members of a child's community. A lack of clear understanding of ADHD and the importance of its diagnosis and treatment still exists among many members of the community including parents, teachers, and healthcare providers. More basic and clinical research will improve methods of diagnosis and information dissemination. Even before further advancements in science, strong partnerships between clinicians and patients with ADHD may be the best way to reduce the negative impacts of this disorder.", "title": "" }, { "docid": "082630a33c0cc0de0e60a549fc57d8e8", "text": "Agricultural monitoring, especially in developing countries, can help prevent famine and support humanitarian efforts. A central challenge is yield estimation, i.e., predicting crop yields before harvest. We introduce a scalable, accurate, and inexpensive method to predict crop yields using publicly available remote sensing data. Our approach improves existing techniques in three ways. First, we forego hand-crafted features traditionally used in the remote sensing community and propose an approach based on modern representation learning ideas. 
We also introduce a novel dimensionality reduction technique that allows us to train a Convolutional Neural Network or Long-short Term Memory network and automatically learn useful features even when labeled training data are scarce. Finally, we incorporate a Gaussian Process component to explicitly model the spatio-temporal structure of the data and further improve accuracy. We evaluate our approach on county-level soybean yield prediction in the U.S. and show that it outperforms competing techniques.", "title": "" }, { "docid": "46a55d7a3349f7228acb226ed7875dc9", "text": "Previous research on driver drowsiness detection has focused primarily on lane deviation metrics and high levels of fatigue. The present research sought to develop a method for detecting driver drowsiness at more moderate levels of fatigue, well before accident risk is imminent. Eighty-seven different driver drowsiness detection metrics proposed in the literature were evaluated in two simulated shift work studies with high-fidelity simulator driving in a controlled laboratory environment. Twenty-nine participants were subjected to a night shift condition, which resulted in moderate levels of fatigue; 12 participants were in a day shift condition, which served as control. Ten simulated work days in the study design each included four 30-min driving sessions, during which participants drove a standardized scenario of rural highways. Ten straight and uneventful road segments in each driving session were designated to extract the 87 different driving metrics being evaluated. The dimensionality of the overall data set across all participants, all driving sessions and all road segments was reduced with principal component analysis, which revealed that there were two dominant dimensions: measures of steering wheel variability and measures of lateral lane position variability. The latter correlated most with an independent measure of fatigue, namely performance on a psychomotor vigilance test administered prior to each drive. We replicated our findings across eight curved road segments used for validation in each driving session. Furthermore, we showed that lateral lane position variability could be derived from measured changes in steering wheel angle through a transfer function, reflecting how steering wheel movements change vehicle heading in accordance with the forces acting on the vehicle and the road. This is important given that traditional video-based lane tracking technology is prone to data loss when lane markers are missing, when weather conditions are bad, or in darkness. Our research findings indicated that steering wheel variability provides a basis for developing a cost-effective and easy-to-install alternative technology for in-vehicle driver drowsiness detection at moderate levels of fatigue.", "title": "" }, { "docid": "34bf7fb014f5b511943526c28407cb4b", "text": "Mobile devices can be maliciously exploited to violate the privacy of people. In most attack scenarios, the adversary takes the local or remote control of the mobile device, by leveraging a vulnerability of the system, hence sending back the collected information to some remote web service. In this paper, we consider a different adversary, who does not interact actively with the mobile device, but he is able to eavesdrop the network traffic of the device from the network side (e.g., controlling a Wi-Fi access point). The fact that the network traffic is often encrypted makes the attack even more challenging. 
In this paper, we investigate to what extent such an external attacker can identify the specific actions that a user is performing on her mobile apps. We design a system that achieves this goal using advanced machine learning techniques. We built a complete implementation of this system, and we also ran a thorough set of experiments, which show that our attack can achieve accuracy and precision higher than 95% for most of the considered actions. We compared our solution with three state-of-the-art algorithms, confirming that our system outperforms all of these direct competitors.", "title": "" }, { "docid": "65eb604a2d45f29923ba24976130adc1", "text": "The recognition of boundaries, e.g., between chorus and verse, is an important task in music structure analysis. The goal is to automatically detect such boundaries in audio signals so that the results are close to human annotation. In this work, we apply Convolutional Neural Networks to the task, trained directly on mel-scaled magnitude spectrograms. On a representative subset of the SALAMI structural annotation dataset, our method outperforms current techniques in terms of boundary retrieval F-measure at different temporal tolerances: We advance the state-of-the-art from 0.33 to 0.46 for tolerances of ±0.5 seconds, and from 0.52 to 0.62 for tolerances of ±3 seconds. As the algorithm is trained on annotated audio data without the need for expert knowledge, we expect it to be easily adaptable to changed annotation guidelines and also to related tasks such as the detection of song transitions.", "title": "" }, { "docid": "6ae9da259125e0173f41fa3506641ca4", "text": "We study the Maximum Weighted Matching problem in a partial information setting where the agents' utilities for being matched to other agents are hidden and the mechanism only has access to ordinal preference information. Our model is motivated by the fact that in many settings, agents cannot express the numerical values of their utility for different outcomes, but are still able to rank the outcomes in their order of preference. Specifically, we study problems where the ground truth exists in the form of a weighted graph, and look to design algorithms that approximate the true optimum matching using only the preference orderings for each agent (induced by the hidden weights) as input. If no restrictions are placed on the weights, then one cannot hope to do better than the simple greedy algorithm, which yields a half optimal matching. Perhaps surprisingly, we show that by imposing a little structure on the weights, we can improve upon the trivial algorithm significantly: we design a 1.6-approximation algorithm for instances where the hidden weights obey the metric inequality. Our algorithm is obtained using a simple but powerful framework that allows us to combine greedy and random techniques in unconventional ways. These results are the first non-trivial ordinal approximation algorithms for such problems, and indicate that we can design robust matchings even when we are agnostic to the precise agent utilities.", "title": "" }, { "docid": "b358f6c5813fa10c76e1f04827a2696e", "text": "Information dispersal addresses the question of storing a file by distributing it among a set of servers in a storage-efficient way. We introduce the problem of verifiable information dispersal in an asynchronous network, where up to one third of the servers as well as an arbitrary number of clients might exhibit Byzantine faults.
Verifiability ensures that the stored information is consistent despite such faults. We present a storage and communication-efficient scheme for asynchronous verifiable information dispersal that achieves an asymptotically optimal storage blow-up. Additionally, we show how to guarantee the secrecy of the stored data with respect to an adversary that may mount adaptive attacks. Our technique also yields a new protocol for asynchronous reliable broadcast that improves the communication complexity by an order of magnitude on large inputs.", "title": "" }, { "docid": "1758a09dd2653145a21eb318a4029b3c", "text": "This work describes our solution in the second edition of the ChaLearn LAP competition on Apparent Age Estimation. Starting from a pretrained version of the VGG-16 convolutional neural network for face recognition, we train it on the huge IMDB-Wiki dataset for biological age estimation and then fine-tune it for apparent age estimation using the relatively small competition dataset. We show that the precise age estimation of children is the cornerstone of the competition. Therefore, we integrate a separate \"children\" VGG-16 network for apparent age estimation of children between 0 and 12 years old in our final solution. The \"children\" network is fine-tuned from the \"general\" one. We employ different age encoding strategies for training \"general\" and \"children\" networks: the soft one (label distribution encoding) for the \"general\" network and the strict one (0/1 classification encoding) for the \"children\" network. Finally, we highlight the importance of the state-of-the-art face detection and face alignment for the final apparent age estimation. Our resulting solution wins the 1st place in the competition significantly outperforming the runner-up.", "title": "" } ]
scidocsrr
3e3245e4472042e11325e56f1119c801
Analyzing the Blogosphere for Predicting the Success of Music and Movie Products
[ { "docid": "e033eddbc92ee813ffcc69724e55aa84", "text": "Over the past few years, weblogs have emerged as a new communication and publication medium on the Internet. In this paper, we describe the application of data mining, information extraction and NLP algorithms for discovering trends across our subset of approximately 100,000 weblogs. We publish daily lists of key persons, key phrases, and key paragraphs to a public web site, BlogPulse.com. In addition, we maintain a searchable index of weblog entries. On top of the search index, we have implemented trend search, which graphs the normalized trend line over time for a search query and provides a way to estimate the relative buzz of word of mouth for given topics over time.", "title": "" } ]
[ { "docid": "55fcc765be689166b0a44eef1a8f26b6", "text": "A key goal of computer vision researchers is to create automated face recognition systems that can equal, and eventually surpass, human performance. To this end, it is imperative that computational researchers know of the key findings from experimental studies of face recognition by humans. These findings provide insights into the nature of cues that the human visual system relies upon for achieving its impressive performance and serve as the building blocks for efforts to artificially emulate these abilities. In this paper, we present what we believe are 19 basic results, with implications for the design of computational systems. Each result is described briefly and appropriate pointers are provided to permit an in-depth study of any particular result", "title": "" }, { "docid": "2c92d42311f9708b7cb40f34551315e0", "text": "This work characterizes electromagnetic excitation forces in interior permanent-magnet (IPM) brushless direct current (BLDC) motors and investigates their effects on noise and vibration. First, the electromagnetic excitations are classified into three sources: 1) so-called cogging torque, for which we propose an efficient technique of computation that takes into account saturation effects as a function of rotor position; 2) ripples of mutual and reluctance torque, for which we develop an equation to characterize the combination of space harmonics of inductances and flux linkages related to permanent magnets and time harmonics of current; and 3) fluctuation of attractive forces in the radial direction between the stator and rotor, for which we analyze contributions of electric currents as well as permanent magnets by the finite-element method. Then, the paper reports on an experimental investigation of influences of structural dynamic characteristics such as natural frequencies and mode shapes, as well as electromagnetic excitation forces, on noise and vibration in an IPM motor used in washing machines.", "title": "" }, { "docid": "cefabe1b4193483d258739674b53f773", "text": "This paper describes design and development of omnidirectional magnetic climbing robots with high maneuverability for inspection of ferromagnetic 3D human made structures. The main focus of this article is design, analysis and implementation of magnetic omnidirectional wheels for climbing robots. We discuss the effect of the associated problems of such wheels, e.g. vibration, on climbing robots. This paper also describes the evolution of magnetic omnidirectional wheels throughout the design and development of several solutions, resulting in lighter and smaller wheels which have less vibration and adapt better to smaller radius structures. These wheels are installed on a chassis which adapts passively to flat and curved structures, enabling the robot to climb and navigate on such structures.", "title": "" }, { "docid": "b3d915b4ff4d86b8c987b760fcf7d525", "text": "We examine how exercising control over a technology platform can increase profits and innovation. Benefits depend on using a platform as a governance mechanism to influence ecosystem parters. 
Results can inform innovation strategy, antitrust and intellectual property law, and management of competition.", "title": "" }, { "docid": "26a599c22c173f061b5d9579f90fd888", "text": "markov logic an interface layer for artificial markov logic an interface layer for artificial shinichi tsukada in size 22 syyjdjbook.buncivy yumina ooba in size 24 ajfy7sbook.ztoroy okimi in size 15 edemembookkey.16mb markov logic an interface layer for artificial intelligent systems (ai-2) ubc computer science interface layer for artificial intelligence daniel lowd essential principles for autonomous robotics markovlogic: aninterfacelayerfor arti?cialintelligence official encyclopaedia of sheffield united football club hot car hot car firext answers || 2007 acura tsx hitch manual course syllabus university of texas at dallas jump frog jump cafebr 1994 chevy silverado 1500 engine ekpbs readings in earth science alongs johnson owners manual pdf firext thomas rescues the diesels cafebr dead sea scrolls and the jewish origins of christianity install gimp help manual by iitsuka asao vox diccionario abreviado english spanis mdmtv nobutaka in size 26 bc13xqbookog.xxuz mechanisms in b cell neoplasia 1992 workshop at the spocks world diane duane nabbit treasury of saints fiores reasoning with probabilistic university of texas at austin gp1300r yamaha waverunner service manua by takisawa tomohide repair manual haier hpr10xc6 air conditioner birdz mexico icons mexico icons oobags asus z53 manual by hatsutori yoshino industrial level measurement by haruyuki morimoto", "title": "" }, { "docid": "1de568efbb57cc4e5d5ffbbfaf8d39ae", "text": "The Insider Threat Study, conducted by the U.S. Secret Service and Carnegie Mellon University’s Software Engineering Institute CERT Program, analyzed insider cyber crimes across U.S. critical infrastructure sectors. The study indicates that management decisions related to organizational and employee performance sometimes yield unintended consequences magnifying risk of insider attack. Lack of tools for understanding insider threat, analyzing risk mitigation alternatives, and communicating results exacerbates the problem. The goal of Carnegie Mellon University’s MERIT (Management and Education of the Risk of Insider Threat) project is to develop such tools. MERIT uses system dynamics to model and analyze insider threats and produce interactive learning environments. These tools can be used by policy makers, security officers, information technology, human resources, and management to understand the problem and assess risk from insiders based on simulations of policies, cultural, technical, and procedural factors. This paper describes the MERIT insider threat model and simulation results.", "title": "" }, { "docid": "013e96c212f7f58698acdae0adfcf374", "text": "Since our ability to engineer biological systems is directly related to our ability to control gene expression, a central focus of synthetic biology has been to develop programmable genetic regulatory systems. Researchers are increasingly turning to RNA regulators for this task because of their versatility, and the emergence of new powerful RNA design principles. Here we review advances that are transforming the way we use RNAs to engineer biological systems. First, we examine new designable RNA mechanisms that are enabling large libraries of regulators with protein-like dynamic ranges. Next, we review emerging applications, from RNA genetic circuits to molecular diagnostics. 
Finally, we describe new experimental and computational tools that promise to accelerate our understanding of RNA folding, function and design.", "title": "" }, { "docid": "a41bb1fe5670cc865bf540b34848f45f", "text": "The general idea of discovering knowledge in large amounts of data is both appealing and intuitive. Typically we focus our attention on learning algorithms, which provide the core capability of generalizing from large numbers of small, very specific facts to useful high-level rules; these learning techniques seem to hold the most excitement and perhaps the most substantive scientific content in the knowledge discovery in databases (KDD) enterprise. However, when we engage in real-world discovery tasks, we find that they can be extremely complex, and that induction of rules is only one small part of the overall process. While others have written overviews of \"the concept of KDD, and even provided block diagrams for \"knowledge discovery systems,\" no one has begun to identify all of the building blocks in a realistic KDD process. This is what we attempt to do here. Besides bringing into the discussion several parts of the process that have received inadequate attention in the KDD community, a careful elucidation of the steps in a realistic knowledge discovery process can provide a framework for comparison of different technologies and tools that are almost impossible to compare without a clean model.", "title": "" }, { "docid": "906ef2b4130ff5c264835ff3c15918e5", "text": "Exploratory big data applications often run on raw unstructured or semi-structured data formats, such as JSON files or text logs. These applications can spend 80–90% of their execution time parsing the data. In this paper, we propose a new approach for reducing this overhead: apply filters on the data’s raw bytestream before parsing. This technique, which we call raw filtering, leverages the features of modern hardware and the high selectivity of queries found in many exploratory applications. With raw filtering, a user-specified query predicate is compiled into a set of filtering primitives called raw filters (RFs). RFs are fast, SIMD-based operators that occasionally yield false positives, but never false negatives. We combine multiple RFs into an RF cascade to decrease the false positive rate and maximize parsing throughput. Because the best RF cascade is datadependent, we propose an optimizer that dynamically selects the combination of RFs with the best expected throughput, achieving within 10% of the global optimum cascade while adding less than 1.2% overhead. We implement these techniques in a system called Sparser, which automatically manages a parsing cascade given a data stream in a supported format (e.g., JSON, Avro, Parquet) and a user query. We show that many real-world applications are highly selective and benefit from Sparser. Across diverse workloads, Sparser accelerates state-of-the-art parsers such as Mison by up to 22× and improves end-to-end application performance by up to 9×. PVLDB Reference Format: S. Palkar, F. Abuzaid, P. Bailis, M. Zaharia. Filter Before You Parse: Faster Analytics on Raw Data with Sparser. PVLDB, 11(11): xxxx-yyyy, 2018. DOI: https://doi.org/10.14778/3236187.3236207", "title": "" }, { "docid": "6cf4994b5ed0e17885f229856b7cd58d", "text": "Recently Neural Architecture Search (NAS) has aroused great interest in both academia and industry, however it remains challenging because of its huge and non-continuous search space. 
Instead of applying an evolutionary algorithm or reinforcement learning as in previous works, this paper proposes a Direct Sparse Optimization NAS (DSO-NAS) method. In DSO-NAS, we provide a novel model-pruning view of the NAS problem. Specifically, we start from a completely connected block, and then introduce scaling factors to scale the information flow between operations. Next, we impose sparse regularizations to prune useless connections in the architecture. Lastly, we derive an efficient and theoretically sound optimization method to solve it. Our method enjoys the advantages of both differentiability and efficiency, and can therefore be directly applied to large datasets like ImageNet. In particular, on the CIFAR-10 dataset, DSO-NAS achieves an average test error of 2.84%, while on the ImageNet dataset DSO-NAS achieves a 25.4% test error under 600M FLOPs with 8 GPUs in 18 hours.", "title": "" }, { "docid": "a74081f7108e62fadb48446255dd246b", "text": "Existing fuzzy neural networks (FNNs) are mostly developed under a shallow network configuration having lower generalization power than that of deep structures. This paper proposes a novel self-organizing deep fuzzy neural network, namely deep evolving fuzzy neural networks (DEVFNN). Fuzzy rules can be automatically extracted from data streams or removed if they play little role during their lifespan. The structure of the network can be deepened on demand by stacking additional layers using a drift detection method which not only detects the covariate drift, variations of input space, but also accurately identifies the real drift, dynamic changes of both feature space and target space. DEVFNN is developed under the stacked generalization principle via the feature augmentation concept where a recently developed algorithm, namely Generic Classifier (gClass), drives the hidden layer. It is equipped with an automatic feature selection method which controls activation and deactivation of input attributes to induce varying subsets of input features. A deep network simplification procedure is put forward using the concept of hidden layer merging to prevent uncontrollable growth of input space dimension due to the nature of the feature augmentation approach in building a deep network structure. DEVFNN works in a sample-wise fashion and is compatible with data stream applications. The efficacy of DEVFNN has been thoroughly evaluated using six datasets with non-stationary properties under the prequential test-then-train protocol. It has been compared with four state-of-the-art data stream methods and its shallow counterpart, where DEVFNN demonstrates an improvement in classification accuracy. Moreover, it is also shown that the concept drift detection method is an effective tool to control the depth of the network structure, while the hidden layer merging scenario is capable of simplifying the network complexity of a deep network with negligible compromise of generalization performance.", "title": "" }, { "docid": "dfa62c69b1ab26e7e160100b69794674", "text": "Canonical correlation analysis (CCA) is a well-established technique for identifying linear relationships between two variable sets. Kernel CCA (KCCA) is the most notable nonlinear extension but it lacks interpretability and robustness against irrelevant features. The aim of this article is to introduce two nonlinear CCA extensions that rely on the recently proposed Hilbert-Schmidt independence criterion and the centered kernel target alignment.
These extensions determine linear projections that provide maximally dependent projected data pairs. The paper demonstrates that the use of linear projections allows irrelevant features to be removed, whilst extracting combinations of strongly associated features. This is exemplified through a simulation and the analysis of recorded data that are available in the literature.", "title": "" }, { "docid": "498a4b526633c06d6eac9aa52ff5e1d2", "text": "This talk surveys three challenge areas for mechanism design and describes the role approximation plays in resolving them. Challenge 1: optimal mechanisms are parameterized by knowledge of the distribution of agents' private types. Challenge 2: optimal mechanisms require precise distributional information. Challenge 3: in multi-dimensional settings, economic analysis has failed to characterize optimal mechanisms. The theory of approximation is well suited to address these challenges. While the optimal mechanism may be parameterized by the distribution of agents' private types, there may be a single mechanism that approximates the optimal mechanism for any distribution. While the optimal mechanism may require precise distributional assumptions, there may be an approximately optimal mechanism that depends only on natural characteristics of the distribution. While the multi-dimensional optimal mechanism may resist precise economic characterization, there may be a simple description of approximately optimal mechanisms. Finally, these approximately optimal mechanisms, because of their simplicity and tractability, may be much more likely to arise in practice, thus making the theory of approximately optimal mechanisms more descriptive than that of (precisely) optimal mechanisms. The talk will cover positive resolutions to these challenges with emphasis on basic techniques, relevance to practice, and future research directions.", "title": "" }, { "docid": "f175e9c17aa38a17253de2663c4999f1", "text": "As we increasingly rely on computers to process and manage our personal data, safeguarding sensitive information from malicious hackers is a fast-growing concern. Among many forms of information leakage, covert timing channels operate by establishing an illegitimate communication channel between two processes and transmitting information via timing modulation, thereby violating the underlying system's security policy. Recent studies have shown the vulnerability of popular computing environments, such as cloud computing, to these covert timing channels. In this work, we propose a new microarchitecture-level framework, CC-Hunter, that detects the possible presence of covert timing channels on shared hardware. Our experiments demonstrate that CC-Hunter is able to successfully detect different types of covert timing channels at varying bandwidths and message patterns.", "title": "" }, { "docid": "1f4ff9d732b3512ee9b105f084edd3d2", "text": "Today, as network environments become more complex and cyber and network threats increase, organizations use a wide variety of security solutions against today's threats. For proper and centralized control and management, a range of security features needs to be integrated into a unified security package. Unified threat management (UTM), as a comprehensive network security solution, integrates all security services such as firewall, URL filtering, virtual private networking, etc. in a single appliance. PfSense is a variant of UTM, and a customized FreeBSD (Unix-like operating system). It is especially used as a router and stateful firewall.
It has many packages that extend its capabilities, such as the Squid3 package, a proxy server that caches data, and SquidGuard, a redirector and access controller plugin for the Squid3 proxy server. In this paper, implementing a UTM based on the PfSense platform, we use the Squid3 proxy server and the SquidGuard proxy filter to avoid an extreme amount of unwanted uploading/downloading over the internet by users, in order to optimize our organization's bandwidth consumption. We begin by defining UTM and its types and the PfSense platform with its key services, and introduce a simple and operational solution for security stability and cost reduction. Finally, results and statistics derived from this approach are compared with the prior condition without the PfSense platform.", "title": "" }, { "docid": "074d4a552c82511d942a58b93d51c38a", "text": "This is a survey of neural network applications in real-world scenarios. It provides a taxonomy of artificial neural networks (ANNs) and furnishes the reader with knowledge of current and emerging trends in ANN applications research and areas of focus for researchers. Additionally, the study presents ANN application challenges and contributions, compares performances, and critiques methods. The study covers many applications of ANN techniques in various disciplines which include computing, science, engineering, medicine, environmental, agriculture, mining, technology, climate, business, arts, and nanotechnology, etc. The study assesses ANN contributions, compares performances, and critiques methods. The study found that neural-network models such as feedforward and feedback propagation artificial neural networks perform better in their application to human problems. Therefore, we proposed feedforward and feedback propagation ANN models for research focus based on data analysis factors like accuracy, processing speed, latency, fault tolerance, volume, scalability, convergence, and performance. Moreover, we recommend that instead of applying a single method, future research can focus on combining ANN models into one network-wide application.", "title": "" }, { "docid": "ec5d4c571f8cd85bf94784199ab10884", "text": "Researchers have shown that a wordnet for a new language, possibly resource-poor, can be constructed automatically by translating wordnets of resource-rich languages. The quality of these constructed wordnets is affected by the quality of the resources used, such as dictionaries and translation methods, in the construction process. Recent work shows that vector representations of words (word embeddings) can be used to discover related words in text. In this paper, we propose a method that performs such similarity computation using word embeddings to improve the quality of automatically constructed wordnets.", "title": "" }, { "docid": "6773b060fd16b6630f581eb65c5c6488", "text": "Proximity detection is one of the most common location-based applications in daily life, used when users intend to find friends who come into their proximity. Studies on protecting users' private information during the detection process have received wide attention. In this paper, we first provide a theoretical and experimental analysis of existing solutions for proximity detection, and then demonstrate that these solutions either provide weak privacy preservation or result in high communication and computational complexity. Accordingly, a location difference-based proximity detection protocol is proposed based on the Paillier cryptosystem for the purpose of dealing with the above shortcomings.
The analysis results through an extensive simulation illustrate that our protocol outperforms traditional protocols in terms of communication and computation cost.", "title": "" }, { "docid": "3e28cbfc53f6c42bb0de2baf5c1544aa", "text": "Cloud computing is an emerging paradigm which allows the on-demand delivering of software, hardware, and data as services. As cloud-based services are more numerous and dynamic, the development of efficient service provisioning policies become increasingly challenging. Game theoretic approaches have shown to gain a thorough analytical understanding of the service provisioning problem.\n In this paper we take the perspective of Software as a Service (SaaS) providers which host their applications at an Infrastructure as a Service (IaaS) provider. Each SaaS needs to comply with quality of service requirements, specified in Service Level Agreement (SLA) contracts with the end-users, which determine the revenues and penalties on the basis of the achieved performance level. SaaS providers want to maximize their revenues from SLAs, while minimizing the cost of use of resources supplied by the IaaS provider. Moreover, SaaS providers compete and bid for the use of infrastructural resources. On the other hand, the IaaS wants to maximize the revenues obtained providing virtualized resources. In this paper we model the service provisioning problem as a Generalized Nash game, and we propose an efficient algorithm for the run time management and allocation of IaaS resources to competing SaaSs.", "title": "" }, { "docid": "d67e0fa20185e248a18277e381c9d42d", "text": "Smartphone security research has produced many useful tools to analyze the privacy-related behaviors of mobile apps. However, these automated tools cannot assess people's perceptions of whether a given action is legitimate, or how that action makes them feel with respect to privacy. For example, automated tools might detect that a blackjack game and a map app both use one's location information, but people would likely view the map's use of that data as more legitimate than the game. Our work introduces a new model for privacy, namely privacy as expectations. We report on the results of using crowdsourcing to capture users' expectations of what sensitive resources mobile apps use. We also report on a new privacy summary interface that prioritizes and highlights places where mobile apps break people's expectations. We conclude with a discussion of implications for employing crowdsourcing as a privacy evaluation technique.", "title": "" } ]
scidocsrr
6027fd277b7dc39ad47d1727c4b1f443
Study on the MFCC similarity-based voice activity detection algorithm
[ { "docid": "a24e18e7cf76420be77188435dd8542e", "text": "In this paper, we present a robust algorithm for audio classification that is capable of segmenting and classifying an audio stream into speech, music, environment sound and silence. Audio classification is processed in two steps, which makes it suitable for different applications. The first step of the classification is speech and non-speech discrimination. In this step, a novel algorithm based on KNN and LSP VQ is presented. The second step further divides non-speech class into music, environment sounds and silence with a rule based classification scheme. Some new features such as the noise frame ratio and band periodicity are introduced and discussed in detail. Our experiments in the context of video structure parsing have shown the algorithms produce very satisfactory results.", "title": "" } ]
[ { "docid": "f79e5a2b19bb51e8dc0017342a153fee", "text": "Decentralized ledger-based cryptocurrencies like Bitcoin present a way to construct payment systems without trusted banks. However, the anonymity of Bitcoin is fragile. Many altcoins and protocols are designed to improve Bitcoin on this issue, among which Zerocash is the first fullfledged anonymous ledger-based currency, using zero-knowledge proof, specifically zk-SNARK, to protect privacy. However, Zerocash suffers two problems: poor scalability and low efficiency. In this paper, we address the above issues by constructing a micropayment system in Zerocash called Z-Channel. First, we improve Zerocash to support multisignature and time lock functionalities, and prove that the reconstructed scheme is secure. Then we construct Z-Channel based on the improved Zerocash scheme. Our experiments demonstrate that Z-Channel significantly improves the scalability and reduces the confirmation time for Zerocash payments.", "title": "" }, { "docid": "5b8cb0c530daef4e267a8572349f1118", "text": "I enjoy doing research in Computer Security and Software Engineering and specifically in mobile security and adversarial machine learning. A primary goal of my research is to build adversarial-resilient intelligent security systems. I have been developing such security systems for the mobile device ecosystem that serves billions of users, millions of apps, and hundreds of thousands of app developers. For an ecosystem of this magnitude, manual inspection or rule-based security systems are costly and error-prone. There is a strong need for intelligent security systems that can learn from experiences, solve problems, and use knowledge to adapt to new situations. However, achieving intelligence in security systems is challenging. In the cat-and-mouse game between security analysts and adversaries, the intelligence of adversaries also increases. In this never-ending game, the adversaries continuously evolve their attacks to be specifically adversarial to newly proposed intelligent security techniques. To address this challenge, I have been pursuing two lines of research: (1) enhancing intelligence of existing security systems to automate the security-decision making by techniques such as program analysis [11, 8, 10, 6, U6] , natural language processing (NLP) [9, 7, U7, 1] , and machine learning [8, 4, 3, 2] ; (2) guarding against emerging attacks specifically adversarial to these newly-proposed intelligent security techniques by developing corresponding defenses [13, U1, U2] and testing methodologies [12, 5] . Throughout these research efforts, my general research methodology is to extract insightful data for security systems (through program analysis and NLP techniques), to enable intelligent decision making in security systems (through machine learning techniques that learn from the extracted data), and to strengthen robustness of the security systems by generating adversarial-testing inputs to check these intelligent security techniques and building defense to prevent the adversarial attacks. With this methodology, my research has derived solutions that have high impact on real-world systems. For instance, my work on analysis and testing of mobile applications (apps) [11, 10] in collaboration with Tencent Ltd. has been deployed and adopted in daily testing of a mobile app named WeChat, a popular messenger app with over 900 million monthly active users. 
A number of tools grown out of my research have been adopted by companies such as Fujitsu [P1, P2, 13, 6] , Samsung [12, 5] , and IBM.", "title": "" }, { "docid": "82e95fce635dd86f593c57fd74ffd357", "text": "We introduce a gcd-sum function involving regular integers (mod n) and prove results giving its minimal order, maximal order and average order.", "title": "" }, { "docid": "c7db2522973605850cb97e55b32100a1", "text": "Purpose – Based on stimulus-organism-response model, the purpose of this paper is to develop an integrated model to explore the effects of six marketing-mix components (stimuli) on consumer loyalty (response) through consumer value (organism) in social commerce (SC). Design/methodology/approach – In order to target online social buyers, a web-based survey was employed. Structural equation modeling with partial least squares (PLS) is used to analyze valid data from 599 consumers who have repurchase experience via Facebook. Findings – The results from PLS analysis show that all components of SC marketing mix (SCMM) have significant effects on SC consumer value. Moreover, SC customer value positively influences SC customer loyalty (CL). Research limitations/implications – The data for this study are collected from Facebook only and the sample size is limited; thus, replication studies are needed to improve generalizability and data representativeness of the study. Moreover, longitudinal studies are needed to verify the causality among the constructs in the proposed research model. Practical implications – SC sellers should implement more effective SCMM strategies to foster SC CL through better SCMM decisions. Social implications – The SCMM components represent the collective benefits of social interaction, exemplifying the importance of effective communication and interaction among SC customers. Originality/value – This study develops a parsimonious model to explain the over-arching effects of SCMM components on CL in SC mediated by customer value. It confirms that utilitarian, hedonic, and social values can be applied to online SC and that SCMM can be leveraged to achieve these values.", "title": "" }, { "docid": "61d29b80bcea073665f454444a3b0262", "text": "Nitric oxide (NO) is the principal mediator of penile erection. NO is synthesized by nitric oxide synthase (NOS). It has been well documented that the major causative factor contributing to erectile dysfunction in diabetic patients is the reduction in the amount of NO synthesis in the corpora cavernosa of the penis resulting in alterations of normal penile homeostasis. Arginase is an enzyme that shares a common substrate with NOS, thus arginase may downregulate NO production by competing with NOS for this substrate, l-arginine. The purpose of the present study was to compare arginase gene expression, protein levels, and enzyme activity in diabetic human cavernosal tissue. When compared to normal human cavernosal tissue, diabetic corpus cavernosum from humans with erectile dysfunction had higher levels of arginase II protein, gene expression, and enzyme activity. In contrast, gene expression and protein levels of arginase I were not significantly different in diabetic cavernosal tissue when compared to control tissue. The reduced ability of diabetic tissue to convert l-arginine to l-citrulline via nitric oxide synthase was reversed by the selective inhibition of arginase by 2(S)-amino-6-boronohexanoic acid (ABH). 
These data suggest that the increased expression of arginase II in diabetic cavernosal tissue may contribute to the erectile dysfunction associated with this common disease process and may play a role in other manifestations of diabetic disease in which nitric oxide production is decreased.", "title": "" }, { "docid": "2272325860332d5d41c02f317ab2389e", "text": "For a developing nation, deploying big data (BD) technology and introducing data science in higher education is a challenge. A pessimistic scenario is: Mis-use of data in many possible ways, waste of trained manpower, poor BD certifications from institutes, under-utilization of resources, disgruntled management staff, unhealthy competition in the market, poor integration with existing technical infrastructures. Also, the questions in the minds of students, scientists, engineers, teachers and managers deserve wider attention. Besides the stated perceptions and analyses perhaps ignoring socio-political and scientific temperaments in developing nations, the following questions arise: How did the BD phenomenon naturally occur, post technological developments in Computer and Communications Technology and how did different experts react to it? Are academicians elsewhere agreeing on the fact that BD is a new science? Granted that big data science is a new science what are its foundations as compared to conventional topics in Physics, Chemistry or Biology? Or, is it similar in an esoteric sense to astronomy or nuclear science? What are the technological and engineering implications locally and globally and how these can be advantageously used to augment business intelligence, for example? In other words, will the industry adopt the changes due to tactical advantages? How can BD success stories be faithfully carried over elsewhere? How will BD affect the Computer Science and other curricula? How will BD benefit different segments of our society on a large scale? To answer these, an appreciation of the BD as a science and as a technology is necessary. This paper presents a quick BD overview, relying on the contemporary literature; it addresses: characterizations of BD and the BD people, the background required for the students and teachers to join the BD bandwagon, the management challenges in embracing BD so that the bottomline is clear.", "title": "" }, { "docid": "476c85c8325b1781586646625a313cd1", "text": "This paper describes a data driven approach to studying the science of cyber security (SoS). It argues that science is driven by data. It then describes issues and approaches towards the following three aspects: (i) Data Driven Science for Attack Detection and Mitigation, (ii) Foundations for Data Trustworthiness and Policy-based Sharing, and (iii) A Risk-based Approach to Security Metrics. We believe that the three aspects addressed in this paper will form the basis for studying the Science of Cyber Security.", "title": "" }, { "docid": "e4f4fe27fff75bd7ed079f3094deaedb", "text": "This paper considers the scenario that multiple data owners wish to apply a machine learning method over the combined dataset of all owners to obtain the best possible learning output but do not want to share the local datasets owing to privacy concerns. We design systems for the scenario that the stochastic gradient descent (SGD) algorithm is used as the machine learning method because SGD (or its variants) is at the heart of recent deep learning techniques over neural networks. 
Our systems differ from existing systems in the following features: (1) any activation function can be used, meaning that no privacy-preserving-friendly approximation is required; (2) gradients computed by SGD are not shared but the weight parameters are shared instead; and (3) robustness against colluding parties even in the extreme case that only one honest party exists. We prove that our systems, while privacy-preserving, achieve the same learning accuracy as SGD and hence retain the merit of deep learning with respect to accuracy. Finally, we conduct several experiments using benchmark datasets, and show that our systems outperform previous system in terms of learning accuracies. keywords: privacy preservation, stochastic gradient descent, distributed trainers, neural networks.", "title": "" }, { "docid": "31d14e88b7c1aa953c1efac75da26d24", "text": "This session will focus on ways that Python is being used to successfully facilitate introductory computer science courses. After a brief introduction, we will present three different models for CS1 and CS2 using Python. Attendees will then participate in a discussion/question-answer session considering the advantages and challenges of using Python in the introductory courses. The presenters will focus on common issues, both positive and negative, that have arisen from the inclusion of Python in the introductory computer science curriculum as well as the impact that this can have on the entire computer science curriculum.", "title": "" }, { "docid": "211cf327b65cbd89cf635bbeb5fa9552", "text": "BACKGROUND\nAdvanced mobile communications and portable computation are now combined in handheld devices called \"smartphones\", which are also capable of running third-party software. The number of smartphone users is growing rapidly, including among healthcare professionals. The purpose of this study was to classify smartphone-based healthcare technologies as discussed in academic literature according to their functionalities, and summarize articles in each category.\n\n\nMETHODS\nIn April 2011, MEDLINE was searched to identify articles that discussed the design, development, evaluation, or use of smartphone-based software for healthcare professionals, medical or nursing students, or patients. A total of 55 articles discussing 83 applications were selected for this study from 2,894 articles initially obtained from the MEDLINE searches.\n\n\nRESULTS\nA total of 83 applications were documented: 57 applications for healthcare professionals focusing on disease diagnosis (21), drug reference (6), medical calculators (8), literature search (6), clinical communication (3), Hospital Information System (HIS) client applications (4), medical training (2) and general healthcare applications (7); 11 applications for medical or nursing students focusing on medical education; and 15 applications for patients focusing on disease management with chronic illness (6), ENT-related (4), fall-related (3), and two other conditions (2). The disease diagnosis, drug reference, and medical calculator applications were reported as most useful by healthcare professionals and medical or nursing students.\n\n\nCONCLUSIONS\nMany medical applications for smartphones have been developed and widely used by health professionals and patients. The use of smartphones is getting more attention in healthcare day by day. 
Medical applications make smartphones useful tools in the practice of evidence-based medicine at the point of care, in addition to their use in mobile clinical communication. Also, smartphones can play a very important role in patient education, disease self-management, and remote monitoring of patients.", "title": "" }, { "docid": "130ce0eb9bdd6de16f3c3249cfd56890", "text": "Malicious URLs host unsolicited content and are used to perpetrate cybercrimes. It is imperative to detect them in a timely manner. Traditionally, this is done through the usage of blacklists, which cannot be exhaustive, and cannot detect newly generated malicious URLs. To address this, recent years have witnessed several efforts to perform Malicious URL Detection using Machine Learning. The most popular and scalable approaches use lexical properties of the URL string by extracting Bag-of-words like features, followed by applying machine learning models such as SVMs. There are also other features designed by experts to improve the prediction performance of the model. These approaches suffer from several limitations: (i) Inability to effectively capture semantic meaning and sequential patterns in URL strings; (ii) Requiring substantial manual feature engineering; and (iii) Inability to handle unseen features and generalize to test data. To address these challenges, we propose URLNet, an end-to-end deep learning framework to learn a nonlinear URL embedding for Malicious URL Detection directly from the URL. Specifically, we apply Convolutional Neural Networks to both characters and words of the URL String to learn the URL embedding in a jointly optimized framework. This approach allows the model to capture several types of semantic information, which was not possible by the existing models. We also propose advanced word-embeddings to solve the problem of too many rare words observed in this task. We conduct extensive experiments on a large-scale dataset and show a significant performance gain over existing methods. We also conduct ablation studies to evaluate the performance of various components of URLNet.", "title": "" }, { "docid": "76c0ffd6c6ca1c5e5e04e67f6d061b46", "text": "The minimum-energy translational trajectory planning algorithm is proposed for battery-powered three-wheeled omni-directional mobile robots (TOMRs). We have chosen a practical cost function as the total energy drawn from the batteries, in order to lengthen the operational time of a mobile robot with given batteries. After establishing the dynamic equations of TOMRs, the optimal control theory is used to solve the minimum-energy trajectory, which gives the velocity profile in analytic form. Various simulations are performed and the consumed energy is compared to other velocity trajectories. Simulation results reveal that the energy saving is achieved of up to 2.4% compared with loss-minimization control, and up to 4.3% compared with conventional trapezoidal velocity profile.", "title": "" }, { "docid": "4445f128f31d6f42750049002cb86a29", "text": "Convolutional neural networks are a popular choice for current object detection and classification systems. Their performance improves constantly but for effective training, large, hand-labeled datasets are required. We address the problem of obtaining customized, yet large enough datasets for CNN training by synthesizing them in a virtual world, thus eliminating the need for tedious human interaction for ground truth creation. 
We developed a CNN-based multi-class detection system that was trained solely on virtual world data and achieves competitive results compared to state-of-the-art detection systems.", "title": "" }, { "docid": "31bf58e44a2c6747a79fc4bb549e1465", "text": "Today's WiFi access points (APs) are ubiquitous, and provide critical connectivity for a wide range of mobile networking devices. Many management tasks, e.g. optimizing AP placement and detecting rogue APs, require a user to efficiently determine the location of wireless APs. Unlike prior localization techniques that require either specialized equipment or extensive outdoor measurements, we propose a way to locate APs in real-time using commodity smartphones. Our insight is that by rotating a wireless receiver (smartphone) around a signal-blocking obstacle (the user's body), we can effectively emulate the sensitivity and functionality of a directional antenna. Our measurements show that we can detect these signal strength artifacts on multiple smartphone platforms for a variety of outdoor environments. We develop a model for detecting signal dips caused by blocking obstacles, and use it to produce a directional analysis technique that accurately predicts the direction of the AP, along with an associated confidence value. The result is Borealis, a system that provides accurate directional guidance and leads users to a desired AP after a few measurements. Detailed measurements show that Borealis is significantly more accurate than other real-time localization systems, and is nearly as accurate as offline approaches using extensive wireless measurements.", "title": "" }, { "docid": "4790a2dfcdf74d5c9ae5ae8c9f42eb0b", "text": "Inspired by the success of deploying deep learning in the fields of Computer Vision and Natural Language Processing, this learning paradigm has also found its way into the field of Music Information Retrieval. In order to benefit from deep learning in an effective, but also efficient manner, deep transfer learning has become a common approach. In this approach, it is possible to reuse the output of a pre-trained neural network as the basis for a new learning task. The underlying hypothesis is that if the initial and new learning tasks show commonalities and are applied to the same type of input data (e.g., music audio), the generated deep representation of the data is also informative for the new task. Since, however, most of the networks used to generate deep representations are trained using a single initial learning source, their representation is unlikely to be informative for all possible future tasks. In this paper, we present the results of our investigation of what are the most important factors to generate deep representations for the data and learning tasks in the music domain. We conducted this investigation via an extensive empirical study that involves multiple learning sources, as well as multiple deep learning architectures with varying levels of information sharing between sources, in order to learn music representations. We then validate these representations considering multiple target datasets for evaluation. 
The results of our experiments yield several insights into how to approach the design of methods for learning widely deployable deep data representations in the music domain.", "title": "" }, { "docid": "7ba37f2dcf95f36727e1cd0f06e31cc0", "text": "The neonate receiving parenteral nutrition (PN) therapy requires a physiologically appropriate solution in quantity and quality given according to a timely, cost-effective strategy. Maintaining tissue integrity, metabolism, and growth in a neonate is challenging. To support infant growth and influence subsequent development requires critical timing for nutrition assessment and intervention. Providing amino acids to neonates has been shown to improve nitrogen balance, glucose metabolism, and amino acid profiles. In contrast, supplying the lipid emulsions (currently available in the United States) to provide essential fatty acids is not the optimal composition to help attenuate inflammation. Recent investigations with an omega-3 fish oil IV emulsion are promising, but there is need for further research and development. Complications from PN, however, remain problematic and include infection, hepatic dysfunction, and cholestasis. These complications in the neonate can affect morbidity and mortality, thus emphasizing the preference to provide early enteral feedings, as well as medication therapy to improve liver health and outcome. Potential strategies aimed at enhancing PN therapy in the neonate are highlighted in this review, and a summary of guidelines for practical management is included.", "title": "" }, { "docid": "589396a7c9dae0567f0bcd4d83461a6f", "text": "The risk of inadequate hand hygiene in food handling settings is exacerbated when water is limited or unavailable, thereby making washing with soap and water difficult. The SaniTwice method involves application of excess alcohol-based hand sanitizer (ABHS), hand \"washing\" for 15 s, and thorough cleaning with paper towels while hands are still wet, followed by a standard application of ABHS. This study investigated the effectiveness of the SaniTwice methodology as an alternative to hand washing for cleaning and removal of microorganisms. On hands moderately soiled with beef broth containing Escherichia coli (ATCC 11229), washing with a nonantimicrobial hand washing product achieved a 2.86 (±0.64)-log reduction in microbial contamination compared with the baseline, whereas the SaniTwice method with 62 % ethanol (EtOH) gel, 62 % EtOH foam, and 70 % EtOH advanced formula gel achieved reductions of 2.64 ± 0.89, 3.64 ± 0.57, and 4.61 ± 0.33 log units, respectively. When hands were heavily soiled from handling raw hamburger containing E. coli, washing with nonantimicrobial hand washing product and antimicrobial hand washing product achieved reductions of 2.65 ± 0.33 and 2.69 ± 0.32 log units, respectively, whereas SaniTwice with 62 % EtOH foam, 70 % EtOH gel, and 70 % EtOH advanced formula gel achieved reductions of 2.87 ± 0.42, 2.99 ± 0.51, and 3.92 ± 0.65 log units, respectively. These results clearly demonstrate that the in vivo antibacterial efficacy of the SaniTwice regimen with various ABHS is equivalent to or exceeds that of the standard hand washing approach as specified in the U.S. Food and Drug Administration Food Code. 
Implementation of the SaniTwice regimen in food handling settings with limited water availability should significantly reduce the risk of foodborne infections resulting from inadequate hand hygiene.", "title": "" }, { "docid": "6844deb3346756b1858778a4cec26098", "text": "Deep Learning has recently been introduced as a new alternative to perform Side-Channel analysis [1]. Until now, studies have been focused on applying Deep Learning techniques to perform Profiled SideChannel attacks where an attacker has a full control of a profiling device and is able to collect a large amount of traces for different key values in order to characterize the device leakage prior to the attack. In this paper we introduce a new method to apply Deep Learning techniques in a Non-Profiled context, where an attacker can only collect a limited number of side-channel traces for a fixed unknown key value from a closed device. We show that by combining key guesses with observations of Deep Learning metrics, it is possible to recover information about the secret key. The main interest of this method, is that it is possible to use the power of Deep Learning and Neural Networks in a Non-Profiled scenario. We show that it is possible to exploit the translation-invariance property of Convolutional Neural Networks [2] against de-synchronized traces and use Data Augmentation techniques also during Non-Profiled side-channel attacks. Additionally, the present work shows that in some conditions, this method can outperform classic Non-Profiled attacks as Correlation Power Analysis. We also highlight that it is possible to target masked implementations without leakages combination pre-preprocessing and with less assumptions than classic high-order attacks. To illustrate these properties, we present a series of experiments performed on simulated data and real traces collected from the ChipWhisperer board and from the ASCAD database [3]. The results of our experiments demonstrate the interests of this new method and show that this attack can be performed in practice.", "title": "" }, { "docid": "510504cec355ec68a92fad8f10527beb", "text": "This paper presents a 1.2V/2.5V tolerant I/O buffer design with only thin gate-oxide devices. The novel floating N-well and gate-tracking circuits in mixed-voltage I/O buffer are proposed to overcome the problem of leakage current, which will occur in the conventional CMOS I/O buffer when using in the mixedvoltage I/O interfaces. The new proposed 1.2V/2.5V tolerant I/O buffer design has been successfully verified in a 0.13-μm salicided CMOS process, which can be also applied in other CMOS processes to serve different mixed-voltage I/O interfaces.", "title": "" }, { "docid": "170cd125882865150428b521d6220929", "text": "In this paper, we propose a novel approach for action classification in soccer videos using a recurrent neural network scheme. Thereby, we extract from each video action at each timestep a set of features which describe both the visual content (by the mean of a BoW approach) and the dominant motion (with a key point based approach). A Long Short-Term Memory-based Recurrent Neural Network is then trained to classify each video sequence considering the temporal evolution of the features for each timestep. 
Experimental results on the MICC-Soccer-Actions-4 database show that the proposed approach outperforms classification methods of related works (with a classification rate of 77 %), and that the combination of the two features (BoW and dominant motion) leads to a classification rate of 92 %.", "title": "" } ]
scidocsrr
fcefc579d2dc466c358a72842a49889a
Three-Dimensional Face Recognition in the Presence of Facial Expressions: An Annotated Deformable Model Approach
[ { "docid": "ee97c467539a3e08cd3cfe7a8f7ee3e2", "text": "The problem of geometric alignment of two roughly pre-registered, partially overlapping, rigid, noisy 3D point sets is considered. A new natural and simple, robustified extension of the popular Iterative Closest Point (ICP) algorithm [1] is presented, called Trimmed ICP. The new algorithm is based on the consistent use of the Least Trimmed Squares approach in all phases of the operation. Convergence is proved and an efficient implementation is discussed. TrICP is fast, applicable to overlaps under 50%, robust to erroneous and incomplete measurements, and has easy-to-set parameters. ICP is a special case of TrICP when the overlap parameter is 100%. Results of a performance evaluation study on the SQUID database of 1100 shapes are presented. The tests compare TrICP and the Iterative Closest Reciprocal Point algorithm [2].", "title": "" } ]
[ { "docid": "97444c5b944beb30697dfad626a5b5a4", "text": "While eye tracking is becoming more and more relevant as a promising input channel, diverse applications using gaze control in a more natural way are still rather limited. Though several researchers have indicated the particular high potential of gaze-based interaction for pointing tasks, often gaze-only approaches are investigated. However, time-consuming dwell-time activations limit this potential. To overcome this, we present a gaze-supported fisheye lens in combination with (1) a keyboard and (2) and a tilt-sensitive mobile multi-touch device. In a user-centered design approach, we elicited how users would use the aforementioned input combinations. Based on the received feedback we designed a prototype system for the interaction with a remote display using gaze and a touch-and-tilt device. This eliminates gaze dwell-time activations and the well-known Midas Touch problem (unintentionally issuing an action via gaze). A formative user study testing our prototype provided further insights into how well the elaborated gaze-supported interaction techniques were experienced by users.", "title": "" }, { "docid": "4e37f91af78d1c275bcf69685ebde914", "text": "OBJECTIVES\nThis narrative literature review aims to consider the impact of removable partial dentures (RPDs) on oral and systemic health.\n\n\nDATA AND SOURCES\nA review of the literature was performed using Medline/PubMed database resources up to July 2011 to identify appropriate articles that addressed the objectives of this review. This was followed by extensive hand searching using reference lists from relevant articles.\n\n\nCONCLUSIONS\nThe proportion of partially dentate adults who wear RPDs is increasing in many populations. A major public health challenge is to plan oral healthcare for this group of patients in whom avoidance of further tooth loss is of particular importance. RPDs have the potential to negatively impact on different aspects of oral health. There is clear evidence that RPDs increase plaque and gingivitis. However, RPDs have not clearly been shown to increase the risk for periodontitis. The risk for caries, particularly root caries, appears to be higher in wearers of RPDs. Regular recall is therefore essential to minimise the risk for dental caries, as well as periodontitis. There is no evidence to support a negative impact on nutritional status, though research in this area is particularly deficient. Furthermore, there are very few studies that have investigated whether RPDs have any impact on general health. From the limited literature available, it appears that RPDs can possibly improve quality of life, and this is relevant in the era of patient-centred care. Overall, further research is required to investigate the impact of RPDs on all aspects of oral and general health, nutritional status and quality of life.", "title": "" }, { "docid": "65d938eee5da61f27510b334312afe41", "text": "This paper reviews the actual and potential use of social media in emergency, disaster and crisis situations. This is a field that has generated intense interest. It is characterised by a burgeoning but small and very recent literature. 
In the emergencies field, social media (blogs, messaging, sites such as Facebook, wikis and so on) are used in seven different ways: listening to public debate, monitoring situations, extending emergency response and management, crowd-sourcing and collaborative development, creating social cohesion, furthering causes (including charitable donation) and enhancing research. Appreciation of the positive side of social media is balanced by their potential for negative developments, such as disseminating rumours, undermining authority and promoting terrorist acts. This leads to an examination of the ethics of social media usage in crisis situations. Despite some clearly identifiable risks, for example regarding the violation of privacy, it appears that public consensus on ethics will tend to override unscrupulous attempts to subvert the media. Moreover, social media are a robust means of exposing corruption and malpractice. In synthesis, the widespread adoption and use of social media by members of the public throughout the world heralds a new age in which it is imperative that emergency managers adapt their working practices to the challenge and potential of this development. At the same time, they must heed the ethical warnings and ensure that social media are not abused or misused when crises and emergencies occur.", "title": "" }, { "docid": "0867eb365ca19f664bd265a9adaa44e5", "text": "We present VI-DSO, a novel approach for visual-inertial odometry, which jointly estimates camera poses and sparse scene geometry by minimizing photometric and IMU measurement errors in a combined energy functional. The visual part of the system performs a bundle-adjustment like optimization on a sparse set of points, but unlike key-point based systems it directly minimizes a photometric error. This makes it possible for the system to track not only corners, but any pixels with large enough intensity gradients. IMU information is accumulated between several frames using measurement preintegration, and is inserted into the optimization as an additional constraint between keyframes. We explicitly include scale and gravity direction into our model and jointly optimize them together with other variables such as poses. As the scale is often not immediately observable using IMU data this allows us to initialize our visual-inertial system with an arbitrary scale instead of having to delay the initialization until everything is observable. We perform partial marginalization of old variables so that updates can be computed in a reasonable time. In order to keep the system consistent we propose a novel strategy which we call “dynamic marginalization”. This technique allows us to use partial marginalization even in cases where the initial scale estimate is far from the optimum. We evaluate our method on the challenging EuRoC dataset, showing that VI-DSO outperforms the state of the art.", "title": "" }, { "docid": "2db8aee20badadc39f0fa089e8deb2d0", "text": "Detecting people remains a popular and challenging problem in computer vision. In this paper, we analyze parts-based models for person detection to determine which components of their pipeline could benefit the most if improved. We accomplish this task by studying numerous detectors formed from combinations of components performed by human subjects and machines. The parts-based model we study can be roughly broken into four components: feature detection, part detection, spatial part scoring and contextual reasoning including non-maximal suppression. 
Our experiments conclude that part detection is the weakest link for challenging person detection datasets. Non-maximal suppression and context can also significantly boost performance. However, the use of human or machine spatial models does not significantly or consistently affect detection accuracy.", "title": "" }, { "docid": "768240033185f6464d2274181370843a", "text": "Most of today's commercial companies heavily rely on social media and community management tools to interact with their clients and analyze their online behaviour. Nonetheless, these tools still lack evolved data mining and visualization features to tailor the analysis in order to support useful marketing decisions. We present an original methodology that aims at formalizing the marketing need of the company and develop a tool that can support it. The methodology is derived from the Cross-Industry Standard Process for Data Mining (CRISP-DM) and includes additional steps dedicated to the design and development of visualizations of mined data. We followed the methodology in two use cases with Swiss companies. First, we developed a prototype that aims at understanding the needs of tourists based on Flickr and Instagram data. In that use case, we extend the existing literature by enriching hashtags analysis methods with a semantic network based on Linked Data. Second, we analyzed internal customer data of an online discount retailer to help them define guerilla marketing measures. We report on the challenges of integrating Facebook data in the process. Informal feedback from domain experts confirms the strong potential of such advanced analytic features based on social data to inform marketing decisions.", "title": "" }, { "docid": "3e1b4fb4ac5222c70b871ebb7ea43408", "text": "Modern graph embedding procedures can efficiently extract features of nodes from graphs with millions of nodes. The features are later used as inputs for downstream predictive tasks. In this paper we propose GEMSEC a graph embedding algorithm which learns a clustering of the nodes simultaneously with computing their features. The procedure places nodes in an abstract feature space where the vertex features minimize the negative log likelihood of preserving sampled vertex neighborhoods, while the nodes are clustered into a fixed number of groups in this space. GEMSEC is a general extension of earlier work in the domain as it is an augmentation of the core optimization problem of sequence based graph embedding procedures and is agnostic of the neighborhood sampling strategy. We show that GEMSEC extracts high quality clusters on real world social networks and is competitive with other community detection algorithms. We demonstrate that the clustering constraint has a positive effect on representation quality and also that our procedure learns to embed and cluster graphs jointly in a robust and scalable manner.", "title": "" }, { "docid": "d32887dfac583ed851f607807c2f624e", "text": "For a through-wall ultrawideband (UWB) random noise radar using array antennas, subtraction of successive frames of the cross-correlation signals between each received element signal and the transmitted signal is able to isolate moving targets in heavy clutter. Images of moving targets are subsequently obtained using the back projection (BP) algorithm. This technique is not constrained to noise radar, but can also be applied to other kinds of radar systems. 
Different models based on the finite-difference time-domain (FDTD) algorithm are set up to simulate different through-wall scenarios of moving targets. Simulation results show that the heavy clutter is suppressed, and the signal-to-clutter ratio (SCR) is greatly enhanced using this approach. Multiple moving targets can be detected, localized, and tracked for any random movement.", "title": "" }, { "docid": "419116a3660f1c1f7127de31f311bd1e", "text": "Unlike dimensionality reduction (DR) tools for single-view data, e.g., principal component analysis (PCA), canonical correlation analysis (CCA) and generalized CCA (GCCA) are able to integrate information from multiple feature spaces of data. This is critical in multi-modal data fusion and analytics, where samples from a single view may not be enough for meaningful DR. In this work, we focus on a popular formulation of GCCA, namely, MAX-VAR GCCA. The classic MAX-VAR problem is optimally solvable via eigen-decomposition, but this solution has serious scalability issues. In addition, how to impose regularizers on the sought canonical components was unclear - while structure-promoting regularizers are often desired in practice. We propose an algorithm that can easily handle datasets whose sample and feature dimensions are both large by exploiting data sparsity. The algorithm is also flexible in incorporating regularizers on the canonical components. Convergence properties of the proposed algorithm are carefully analyzed. Numerical experiments are presented to showcase its effectiveness.", "title": "" }, { "docid": "5174b54a546002863a50362c70921176", "text": "The intrinsic complexity of the brain can lead one to set aside issues related to its relationships with the body, but the field of embodied cognition emphasizes that understanding brain function at the system level requires one to address the role of the brain-body interface. It has only recently been appreciated that this interface performs huge amounts of computation that does not have to be repeated by the brain, and thus affords the brain great simplifications in its representations. In effect the brain's abstract states can refer to coded representations of the world created by the body. But even if the brain can communicate with the world through abstractions, the severe speed limitations in its neural circuitry mean that vast amounts of indexing must be performed during development so that appropriate behavioral responses can be rapidly accessed. One way this could happen would be if the brain used a decomposition whereby behavioral primitives could be quickly accessed and combined. This realization motivates our study of independent sensorimotor task solvers, which we call modules, in directing behavior. The issue we focus on herein is how an embodied agent can learn to calibrate such individual visuomotor modules while pursuing multiple goals. The biologically plausible standard for module programming is that of reinforcement given during exploration of the environment. However this formulation contains a substantial issue when sensorimotor modules are used in combination: The credit for their overall performance must be divided amongst them. We show that this problem can be solved and that diverse task combinations are beneficial in learning and not a complication, as usually assumed. 
Our simulations show that fast algorithms are available that allot credit correctly and are insensitive to measurement noise.", "title": "" }, { "docid": "96f42b3a653964cffa15d9b3bebf0086", "text": "The brain processes information through many layers of neurons. This deep architecture is representationally powerful1,2,3,4, but it complicates learning by making it hard to identify the responsible neurons when a mistake is made1,5. In machine learning, the backpropagation algorithm1 assigns blame to a neuron by computing exactly how it contributed to an error. To do this, it multiplies error signals by matrices consisting of all the synaptic weights on the neuron’s axon and farther downstream. This operation requires a precisely choreographed transport of synaptic weight information, which is thought to be impossible in the brain1,6,7,8,9,10,11,12,13,14. Here we present a surprisingly simple algorithm for deep learning, which assigns blame by multiplying error signals by random synaptic weights. We show that a network can learn to extract useful information from signals sent through these random feedback connections. In essence, the network learns to learn. We demonstrate that this new mechanism performs as quickly and accurately as backpropagation on a variety of problems and describe the principles which underlie its function. Our demonstration provides a plausible basis for how a neuron can be adapted using error signals generated at distal locations in the brain, and thus dispels long-held assumptions about the algorithmic constraints on learning in neural circuits. Networks in the brain compute via many layers of interconnected neurons15,16. To work properly neurons must adjust their synapses so that the network’s outputs are appropriate for its tasks. A longstanding mystery is how upstream synapses (e.g. the synapse between α and β in Fig. 1a) are adjusted on the basis of downstream errors (e.g. e in Fig. 1a). In artificial intelligence this problem is solved by an algorithm called backpropagation of error1. Backprop works well in real-world applications17,18,19, and networks trained with it can account for cell response properties in some areas of cortex20,21. But it is biologically implausible because it requires that neurons send each other precise information about large numbers of synaptic weights — i.e. it needs weight transport1,6,7,8,12,14,22 (Fig. 1a, b). Specifically, backprop multiplies error signals e by the matrix W^T, the transpose of the forward synaptic connections, W (Fig. 1b). This implies that feedback is computed using knowledge of all the synaptic weights W in the forward path. For this reason, current theories of biological learning have turned to simpler schemes such as reinforcement learning23, and “shallow” mechanisms which use errors to adjust only the final layer of a network4,11. But reinforcement learning, which delivers the same reward signal to each neuron, is slow and scales poorly with network size5,13,24. And shallow mechanisms waste the representational power of deep networks3,4,25. Here we describe a new deep-learning algorithm that is as fast and accurate as backprop, but much simpler, avoiding all transport of synaptic weight information. This makes it a mechanism the brain could easily exploit. It is based on three insights: (i) The feedback weights need not be exactly W^T.
In fact, any matrix B will suffice, so long as on average,", "title": "" }, { "docid": "25ccaa5a71d0a3f46296c59328e0b9b5", "text": "Real-world social networks from a variety of domains can naturally be modelled as dynamic graphs. However, approaches to detecting communities have largely focused on identifying communities in static graphs. Recently, researchers have begun to consider the problem of tracking the evolution of groups of users in dynamic scenarios. Here we describe a model for tracking the progress of communities over time in a dynamic network, where each community is characterised by a series of significant evolutionary events. This model is used to motivate a community-matching strategy for efficiently identifying and tracking dynamic communities. Evaluations on synthetic graphs containing embedded events demonstrate that this strategy can successfully track communities over time in volatile networks. In addition, we describe experiments exploring the dynamic communities detected in a real mobile operator network containing millions of users.", "title": "" }, { "docid": "4edb705f4e60421327a77e9d7624f708", "text": "We introduce a new neural architecture and an unsupervised a lgorithm for learning invariant representations from temporal sequence of images. The system uses two groups of complex cells whose outputs are combined multiplicative ly: one that represents the content of the image, constrained to be constant over severa l consecutive frames, and one that represents the precise location of features, which is allowed to vary over time but constrained to be sparse. The architecture uses an encod er to extract features, and a decoder to reconstruct the input from the features. The meth od was applied to patches extracted from consecutive movie frames and produces orien tat o and frequency selective units analogous to the complex cells in V1. An extension of the method is proposed to train a network composed of units with local receptive fiel d spread over a large image of arbitrary size. A layer of complex cells, subject to spars ity constraints, pool feature units over overlapping local neighborhoods, which causes t h feature units to organize themselves into pinwheel patterns of orientation-selecti v receptive fields, similar to those observed in the mammalian visual cortex. A feed-forwa rd encoder efficiently computes the feature representation of full images.", "title": "" }, { "docid": "a21f04b6c8af0b38b3b41f79f2661fa6", "text": "While Enterprise Architecture Management is an established and widely discussed field of interest in the context of information systems research, we identify a lack of work regarding quality assessment of enterprise architecture models in general and frameworks or methods on that account in particular. By analyzing related work by dint of a literature review in a design science research setting, we provide twofold contributions. We (i) suggest an Enterprise Architecture Model Quality Framework (EAQF) and (ii) apply it to a real world scenario. Keywords—Enterprise Architecture, model quality, quality framework, EA modeling.", "title": "" }, { "docid": "0e866292de7de9b478e1facc2b042eda", "text": "The fitness landscape of the graph bipartitioning problem is investigated by performing a search space analysis for several types of graphs. The analysis shows that the structure of the search space is significantly different for the types of instances studied. 
Moreover, with increasing epistasis, the amount of gene interactions in the representation of a solution in an evolutionary algorithm, the number of local minima for one type of instance decreases and, thus, the search becomes easier. We suggest that other characteristics besides high epistasis might have greater influence on the hardness of a problem. To understand these characteristics, the notion of a dependency graph describing gene interactions is introduced. In particular, the local structure and the regularity of the dependency graph seems to be important for the performance of an algorithm, and in fact, algorithms that exploit these properties perform significantly better than others which do not. It will be shown that a simple hybrid multi-start local search exploiting locality in the structure of the graphs is able to find optimum or near optimum solutions very quickly. However, if the problem size increases or the graphs become unstructured, a memetic algorithm (a genetic algorithm incorporating local search) is shown to be much more effective.", "title": "" }, { "docid": "5048a090adfdd3ebe9d9253ca4f72644", "text": "Movement disorders or extrapyramidal symptoms (EPS) associated with selective serotonin reuptake inhibitors (SSRIs) have been reported. Although akathisia was found to be the most common EPS, and fluoxetine was implicated in the majority of the adverse reactions, there were also cases with EPS due to sertraline treatment. We present a child and an adolescent who developed torticollis (cervical dystonia) after using sertraline. To our knowledge, the child case is the first such report of sertraline-induced torticollis, and the adolescent case is the third in the literature.", "title": "" }, { "docid": "b112b59ff092255faf98314562eff7b0", "text": "The state of the art in computer vision has rapidly advanced over the past decade largely aided by shared image datasets. However, most of these datasets tend to consist of assorted collections of images from the web that do not include 3D information or pose information. Furthermore, they target the problem of object category recognition - whereas solving the problem of object instance recognition might be sufficient for many robotic tasks. To address these issues, we present a high-quality, large-scale dataset of 3D object instances, with accurate calibration information for every image. We anticipate that “solving” this dataset will effectively remove many perception-related problems for mobile, sensing-based robots. The contributions of this work consist of: (1) BigBIRD, a dataset of 100 objects (and growing), composed of, for each object, 600 3D point clouds and 600 high-resolution (12 MP) images spanning all views, (2) a method for jointly calibrating a multi-camera system, (3) details of our data collection system, which collects all required data for a single object in under 6 minutes with minimal human effort, and (4) multiple software components (made available in open source), used to automate multi-sensor calibration and the data collection process. All code and data are available at http://rll.eecs.berkeley.edu/bigbird.", "title": "" }, { "docid": "bda980d41e0b64ec7ec41502cada6e7f", "text": "In this paper, we address semantic parsing in a multilingual context. We train one multilingual model that is capable of parsing natural language sentences from multiple different languages into their corresponding formal semantic representations. 
We extend an existing sequence-to-tree model to a multi-task learning framework which shares the decoder for generating semantic representations. We report evaluation results on the multilingual GeoQuery corpus and introduce a new multilingual version of the ATIS corpus.", "title": "" }, { "docid": "5daeccb1a01df4f68f23c775828be41d", "text": "This article surveys the research and development of Engineered Cementitious Composites (ECC) over the last decade since its invention in the early 1990’s. The importance of micromechanics in the materials design strategy is emphasized. Observations of unique characteristics of ECC based on a broad range of theoretical and experimental research are examined. The advantageous use of ECC in certain categories of structural, and repair and retrofit applications is reviewed. While reflecting on past advances, future challenges for continued development and deployment of ECC are noted. This article is based on a keynote address given at the International Workshop on Ductile Fiber Reinforced Cementitious Composites (DFRCC) – Applications and Evaluations, sponsored by the Japan Concrete Institute, and held in October 2002 at Takayama, Japan.", "title": "" }, { "docid": "cf4070e227334632eb4386e6f48a9adb", "text": "Increased usage of mobile devices, such as smartphones and tablets, has led to widespread popularity and usage of mobile apps. If not carefully developed, such apps may demonstrate energy-inefficient behaviour, where one or more energy-intensive hardware components (such as Wifi, GPS, etc) are left in a high-power state, even when no apps are using these components. We refer to such kind of energy-inefficiencies as energy bugs. Executing an app with an energy bug causes the mobile device to exhibit poor energy consumption behaviour and a drastically shortened battery life. Since mobiles apps can have huge input domains, therefore exhaustive exploration is often impractical. We believe that there is a need for a framework that can systematically detect and fix energy bugs in mobile apps in a scalable fashion. To address this need, we have developed EnergyPatch, a framework that uses a combination of static and dynamic analysis techniques to detect, validate and repair energy bugs in Android apps. The use of a light-weight, static analysis technique enables EnergyPatch to quickly narrow down to the potential program paths along which energy bugs may occur. Subsequent exploration of these potentially buggy program paths using a dynamic analysis technique helps in validations of the reported bugs and to generate test cases. Finally, EnergyPatch generates repair expressions to fix the validated energy bugs. Evaluation with real-life apps from repositories such as F-droid and Github, shows that EnergyPatch is scalable and can produce results in reasonable amount of time. Additionally, we observed that the repair expressions generated by EnergyPatch could bring down the energy consumption on tested apps up to 60 percent.", "title": "" } ]
scidocsrr
a8d2322b0f3a8dd104d43c405cfd02ca
Using restricted transactional memory to build a scalable in-memory database
[ { "docid": "f10660b168700e38e24110a575b5aafa", "text": "While the use of MapReduce systems (such as Hadoop) for large scale data analysis has been widely recognized and studied, we have recently seen an explosion in the number of systems developed for cloud data serving. These newer systems address \"cloud OLTP\" applications, though they typically do not support ACID transactions. Examples of systems proposed for cloud serving use include BigTable, PNUTS, Cassandra, HBase, Azure, CouchDB, SimpleDB, Voldemort, and many others. Further, they are being applied to a diverse range of applications that differ considerably from traditional (e.g., TPC-C like) serving workloads. The number of emerging cloud serving systems and the wide range of proposed applications, coupled with a lack of apples-to-apples performance comparisons, makes it difficult to understand the tradeoffs between systems and the workloads for which they are suited. We present the \"Yahoo! Cloud Serving Benchmark\" (YCSB) framework, with the goal of facilitating performance comparisons of the new generation of cloud data serving systems. We define a core set of benchmarks and report results for four widely used systems: Cassandra, HBase, Yahoo!'s PNUTS, and a simple sharded MySQL implementation. We also hope to foster the development of additional cloud benchmark suites that represent other classes of applications by making our benchmark tool available via open source. In this regard, a key feature of the YCSB framework/tool is that it is extensible--it supports easy definition of new workloads, in addition to making it easy to benchmark new systems.", "title": "" } ]
[ { "docid": "e9f7bf5eb9bf3c2c3ff7820ffb34cb93", "text": "BACKGROUND\nThe transconjunctival lower eyelid blepharoplasty is advantageous for its quick recovery and low complication rates. Conventional techniques rely on fat removal to contour the lower eyelid. This article describes the authors' extended transconjunctival lower eyelid blepharoplasty technique that takes dissection beyond the orbital rim to address aging changes on the midcheek.\n\n\nMETHODS\nFrom December of 2012 to December of 2015, 54 patients underwent this procedure. Through a transconjunctival incision, the preseptal space was entered and excess orbital fat pads were excised. Medially, the origins of the palpebral part of the orbicularis oculi, the tear trough ligament, and orbital part of the orbicularis oculi were sequentially released, connecting the dissection with the premaxillary space. More laterally, the orbicularis retaining ligament was released, connecting the dissection with the prezygomatic space. Excised orbital fat was then grafted under the released tear trough ligament to correct the tear trough deformity. When the patients had significant maxillary retrusion, structural fat grafting was performed at the same time.\n\n\nRESULTS\nThe mean follow-up was 10 months. High satisfaction was noted among the patients treated with this technique. The revision rate was 2 percent. Complication rates were low. No chemosis, prolonged swelling, lower eyelid retraction, or ectropion was seen in any patients.\n\n\nCONCLUSION\nThe extended transconjunctival lower blepharoplasty using the midcheek soft-tissue spaces is a safe and effective approach for treating patients presenting with eye bags and tear trough deformity.\n\n\nCLINICAL QUESTION/LEVEL OF EVIDENCE\nTherapeutic, V.", "title": "" }, { "docid": "1ce0d44502fd53c708b8ccab21151e79", "text": "Exploration in multi-task reinforcement learning is critical in training agents to deduce the underlying MDP. Many of the existing exploration frameworks such as E, Rmax, Thompson sampling assume a single stationary MDP and are not suitable for system identification in the multi-task setting. We present a novel method to facilitate exploration in multi-task reinforcement learning using deep generative models. We supplement our method with a low dimensional energy model to learn the underlying MDP distribution and provide a resilient and adaptive exploration signal to the agent. We evaluate our method on a new set of environments and provide intuitive interpretation of our results.", "title": "" }, { "docid": "f723f2d583c6313396db195876876f98", "text": "After decades of continuous scaling, further advancement of silicon microelectronics across the entire spectrum of computing applications is today limited by power dissipation. While the trade-off between power and performance is well-recognized, most recent studies focus on the extreme ends of this balance. By concentrating instead on an intermediate range, an ~ 8× improvement in power efficiency can be attained without system performance loss in parallelizable applications-those in which such efficiency is most critical. It is argued that power-efficient hardware is fundamentally limited by voltage scaling, which can be achieved only by blurring the boundaries between devices, circuits, and systems and cannot be realized by addressing any one area alone. By simultaneously considering all three perspectives, the major issues involved in improving power efficiency in light of performance and area constraints are identified. 
Solutions for the critical elements of a practical computing system are discussed, including the underlying logic device, associated cache memory, off-chip interconnect, and power delivery system. The IBM Blue Gene system is then presented as a case study to exemplify several proposed directions. Going forward, further power reduction may demand radical changes in device technologies and computer architecture; hence, a few such promising methods are briefly considered.", "title": "" }, { "docid": "2afd6afa18653ab234533bc99db0b4d8", "text": "Autophagy is a lysosomal degradation pathway that is essential for survival, differentiation, development, and homeostasis. Autophagy principally serves an adaptive role to protect organisms against diverse pathologies, including infections, cancer, neurodegeneration, aging, and heart disease. However, in certain experimental disease settings, the self-cannibalistic or, paradoxically, even the prosurvival functions of autophagy may be deleterious. This Review summarizes recent advances in understanding the physiological functions of autophagy and its possible roles in the causation and prevention of human diseases.", "title": "" }, { "docid": "11b2da0b86180878e8d5031a9069adae", "text": "PURPOSE\nThis article describes a cancer-related advocacy skill set that can be acquired through a learning process.\n\n\nOVERVIEW\nCancer survivorship is a process rather than a stage or time point, and it involves a continuum of events from diagnosis onward. There exists little consensus about what underlying processes explain different levels of long term functioning, but skills necessary for positive adaptation to cancer have been identified from both the professional literature and from the rich experiences of cancer survivors.\n\n\nCLINICAL IMPLICATIONS\nHealthcare practitioners need to be more creative and assertive in fostering consumer empowerment and should incorporate advocacy training into care plans. Strategies that emphasize personal competency and increase self-advocacy capabilities enable patients to make the best possible decisions for themselves regarding their cancer care. In addition, oncology practitioners must become informed advocacy partners with their patients in the public debate about healthcare and cancer care delivery.", "title": "" }, { "docid": "6e4775c9459de240850a7cba509ca01f", "text": "Forecasting performances of feed-forward and recurrent neural networks (NN) trained with different learning algorithms are analyzed and compared using the Mackey-Glass nonlinear chaotic time series. This system is a known benchmark test whose elements are hard to predict. Multi-layer Perceptron NN was chosen as a feed-forward neural network because it is still the most commonly used network in financial forecasting models. It is compared with the modified version of the so-called Dynamic Multi-layer Perceptron NN characterized with a dynamic neuron model, i.e., Auto Regressive Moving Average filter built into the hidden layer neurons. Thus, every hidden layer neuron has the ability to process previous values of its own activity together with new input signals. The obtained results indicate satisfactory forecasting characteristics of both networks. However, recurrent NN was more accurate in practically all tests using less number of hidden layer neurons than the feed-forward NN. This study once again confirmed a great effectiveness and potential of dynamic neural networks in modeling and predicting highly nonlinear processes. 
Their application in the design of financial forecasting models is therefore most recommended.", "title": "" }, { "docid": "dc867072ef34de6bb6aafe34fa310d97", "text": "This paper discusses the use of voice coil actuators to enhance the performance of shift-by-wire systems. This innovative purely electric actuation approach was implemented and applied to a Formula SAE race car. The result was more compact and faster than conventional solutions, which usually employ pneumatic actuators. The designed shift-by-wire system incorporates a control unit based on a digital signal processor, which runs the control algorithms developed for both gear shifting and launching the car. The system was successfully validated through laboratory and on-track tests. In addition, a comparative test with an equivalent pneumatic counterpart was carried out. This showed that an effective use of voice coil actuators enabled the upshift time to be almost halved, thus proving that these actuators are a viable solution to improving shift-by-wire system performance.", "title": "" }, { "docid": "f554af0d260de70f6efbc8fe8d64a357", "text": "Hypocretin deficiency causes narcolepsy and may affect neuroendocrine systems and body composition. Additionally, growth hormone (GH) alterations my influence weight in narcolepsy. Symptoms can be treated effectively with sodium oxybate (SXB; γ-hydroxybutyrate) in many patients. This study compared growth hormone secretion in patients and matched controls and established the effect of SXB administration on GH and sleep in both groups. Eight male hypocretin-deficient patients with narcolepsy and cataplexy and eight controls matched for sex, age, BMI, waist-to-hip ratio, and fat percentage were enrolled. Blood was sampled before and on the 5th day of SXB administration. SXB was taken two times 3 g/night for 5 consecutive nights. Both groups underwent 24-h blood sampling at 10-min intervals for measurement of GH concentrations. The GH concentration time series were analyzed with AutoDecon and approximate entropy (ApEn). Basal and pulsatile GH secretion, pulse regularity, and frequency, as well as ApEn values, were similar in patients and controls. Administration of SXB caused a significant increase in total 24-h GH secretion rate in narcolepsy patients, but not in controls. After SXB, slow-wave sleep (SWS) and, importantly, the cross-correlation between GH levels and SWS more than doubled in both groups. In conclusion, SXB leads to a consistent increase in nocturnal GH secretion and strengthens the temporal relation between GH secretion and SWS. These data suggest that SXB may alter somatotropic tone in addition to its consolidating effect on nighttime sleep in narcolepsy. This could explain the suggested nonsleep effects of SXB, including body weight reduction.", "title": "" }, { "docid": "a9d136429d3d5b871fa84c3209bd763c", "text": "Portable embedded computing systems require energy autonomy. This is achieved by batteries serving as a dedicated energy source. The requirement of portability places severe restrictions on size and weight, which in turn limits the amount of energy that is continuously available to maintain system operability. For these reasons, efficient energy utilization has become one of the key challenges to the designer of battery-powered embedded computing systems.In this paper, we first present a novel analytical battery model, which can be used for the battery lifetime estimation. 
The high quality of the proposed model is demonstrated with measurements and simulations. Using this battery model, we introduce a new \"battery-aware\" cost function, which will be used for optimizing the lifetime of the battery. This cost function generalizes the traditional minimization metric, namely the energy consumption of the system. We formulate the problem of battery-aware task scheduling on a single processor with multiple voltages. Then, we prove several important mathematical properties of the cost function. Based on these properties, we propose several algorithms for task ordering and voltage assignment, including optimal idle period insertion to exercise charge recovery.This paper presents the first effort toward a formal treatment of battery-aware task scheduling and voltage scaling, based on an accurate analytical model of the battery behavior.", "title": "" }, { "docid": "0012f70ed83e001aa074a9c4d1a41a61", "text": "In this paper, instead of multilayered notch antenna, the ridged tapered slot antenna (RTSA) is chosen as an element of wideband phased array antenna (PAA) since it has rigid body and can be easily manufactured by mechanical wire-cutting. In addition, because the RTSA is made of conductor, it doesn't need via-holes which are required to avoid the blind angles out of the operation frequency band. Theses blind angles come from the self resonance of the dielectric material of notch antenna. We developed wide band/wide scan PAA which has a bandwidth of 3:1 and scan volume of plusmn45deg. In order to determine the shape of the RTSA, the active VSWR (AVSWR) of the RTSA was optimized in the numerical waveguide simulator. And then using the E-plane/H-plane simulator, the AVSWR with beam scan angles in E-plane/H-plane are calculated respectively. On the basis of optimized design, numerical analysis of finite arrays was performed by commercial time domain solver. Through the simulation of 10 times 6 quad-element RTSA arrays, the AVSWR at the center element was computed and compared with the measured result. The active element pattern (AEP) of 10 times 6 quad-element RTSA arrays was also computed and had a good agreement with the measured AEP. From the result of the AEP, we can easily predict that 10 times 6 quad-element RTSA arrays have a good beam scanning capabilities", "title": "" }, { "docid": "1172addf46ec4c70ec658ab0c0a17902", "text": "This paper extends research on ethical leadership by proposing a responsibility orientation for leaders. Responsible leadership is based on the concept of leaders who are not isolated from the environment, who critically evaluate prevailing norms, are forward-looking, share responsibility, and aim to solve problems collectively. Adding such a responsibility orientation helps to address critical issues that persist in research on ethical leadership. The paper discusses important aspects of responsible leadership, which include being able to make informed ethical judgments about prevailing norms and rules, communicating effectively with stakeholders, engaging in long-term thinking and in perspective-taking, displaying moral courage, and aspiring to positive change. Furthermore, responsible leadership means actively engaging stakeholders, encouraging participative decision-making, and aiming for shared problem-solving. 
A case study that draws on in-depth interviews with the representatives of businesses and non-governmental organizations illustrates the practical relevance of thinking about responsibility and reveals the challenges of responsible leadership.", "title": "" }, { "docid": "b3b050c35a1517dc52351cd917d0665a", "text": "The amount of information shared via social media is rapidly increasing amid growing concerns over online privacy. This study investigates the effect of controversiality and social endorsement of media content on sharing behavior when choosing between sharing publicly or anonymously. Anonymous sharing is found to be a popular choice (59% of shares), especially for controversial content which is 3.2x more likely to be shard anonymously. Social endorsement was not found to affect sharing behavior, except for sports-related content. Implications for social media interface design are dis-", "title": "" }, { "docid": "ff3359fe51ed275de1f3b61eee833045", "text": "Opinion target extraction is a fundamental task in opinion mining. In recent years, neural network based supervised learning methods have achieved competitive performance on this task. However, as with any supervised learning method, neural network based methods for this task cannot work well when the training data comes from a different domain than the test data. On the other hand, some rule-based unsupervised methods have shown to be robust when applied to different domains. In this work, we use rule-based unsupervised methods to create auxiliary labels and use neural network models to learn a hidden representation that works well for different domains. When this hidden representation is used for opinion target extraction, we find that it can outperform a number of strong baselines with a large margin.", "title": "" }, { "docid": "ff9b5d96b762b2baacf4bf19348c614b", "text": "Drought stress is a major factor in reduce growth, development and production of plants. Stress was applied with polyethylene glycol (PEG) 6000 and water potentials were: zero (control), -0.15 (PEG 10%), -0.49 (PEG 20%), -1.03 (PEG 30%) and -1.76 (PEG40%) MPa. The solutes accumulation of two maize (Zea mays L.) cultivars -704 and 301were determined after drought stress. In our experiments, a higher amount of soluble sugars and a lower amount of starch were found under stress. Soluble sugars concentration increased (from 1.18 to 1.90 times) in roots and shoots of both varieties when the studied varieties were subjected to drought stress, but starch content were significantly (p<0.05) decreased (from 16 to 84%) in both varieties. This suggests that sugars play an important role in Osmotic Adjustment (OA) in maize. The free proline level also increased (from 1.56 to 3.13 times) in response to drought stress and the increase in 704 var. was higher than 301 var. It seems to proline may play a role in minimizing the damage caused by dehydration. Increase of proline content in shoots was higher than roots, but increase of soluble sugar content and decrease of starch content in roots was higher than shoots.", "title": "" }, { "docid": "8481bf05a0afc1de516d951474fb9d92", "text": "We propose an approach to Multitask Learning (MTL) to make deep learning models faster and lighter for applications in which multiple tasks need to be solved simultaneously, which is particularly useful in embedded, real-time systems. We develop a multitask model for both Object Detection and Semantic Segmentation and analyze the challenges that appear during its training. 
Our multitask network is 1.6x faster, lighter and uses less memory than deploying the single-task models in parallel. We conclude that MTL has the potential to give superior performance in exchange of a more complex training process that introduces challenges not present in single-task models.", "title": "" }, { "docid": "257eca5511b1657f4a3cd2adff1989f8", "text": "The monitoring of volcanoes is mainly performed by sensors installed on their structures, aiming at recording seismic activities and reporting them to observatories to be later analyzed by specialists. However, due to the high volume of data continuously collected, the use of automatic techniques is an important requirement to support real time analyses. In this sense, a basic but challenging task is the classification of seismic activities to identify signals yielded by different sources as, for instance, the movement of magmatic fluids. Although there exists several approaches proposed to perform such task, they were mainly designed to deal with raw signals. In this paper, we present a 2D approach developed considering two main steps. Firstly, spectrograms for every collected signal are calculated by using Fourier Transform. Secondly, we set a deep neural network to discriminate seismic activities by analyzing the spectrogram shapes. As a consequence, our classifier provided outstanding results with accuracy rates greater than 95%.", "title": "" }, { "docid": "66255dc6c741737b3576e7ddefec96ce", "text": "Neural Machine Translation (NMT) with source side attention have achieved remarkable performance. however, there has been little work exploring to attend to the target side which can potentially enhance the memory capbility of NMT. We reformulate a Decoding-History Enhanced Attention mechanism (DHEA) to render NMT model better at selecting both source side and target side information. DHEA enables a dynamic control on the ratios at which source and target contexts contribute to the generation of target words, offering a way to weakly induce structure relations among both source and target tokens. It also allows training errors to be directly back-propagated through short-cut connections and effectively alleviates the gradient vanishing problem. The empirical study on Chinese-English translation shows that our model with proper configuration can improve by 0.9 BLEU upon Transformer and achieve the best reported results in the same dataset. On WMT14 English-German task and a larger WMT14 English-French task, our model achieves comparable results with the state-of-the-art NMT systems.", "title": "" }, { "docid": "a9440a3eb37360176f5ee792da1dbdf3", "text": "Background: Test quality is a prerequisite for achieving production system quality. While the concept of quality is multidimensional, most of the effort in testing context hasbeen channelled towards measuring test effectiveness. Objective: While effectiveness of tests is certainly important, we aim to identify a core list of testing principles that also address other quality facets of testing, and to discuss how they can be quantified as indicators of test quality. Method: We have conducted a two-day workshop with our industry partners to come up with a list of relevant principles and best practices expected to result in high quality tests. We then utilised our academic and industrial training materials together with recommendations in practitioner oriented testing books to refine the list. We surveyed existing literature for potential metrics to quantify identified principles. 
Results: We have identified a list of 15 testing principles to capture the essence of testing goals and best practices from quality perspective. Eight principles do not map toexisting test smells and we propose metrics for six of those. Further, we have identified additional potential metrics for the seven principles that partially map to test smells. Conclusion: We provide a core list of testing principles along with a discussion of possible ways to quantify them for assessing goodness of tests. We believe that our work wouldbe useful for practitioners in assessing the quality of their tests from multiple perspectives including but not limited to maintainability, comprehension and simplicity.", "title": "" }, { "docid": "feec0094203fdae5a900831ea81fcfb0", "text": "Costs, market fragmentation, and new media channels that let customers bypass advertisements seem to be in league against the old ways of marketing. Relying on mass media campaigns to build strong brands may be a thing of the past. Several companies in Europe, making a virtue of necessity, have come up with alternative brand-building approaches and are blazing a trail in the post-mass-media age. In England, Nestlé's Buitoni brand grew through programs that taught the English how to cook Italian food. The Body Shop garnered loyalty with its support of environmental and social causes. Cadbury funded a theme park tied to its history in the chocolate business. Häagen-Dazs opened posh ice-cream parlors and got itself featured by name on the menus of fine restaurants. Hugo Boss and Swatch backed athletic or cultural events that became associated with their brands. The various campaigns shared characteristics that could serve as guidelines for any company hoping to build a successful brand: senior managers were closely involved with brand-building efforts; the companies recognized the importance of clarifying their core brand identity; and they made sure that all their efforts to gain visibility were tied to that core identity. Studying the methods of companies outside one's own industry and country can be instructive for managers. Pilot testing and the use of a single and continuous measure of brand equity also help managers get the most out of novel approaches in their ever more competitive world.", "title": "" }, { "docid": "83742a3fcaed826877074343232be864", "text": "In this paper we propose a design of the main modulation and demodulation units of a modem compliant with the new DVB-S2 standard (Int. J. Satellite Commun. 2004; 22:249–268). A typical satellite channel model consistent with the targeted applications of the aforementioned standard is assumed. In particular, non-linear pre-compensation as well as synchronization techniques are described in detail and their performance assessed by means of analysis and computer simulations. The proposed algorithms are shown to provide a good trade-off between complexity and performance and they apply to both the broadcast and the unicast profiles, the latter allowing the exploitation of adaptive coding and modulation (ACM) (Proceedings of the 20th AIAA Satellite Communication Systems Conference, Montreal, AIAA-paper 2002-1863, May 2002). Finally, end-to-end system performances in term of BER versus the signal-to-noise ratio are shown as a result of extensive computer simulations. 
The whole communication chain is modelled in these simulations, including the BCH and LDPC coder, the modulator with the pre-distortion techniques, the satellite transponder model with its typical impairments, the downlink chain inclusive of the RF-front-end phase noise, the demodulator with the synchronization sub-system units and finally the LDPC and BCH decoders. Copyright © 2004 John Wiley & Sons, Ltd.", "title": "" } ]
scidocsrr
64e1fcc08257837fb4008f694e086ee6
Design of UWB Bandpass Filter Using Stepped-Impedance Stub-Loaded Resonator
[ { "docid": "89835907e8212f7980c35ae12d711339", "text": "In this letter, a novel ultra-wideband (UWB) bandpass filter with compact size and improved upper-stopband performance has been studied and implemented using multiple-mode resonator (MMR). The MMR is formed by attaching three pairs of circular impedance-stepped stubs in shunt to a high impedance microstrip line. By simply adjusting the radius of the circles of the stubs, the resonant modes of the MMR can be roughly allocated within the 3.1-10.6 GHz UWB band while suppressing the spurious harmonics in the upper-stopband. In order to enhance the coupling degree, two interdigital coupled-lines are used in the input and output sides. Thus, a predicted UWB passband is realized. Meanwhile, the insertion loss is higher than 30.0 dB in the upper-stopband from 12.1 to 27.8 GHz. Finally, the filter is successfully designed and fabricated. The EM-simulated and the measured results are presented in this work where excellent agreement between them is obtained.", "title": "" } ]
[ { "docid": "b2a7c0a96f29a554ecdba2d56778b7c7", "text": "Existing video streaming algorithms use various estimation approaches to infer the inherently variable bandwidth in cellular networks, which often leads to reduced quality of experience (QoE). We ask the question: \"If accurate bandwidth prediction were possible in a cellular network, how much can we improve video QoE?\". Assuming we know the bandwidth for the entire video session, we show that existing streaming algorithms only achieve between 69%-86% of optimal quality. Since such knowledge may be impractical, we study algorithms that know the available bandwidth for a few seconds into the future. We observe that prediction alone is not sufficient and can in fact lead to degraded QoE. However, when combined with rate stabilization functions, prediction outperforms existing algorithms and reduces the gap with optimal to 4%. Our results lead us to believe that cellular operators and content providers can tremendously improve video QoE by predicting available bandwidth and sharing it through APIs.", "title": "" }, { "docid": "bde253462808988038235a46791affc1", "text": "Power electronic Grid-Connected Converters (GCCs) are widely applied as grid interface in renewable energy sources. This paper proposes an extended Direct Power Control with Space Vector Modulation (DPC-SVM) scheme with improved operation performance under grid distortions. The real-time operated DPC-SVM scheme has to execute several important tasks as: space vector pulse width modulation, active and reactive power feedback control, grid current harmonics and voltage dips compensation. Thus, development and implementation of the DPC-SVM algorithm using single chip floating-point microcontroller TMS320F28335 is described. It combines large peripheral equipment, typical for microcontrollers, with high computation capacity characteristic for Digital Signal Processors (DSPs). The novelty of the proposed system lies in extension of the generic DPC-SVM scheme by additional higher harmonic and voltage dips compensation modules and implementation of the whole algorithm in a single chip floating point microcontroller. Overview of the laboratory setup, description of basic algorithm subtasks sequence, software optimization as well as execution time of specific program modules on fixed-point and floating-point processors are discussed. Selected oscillograms illustrating operation and robustness of the developed algorithm used in 5 kVA laboratory model of the GCC are presented.", "title": "" }, { "docid": "33187aba3285bcd040c45edf2eba284e", "text": "This paper describes the acquisition of the multichannel multimodal database AV@CAR for automatic audio-visual speech recognition in cars. Automatic speech recognition (ASR) plays an important role inside vehicles to keep the driver away from distraction. It is also known that visual information (lip-reading) can improve accuracy in ASR under adverse conditions as those within a car. The corpus described here is intended to provide training and testing material for several classes of audiovisual speech recognizers including isolated word system, word-spotting systems, vocabulary independent systems, and speaker dependent or speaker independent systems for a wide range of applications. 
The audio database is composed of seven audio channels including clean speech (captured using a close-talk microphone), noisy speech from several microphones placed on the overhead of the cabin, a noise-only signal coming from the engine compartment, and information about the speed of the car. For the video database, a small video camera sensitive to the visible and near-infrared bands is placed on the windscreen and used to capture the face of the driver. This is done under different light conditions both during the day and at night. Additionally, the same individuals are recorded in the laboratory, under controlled environmental conditions, to obtain noise-free speech signals, 2D images and 3D + texture face models.", "title": "" }, { "docid": "0daeab709ae4d77ef3a349236d3811fd", "text": "Ray tracing has become a commodity in rendering and the first ray tracing hardware is emerging. Hence, the quest for an API is on. The course reviews current efforts and abstractions, especially the interaction of rasterization and ray tracing, cross-platform challenges, realtime constraints, and enabling applications beyond image synthesis.", "title": "" }, { "docid": "5e460b65fdc7d369a6e8ff39dce9ca81", "text": "The classical Steiner tree problem in weighted graphs seeks a minimum weight connected subgraph containing a given subset of the vertices (terminals). We present a new polynomial-time heuristic that achieves a best-known approximation ratio of 1 + (ln 3)/2 ≈ 1.55 for general graphs, and best-known approximation ratios of ≈ 1.28 for quasi-bipartite graphs (i.e., where no two non-terminals are adjacent) and for complete graphs with edge weights 1 and 2. Our method is considerably simpler and easier to implement than previous approaches. We also prove the first known non-trivial performance bound (1.5 · OPT) for the Iterated 1-Steiner heuristic of Kahng and Robins in quasi-bipartite graphs.", "title": "" }, { "docid": "062fb8603fe65ddde2be90bac0519f97", "text": "Meta-heuristic methods represent very powerful tools for dealing with hard combinatorial optimization problems. However, real life instances usually cannot be treated efficiently in \"reasonable\" computing times. Moreover, a major issue in meta-heuristic design and calibration is to make them robust, i.e., to provide high performance solutions for a variety of problem settings. Parallel meta-heuristics aim to address both issues. The objective of this chapter is to present a state-of-the-art survey of the main parallel meta-heuristic ideas and strategies, and to discuss general design principles applicable to all meta-heuristic classes. To achieve this goal, we explain various paradigms related to parallel meta-heuristic development, where communications, synchronization and control aspects are the most relevant. We also discuss implementation issues, namely the influence of the target architecture on parallel execution of meta-heuristics, pointing out the characteristics of shared and distributed memory multiprocessor systems. All these topics are illustrated by examples from recent literature. These examples are related to the parallelization of various meta-heuristic methods, but we focus here on Variable Neighborhood Search and Bee Colony Optimization.", "title": "" }, { "docid": "87f0a390580c452d77fcfc7040352832", "text": "• J. Wieting, M. Bansal, K. Gimpel, K. Livescu, and D. Roth. 2015. From paraphrase database to compositional paraphrase model and back. TACL. • K. S. Tai, R. Socher, and C. D. Manning. 2015.
Improved semantic representations from tree-structured long short-term memory networks. ACL. • W. Yin and H. Schütze. 2015. Convolutional neural network for paraphrase identification. NAACL. The product also streams internet radio and comes with a 30-day free trial for realnetworks' rhapsody music subscription. The device plays internet radio streams and comes with a 30-day trial of realnetworks rhapsody music service. Given two sentences, measure their similarity:", "title": "" }, { "docid": "ebf92a0faf6538f1d2b85fb2aa497e80", "text": "The generally accepted assumption by most multimedia researchers is that learning is inhibited when on-screen text and narration containing the same information are presented simultaneously, rather than on-screen text or narration alone. This is known as the verbal redundancy effect. Are there situations where the reverse is true? This research was designed to investigate the reverse redundancy effect for non-native English speakers learning English reading comprehension, where two instructional modes were used: the redundant mode and the modality mode. In the redundant mode, static pictures and audio narration were presented with synchronized redundant on-screen text. In the modality mode, only static pictures and audio were presented. In both modes, learners were allowed to control the pacing of the lessons. Participants were 209 Yemeni learners in their first year of tertiary education. Examination of text comprehension scores indicated that those learners who were exposed to the redundancy mode performed significantly better than learners in the modality mode. They were also significantly more motivated than their counterparts in the modality mode. This finding has added an important modification to the redundancy effect. That is, the reverse redundancy effect is true for multimedia learning of English as a foreign language for students to whom the textual information was foreign. In such situations, the redundant synchronized on-screen text did not impede learning; rather it reduced the cognitive load and thereby enhanced learning.", "title": "" }, { "docid": "9ecf20a9df11e008ddd01c9dea38b942", "text": "An interest rate swap is a contractual agreement between two parties to exchange a series of interest rate payments without exchanging the underlying debt. The interest rate swap represents one example of a general category of financial instruments known as derivative instruments. In the most general terms, a derivative instrument is an agreement whose value derives from some underlying market return, market price, or price index. The rapid growth of the market for swaps and other derivatives in recent years has spurred considerable controversy over the economic rationale for these instruments. Many observers have expressed alarm over the growth and size of the market, arguing that interest rate swaps and other derivative instruments threaten the stability of financial markets. Recently, such fears have led both legislators and bank regulators to consider measures to curb the growth of the market. Several legislators have begun to promote initiatives to create an entirely new regulatory agency to supervise derivatives trading activity.
Underlying these initiatives is the premise that derivative instruments increase aggregate risk in the economy, either by encouraging speculation or by burdening firms with risks that management does not understand fully and is incapable of controlling.¹ To be certain, much of this criticism is aimed at many of the more exotic derivative instruments that have begun to appear recently. Nevertheless, it is difficult, if not impossible, to appreciate the economic role of these more exotic instruments without an understanding of the role of the interest rate swap, the most basic of the new generation of financial derivatives.", "title": "" }, { "docid": "5b9d26fc8b5c45a26377885f75c0f509", "text": "Background: The objective of this study is to assess the feasibility of a primary transfistula anorectoplasty (TFARP) in congenital recto-vestibular fistula without a covering colostomy in the north of Iraq. Patients and Methods: Female patients having imperforate anus with congenital rectovestibular fistula presenting to pediatric surgical centres in the north of Iraq (Mosul & Erbil) between 1995 and 2011 were reviewed in a nonrandomized manner, after excluding those with pouch colon, rectovaginal fistula and patients with colostomy. All cases underwent one-stage primary (TFARP) anorectoplasty at ages between 1 and 30 months, after on-table rectal irrigation with normal saline & povidone-iodine. They were kept nil by mouth until 24 hours postoperatively. Postoperative regular anal dilatation was commenced after 2 weeks of operation when needed. The results were evaluated for need of bowel preparation, duration of surgery, cosmetic appearance, commencement of feed, hospital stay, and postoperative results. Patients were also followed up for assessment of continence and anal dilatation.", "title": "" }, { "docid": "302a838f1a94596d37693363abcf1978", "text": "In this paper we present a method for organizing and indexing logo digital libraries like the ones of the patent and trademark offices. We propose an efficient queried-by-example retrieval system which is able to retrieve logos by similarity from large databases of logo images. Logos are compactly described by a variant of the shape context descriptor. These descriptors are then indexed by a locality-sensitive hashing data structure aiming to perform approximate k-NN search in high dimensional spaces in sub-linear time. The experiments demonstrate the effectiveness and efficiency of this system on realistic datasets such as the Tobacco-800 logo database.", "title": "" }, { "docid": "fcc7ef9f58038eead6e55b27b0cf5f0b", "text": "Project managers aim at keeping track of interdependencies between various artifacts of the software development lifecycle, to find out potential requirements conflicts, to better understand the impact of change requests, and to fulfill process quality standards, such as CMMI requirements. While there are many methods and techniques on how to technically store requirements traces, the economic issues of dealing with requirements tracing complexity remain open. In practice, tracing is typically not an explicit systematic process, but occurs rather ad hoc with considerable hidden tracing-related quality costs. This paper reports a case study on value-based requirements tracing (VBRT) that systematically supports project managers in tailoring requirements tracing precision and effort based on the parameters stakeholder value, requirements risk/volatility, and tracing costs.
Main results of the case study were: (a) VBRT took around 35% of the effort of full requirements tracing; (b) more risky or volatile requirements warranted more detailed tracing because of their higher change probability.", "title": "" }, { "docid": "3cc74bce3c395b82dac437286aace591", "text": "We present a technique for simulating plastic deformation in sheets of thin materials, such as crumpled paper, dented metal, and wrinkled cloth. Our simulation uses a framework of adaptive mesh refinement to dynamically align mesh edges with folds and creases. This framework allows efficient modeling of sharp features and avoids bend locking that would be otherwise caused by stiff in-plane behavior. By using an explicit plastic embedding space we prevent remeshing from causing shape diffusion. We include several examples demonstrating that the resulting method realistically simulates the behavior of thin sheets as they fold and crumple.", "title": "" }, { "docid": "a9b20ad74b3a448fbc1555b27c4dcac9", "text": "A new learning algorithm for multilayer feedforward networks, RPROP, is proposed. To overcome the inherent disadvantages of pure gradient-descent, RPROP performs a local adaptation of the weight-updates according to the behaviour of the error function. In substantial difference to other adaptive techniques, the effect of the RPROP adaptation process is not blurred by the unforeseeable influence of the size of the derivative but only dependent on the temporal behaviour of its sign. This leads to an efficient and transparent adaptation process. The promising capabilities of RPROP are shown in comparison to other well-known adaptive techniques.", "title": "" }, { "docid": "9fdd2b84fc412e03016a12d951e4be01", "text": "We examine the implications of shape on the process of finding dense correspondence and half-occlusions for a stereo pair of images. The desired property of the disparity map is that it should be a piecewise continuous function which is consistent with the images and which has the minimum number of discontinuities. To zeroth order, piecewise continuity becomes piecewise constancy. Using this approximation, we first discuss an approach for dealing with such a fronto-parallel shapeless world, and the problems involved therein. We then introduce horizontal and vertical slant to create a first order approximation to piecewise continuity. In particular, we emphasize the following geometric fact: a horizontally slanted surface (i.e., having depth variation in the direction of the separation of the two cameras) will appear horizontally stretched in one image as compared to the other image. Thus, while corresponding two images, N pixels on a scanline in one image may correspond to a different number of pixels M in the other image. This leads to three important modifications to existing stereo algorithms: (a) due to unequal sampling, existing intensity matching metrics must be modified, (b) unequal numbers of pixels in the two images must be allowed to correspond to each other, and (c) the uniqueness constraint, which is often used for detecting occlusions, must be changed to an interval uniqueness constraint. We also discuss the asymmetry between vertical and horizontal slant, and the central role of non-horizontal edges in the context of vertical slant.
Using experiments, we discuss cases where existing algorithms fail, and how the incorporation of these new constraints provides correct results.", "title": "" }, { "docid": "175229c7b756a2ce40f86e27efe28d53", "text": "This paper describes a comparative study of the envelope extraction algorithms for the cardiac sound signal segmentation. In order to extract the envelope curves based on the time elapses of the first and the second heart sounds of cardiac sound signals, three representative algorithms such as the normalized average Shannon energy, the envelope information of Hilbert transform, and the cardiac sound characteristic waveform (CSCW) are introduced. Performance comparison of the envelope extraction algorithms, and the advantages and disadvantages of the methods are examined by some parameters. © 2006 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "fbee148ef2de028cc53a371c27b4d2be", "text": "Desalination is a water-treatment process that separates salts from saline water to produce potable water or water that is low in total dissolved solids (TDS). Globally, the total installed capacity of desalination plants was 61 million m3 per day in 2008 [1]. Seawater desalination accounts for 67% of production, followed by brackish water at 19%, river water at 8%, and wastewater at 6%. Figure 1 shows the worldwide feed-water percentage used in desalination. The most prolific users of desalinated water are in the Arab region, namely, Saudi Arabia, Kuwait, United Arab Emirates, Qatar, Oman, and Bahrain [2].", "title": "" }, { "docid": "4a63f4357885287019095b5b736cd453", "text": "In this paper, we present what we think is an elegant solution to some problems in the discourse-structural modelling of speech attribution. Using mostly examples from the Wall Street Journal Corpus, we show that the approach proposed by Carlson and Marcu (2001) leads to irresolvable dilemmas that can be avoided with a suitable treatment of attribution in an underspecified representation of discourse structure. Most approaches to discourse structure assume that textual coherence can be modelled as trees. In particular, it has been shown that coherent discourse follows the so-called right-frontier constraint (RFC), which essentially ascertains a hierarchical structure without crossed dependencies. We will discuss putative counterexamples to these two assumptions, most of which involve reported speech as in (1) (cited in Wolf and Gibson 2005): (1) “Sure I’ll be polite,” promised one BMW driver who gave his name only as Rudolph. “As long as the trucks and the timid stay out of the left lane.” In (1) the second part of the quote should be linked to the first part (and not the whole first sentence) by a condition relation. If we were to analyse the parenthetical speech reporting clause (“promised one BMW driver ...”) as the nucleus of its host clause (i.e., the quote), the RFC would prevent linkage between the two parts of the quote. If the attribution is analysed as a satellite of the quote, as in Carlson and Marcu (2001), Wolf and Gibson argue, it should be a satellite to both parts of the quote, thus violating treeness. In this paper, we will explore the problems arising from this type of construction and propose a treatment of speech report attributions that we will argue allows us to preserve both treeness and the RFC in building discourse structures.
1 The (non-)treatment of speech attribution in classic Rhetorical Structure Theory (RST) In ‘classic’ Rhetorical Structure Theory (RST; Mann and Thompson 1988), the problems in accommodating speech report attributions do not arise, because classic RST does not separate complements of verbs and parenthetical speech reporting clauses from their host clause. Leaving speech attribution implicit is in line with the general ‘philosophy’ of RST, which aims to represent not all possible links, but the most plausible structure licensed by 1 Markus Egg is now at Humboldt University, Berlin. This manuscript is a slightly updated (2009) version of our paper in the Proceedings of the Workshop on Constraints in Discourse, Maynooth, Ireland 2006 (http://www.constraints-in-discourse.org/cid06/).", "title": "" }, { "docid": "0210a0cd8c530dd181bbae1a5bdd9b1a", "text": "Most of the social media platforms generate a massive amount of raw data that is slow-paced. On the other hand, Internet Relay Chat (IRC) protocol, which has been extensively used by hacker community to discuss and share their knowledge, facilitates fast-paced and real-time text communications. Previous studies of malicious IRC behavior analysis were mostly either offline or batch processing. This results in a long response time for data collection, pre-processing, and threat detection. However, since the threats can use the latest vulnerabilities to exploit systems (e.g. zero-day attack) and which can spread fast using IRC channels. Current IRC channel monitoring techniques cannot provide the required fast detection and alerting. In this paper, we present an alternative approach to overcome this limitation by providing real-time and autonomic threat detection in IRC channels. We demonstrate the capabilities of our approach using as an example the shadow brokers' leak exploit (the exploit leveraged by WannaCry ransomware attack) that was captured and detected by our framework.", "title": "" }, { "docid": "72c79181572c836cb92aac8fe7a14c5d", "text": "When automatic plagiarism detection is carried out considering a reference corpus, a suspicious text is compared to a set of original documents in order to relate the plagiarised text fragments to their potential source. One of the biggest difficulties in this task is to locate plagiarised fragments that have been modified (by rewording, insertion or deletion, for example) from the source text. The definition of proper text chunks as comparison units of the suspicious and original texts is crucial for the success of this kind of applications. Our experiments with the METER corpus show that the best results are obtained when considering low level word n-grams comparisons (n = {2, 3}).", "title": "" } ]
scidocsrr
259ed9eed850bd92677c3cc46029f478
Mining Potential Domain Expertise in Pinterest
[ { "docid": "ccf40417ca3858d69c4cd3fd031ea7c1", "text": "Online social networks (OSNs) have become popular platforms for people to connect and interact with each other. Among those networks, Pinterest has recently become noteworthy for its growth and promotion of visual over textual content. The purpose of this study is to analyze this imagebased network in a gender-sensitive fashion, in order to understand (i) user motivation and usage pattern in the network, (ii) how communications and social interactions happen and (iii) how users describe themselves to others. This work is based on more than 220 million items generated by 683,273 users. We were able to find significant differences w.r.t. all mentioned aspects. We observed that, although the network does not encourage direct social communication, females make more use of lightweight interactions than males. Moreover, females invest more effort in reciprocating social links, are more active and generalist in content generation, and describe themselves using words of affection and positive emotions. Males, on the other hand, are more likely to be specialists and tend to describe themselves in an assertive way. We also observed that each gender has different interests in the network, females tend to make more use of the network’s commercial capabilities, while males are more prone to the role of curators of items that reflect their personal taste. It is important to understand gender differences in online social networks, so one can design services and applications that leverage human social interactions and provide more targeted and relevant user experiences.", "title": "" }, { "docid": "f9b01c707482eebb9af472fd019f56eb", "text": "In this paper we discuss the task of discovering topical influ ence within the online social network T WITTER. The main goal of this research is to discover who the influenti al users are with respect to a certain given topic. For this research we have sampled a portion of the T WIT ER social graph, from which we have distilled topics and topical activity, and constructed a se t of diverse features which we believe are useful in capturing the concept of topical influence. We will use sev eral correlation and classification techniques to determine which features perform best with respect to the TWITTER network. Our findings support the claim that only looking at simple popularity features such a s the number of followers is not enough to capture the concept of topical influence. It appears that mor e int icate features are required.", "title": "" } ]
[ { "docid": "d315aa25c69ad39164c458dabe914417", "text": "The increase of scientific collaboration coincides with the technological and social advancement of social software applications which can change the way we research. Among social software, social network sites have recently gained immense popularity in a hedonic context. This paper focuses on social network sites as an emerging application designed for the specific needs of researchers. To give an overview about these sites we use a data set of 24 case studies and in-depth interviews with the founders of ten social research network sites. The gathered data leads to a first tentative taxonomy and to a definition of SRNS identifying four basic functionalities identity and network management, communication, information management, and collaboration. The sites in the sample correspond to one of the following four types: research directory sites, research awareness sites, research management sites and research collaboration sites. These results conclude with implications for providers of social research network sites.", "title": "" }, { "docid": "3b07476ebb8b1d22949ec32fc42d2d05", "text": "We provide a systematic review of the adaptive comanagement (ACM) literature to (i) investigate how the concept of governance is considered and (ii) examine what insights ACM offers with reference to six key concerns in environmental governance literature: accountability and legitimacy; actors and roles; fit, interplay, and scale; adaptiveness, flexibility, and learning; evaluation and monitoring; and, knowledge. Findings from the systematic review uncover a complicated relationship with evidence of conceptual closeness as well as relational ambiguities. The findings also reveal several specific contributions from the ACM literature to each of the six key environmental governance concerns, including applied strategies for sharing power and responsibility and value of systems approaches in understanding problems of fit. More broadly, the research suggests a dissolving or fuzzy boundary between ACM and governance, with implications for understanding emerging approaches to navigate social-ecological system change. Future research opportunities may be found at the confluence of ACM and environmental governance scholarship, such as identifying ways to build adaptive capacity and encouraging the development of more flexible governance arrangements.", "title": "" }, { "docid": "9d33565dbd5148730094a165bb2e968f", "text": "The demand for greater battery life in low-power consumer electronics and implantable medical devices presents a need for improved energy efficiency in the management of small rechargeable cells. This paper describes an ultra-compact analog lithium-ion (Li-ion) battery charger with high energy efficiency. The charger presented here utilizes the tanh basis function of a subthreshold operational transconductance amplifier to smoothly transition between constant-current and constant-voltage charging regimes without the need for additional area- and power-consuming control circuitry. Current-domain circuitry for end-of-charge detection negates the need for precision-sense resistors in either the charging path or control loop. We show theoretically and experimentally that the low-frequency pole-zero nature of most battery impedances leads to inherent stability of the analog control loop. 
The circuit was fabricated in an AMI 0.5-μm complementary metal-oxide semiconductor process, and achieves 89.7% average power efficiency and an end voltage accuracy of 99.9% relative to the desired target 4.2 V, while consuming 0.16 mm2 of chip area. To date and to the best of our knowledge, this design represents the most area-efficient and most energy-efficient battery charger circuit reported in the literature.", "title": "" }, { "docid": "0e48de6dc8d1f51eb2a7844d4d67b8f5", "text": "Vygotsky asserted that the student who had mastered algebra had attained “a new higher plane of thought”, a level of abstraction and generalization which transformed the meaning of the lower (arithmetic) level. He also affirmed the importance of the mastery of scientific concepts for the development of the ability to think theoretically, and emphasized the mediating role of semiotic forms and symbol systems in developing this ability. Although historically in mathematics and traditionally in education, algebra followed arithmetic, Vygotskian theory supports the reversal of this sequence in the service of orienting children to the most abstract and general level of understanding initially. This organization of learning activity for the development of algebraic thinking is very different from the introduction of elements of algebra into the study of arithmetic in the early grades. The intended theoretical (algebraic) understanding is attained through appropriation of psychological tools, in the form of specially designed schematics, whose mastery is not merely incidental to but the explicit focus of instruction. The author’s research in implementing Davydov’s Vygotskian-based elementary mathematics curriculum in the U.S. suggests that these characteristics function synergistically to develop algebraic understanding and computational competence as well. Kurzreferat: Vygotsky ging davon aus, dass Lernende, denen es gelingt, Algebra zu beherrschen, „ein höheres gedankliches Niveau” erreicht hätten, eine Ebene von Abstraktion und Generalisierung, welche die Bedeutung der niederen (arithmetischen) Ebene verändert. Er bestätigte auch die Relevanz der Beherrschung von wissenschaftlichen Begriffen für die Entwicklung der Fähigkeit, theoretisch zu denken und betonte dabei die vermittelnde Rolle von semiotischen Formen und Symbolsystemen für die Ausformung dieser Fähigkeit. Obwohl mathematik-his tor isch und t radi t ionel l erziehungswissenschaftlich betrachtet, Algebra der Arithmetik folgte, stützt Vygotski’s Theorie die Umkehrung dieser Sequenz bei dem Bemühen, Kinder an das abstrakteste und allgemeinste Niveau des ersten Verstehens heranzuführen. Diese Organisation von Lernaktivitäten für die Ausbildung algebraischen Denkens unterscheidet sich erheblich von der Einführung von Algebra-Elementen in das Lernen von Arithmetik während der ersten Schuljahre. Das beabsichtigte theoretische (algebraische) Verstehen wird erreicht durch die Aneignung psychologischer Mittel, und zwar in Form von dafür speziell entwickelten Schemata, deren Beherrschung nicht nur beiläufig erfolgt, sondern Schwerpunkt des Unterrichts ist. Die im Beitrag beschriebenen Forschungen zur Implementierung von Davydov’s elementarmathematischen Curriculum in den Vereinigten Staaten, das auf Vygotsky basiert, legt die Vermutung nahe, dass diese Charakteristika bei der Entwicklung von algebraischem Verstehen und von Rechenkompetenzen synergetisch funktionieren. ZDM-Classification: C30, D30, H20 l. 
Historical Context Russian psychologist Lev Vygotsky stated clearly his perspective on algebraic thinking. Commenting on its development within the structure of the Russian curriculum in the early decades of the twentieth century,", "title": "" }, { "docid": "b3a148abb00e35e59a7d05289595d438", "text": "CONTEXT\nMajor depressive disorder (MDD) occurs in 15% to 23% of patients with acute coronary syndromes and constitutes an independent risk factor for morbidity and mortality. However, no published evidence exists that antidepressant drugs are safe or efficacious in patients with unstable ischemic heart disease.\n\n\nOBJECTIVE\nTo evaluate the safety and efficacy of sertraline treatment of MDD in patients hospitalized for acute myocardial infarction (MI) or unstable angina and free of other life-threatening medical conditions.\n\n\nDESIGN AND SETTING\nRandomized, double-blind, placebo-controlled trial conducted in 40 outpatient cardiology centers and psychiatry clinics in the United States, Europe, Canada, and Australia. Enrollment began in April 1997 and follow-up ended in April 2001.\n\n\nPATIENTS\nA total of 369 patients with MDD (64% male; mean age, 57.1 years; mean 17-item Hamilton Depression [HAM-D] score, 19.6; MI, 74%; unstable angina, 26%).\n\n\nINTERVENTION\nAfter a 2-week single-blind placebo run-in, patients were randomly assigned to receive sertraline in flexible dosages of 50 to 200 mg/d (n = 186) or placebo (n = 183) for 24 weeks.\n\n\nMAIN OUTCOME MEASURES\nThe primary (safety) outcome measure was change from baseline in left ventricular ejection fraction (LVEF); secondary measures included surrogate cardiac measures and cardiovascular adverse events, as well as scores on the HAM-D scale and Clinical Global Impression Improvement scale (CGI-I) in the total randomized sample, in a group with any prior history of MDD, and in a more severe MDD subgroup defined a priori by a HAM-D score of at least 18 and history of 2 or more prior episodes of MDD.\n\n\nRESULTS\nSertraline had no significant effect on mean (SD) LVEF (sertraline: baseline, 54% [10%]; week 16, 54% [11%]; placebo: baseline, 52% [13%]; week 16, 53% [13%]), treatment-emergent increase in ventricular premature complex (VPC) runs (sertraline: 13.1%; placebo: 12.9%), QTc interval greater than 450 milliseconds at end point (sertraline: 12%; placebo: 13%), or other cardiac measures. All comparisons were statistically nonsignificant (P> or = .05). The incidence of severe cardiovascular adverse events was 14.5% with sertraline and 22.4% with placebo. In the total randomized sample, the CGI-I (P =.049), but not the HAM-D (P =.14), favored sertraline. The CGI-I responder rates for sertraline were significantly higher than for placebo in the total sample (67% vs 53%; P =.01), in the group with at least 1 prior episode of depression (72% vs 51%; P =.003), and in the more severe MDD group (78% vs 45%; P =.001). In the latter 2 groups, both CGI-I and HAM-D measures were significantly better in those assigned to sertraline.\n\n\nCONCLUSION\nOur results suggest that sertraline is a safe and effective treatment for recurrent depression in patients with recent MI or unstable angina and without other life-threatening medical conditions.", "title": "" }, { "docid": "c12d27988e70e9b3e6987ca2f0ca8bca", "text": "In this tutorial, we introduce the basic theory behind Stega nography and Steganalysis, and present some recent algorithms and devel opm nts of these fields. 
We show how the existing techniques used nowadays are related to Image Processing and Computer Vision, point out several trendy applications of Steganography and Steganalysis, and list a few great research opportunities just waiting to be addressed.", "title": "" }, { "docid": "6247c827c6fdbc976b900e69a9eb275c", "text": "Despite the fact that commercial computer systems have been in existence for almost three decades, many systems in the process of being implemented may be classed as failures. One of the factors frequently cited as important to successful system development is involving users in the design and implementation process. This paper reports the results of a field study, conducted on data from forty-two systems, that investigates the role of user involvement and factors affecting the employment of user involvement on the success of system development. Path analysis was used to investigate both the direct effects of the contingent variables on system success and the effect of user involvement as a mediating variable between the contingent variables and system success. The results show that high system complexity and constraints on the resources available for system development are associated with less successful systems.", "title": "" }, { "docid": "0fd7a70c0d46100d32e0bcb0f65528e3", "text": "INTRODUCTION Document clustering is an automatic grouping of text documents into clusters so that documents within a cluster have high similarity in comparison to one another, but are dissimilar to documents in other clusters. Unlike document classification (Wang, Zhou, and He, 2001), no labeled documents are provided in clustering; hence, clustering is also known as unsupervised learning. Hierarchical document clustering organizes clusters into a tree or a hierarchy that facilitates browsing. The parent-child relationship among the nodes in the tree can be viewed as a topic-subtopic relationship in a subject hierarchy such as the Yahoo! directory. This chapter discusses several special challenges in hierarchical document clustering: high dimensionality, high volume of data, ease of browsing, and meaningful cluster labels. State-of-the-art document clustering algorithms are reviewed: the partitioning method (Steinbach, Karypis, and Kumar, 2000), agglomerative and divisive hierarchical clustering (Kaufman and Rousseeuw, 1990), and frequent itemset-based hierarchical clustering (Fung, Wang, and Ester, 2003). The last one, which was recently developed by the authors, is further elaborated since it has been specially designed to address the hierarchical document clustering problem.", "title": "" }, { "docid": "add30dc8d14a26eba48dbe5baaaf4169", "text": "The authors investigated whether intensive musical experience leads to enhancements in executive processing, as has been shown for bilingualism. Young adults who were bilinguals, musical performers (instrumentalists or vocalists), or neither completed 3 cognitive measures and 2 executive function tasks based on conflict. Both executive function tasks included control conditions that assessed performance in the absence of conflict. All participants performed equivalently for the cognitive measures and the control conditions of the executive function tasks, but performance diverged in the conflict conditions. In a version of the Simon task involving spatial conflict between a target cue and its position, bilinguals and musicians outperformed monolinguals, replicating earlier research with bilinguals.
In a version of the Stroop task involving auditory and linguistic conflict between a word and its pitch, the musicians performed better than the other participants. Instrumentalists and vocalists did not differ on any measure. Results demonstrate that extended musical experience enhances executive control on a nonverbal spatial task, as previously shown for bilingualism, but also enhances control in a more specialized auditory task, although the effect of bilingualism did not extend to that domain.", "title": "" }, { "docid": "2643c7960df0aed773aeca6e04fde67e", "text": "Many studies utilizing dogs, cats, birds, fish, and robotic simulations of animals have tried to ascertain the health benefits of pet ownership or animal-assisted therapy in the elderly. Several small unblinded investigations outlined improvements in behavior in demented persons given treatment in the presence of animals. Studies piloting the use of animals in the treatment of depression and schizophrenia have yielded mixed results. Animals may provide intangible benefits to the mental health of older persons, such as relief social isolation and boredom, but these have not been formally studied. Several investigations of the effect of pets on physical health suggest animals can lower blood pressure, and dog walkers partake in more physical activity. Dog walking, in epidemiological studies and few preliminary trials, is associated with lower complication risk among patients with cardiovascular disease. Pets may also have harms: they may be expensive to care for, and their owners are more likely to fall. Theoretically, zoonotic infections and bites can occur, but how often this occurs in the context of pet ownership or animal-assisted therapy is unknown. Despite the poor methodological quality of pet research after decades of study, pet ownership and animal-assisted therapy are likely to continue due to positive subjective feelings many people have toward animals.", "title": "" }, { "docid": "6400b594b7a7624cf638961ee904e7d0", "text": "As the demands for portable electronic products increase, through-silicon-via (TSV)-based three-dimensional integrated-circuit (3-D IC) integration is becoming increasingly important. Micro-bump-bonded interconnection is one approach that has great potential to meet this requirement. In this paper, a 30-μm pitch chip-to-chip (C2C) interconnection with Cu/Ni/SnAg micro bumps was assembled using the gap-controllable thermal bonding method. The bonding parameters were evaluated by considering the variation in the contact resistance after bonding. The effects of the bonding time and temperature on the IMC thickness of the fabricated C2C interconnects are also investigated to determine the correlation between its thickness and reliability performance. The reliability of the C2C interconnects with the selected underfill was studied by performing a -55°C- 125°C temperature cycling test (TCT) for 2000 cycles and a 150°C high-temperature storage (HTS) test for 2000 h. The interfaces of the failed samples in the TCT and HTS tests are then inspected by scanning electron microscopy (SEM), which is utilized to obtain cross-sectional images. To validate the experimental results, finite-element (FE) analysis is also conducted to elucidate the interconnect reliability of the C2C interconnection. 
Results show that consistent bonding quality and stable contact resistance of the fine-pitch C2C interconnection with the micro bumps were achieved by giving the appropriate choice of the bonding parameters, and those bonded joints can thus serve as reliable interconnects for use in 3-D chip stacking.", "title": "" }, { "docid": "75d57c2f82fb7852feef4c7bcde41590", "text": "This paper studies the causal impact of sibling gender composition on participation in Science, Technology, Engineering, and Mathematics (STEM) education. I focus on a sample of first-born children who all have a younger biological sibling, using rich administrative data on the total Danish population. The randomness of the secondborn siblings’ gender allows me to estimate the causal effect of having an opposite sex sibling relative to a same sex sibling. The results are robust to family size and show that having a second-born opposite sex sibling makes first-born men more and women less likely to enroll in a STEM program. Although sibling gender composition has no impact on men’s probability of actually completing a STEM degree, it has a powerful effect on women’s success within these fields: women with a younger brother are eleven percent less likely to complete any field-specific STEM education relative to women with a sister. I provide evidence that parents of mixed sex children gender-specialize their parenting more than parents of same sex children. These findings indicate that the family environment plays in important role for shaping interests in STEM fields. JEL classification: I2, J1, J3", "title": "" }, { "docid": "c7351e8ce6d32b281d5bd33b245939c6", "text": "In TREC 2002 the Berkeley group participated only in the English-Arabic cross-language retrieval (CLIR) track. One Arabic monolingual run and three English-Arabic cross-language runs were submitted. Our approach to the crosslanguage retrieval was to translate the English topics into Arabic using online English-Arabic machine translation systems. The four official runs are named as BKYMON, BKYCL1, BKYCL2, and BKYCL3. The BKYMON is the Arabic monolingual run, and the other three runs are English-to-Arabic cross-language runs. This paper reports on the construction of an Arabic stoplist and two Arabic stemmers, and the experiments on Arabic monolingual retrieval, English-to-Arabic cross-language retrieval.", "title": "" }, { "docid": "e13798bd8605c3c679f6e72df515d35a", "text": "After more than a decade of research in Model-Driven Engineering (MDE), the state-of-the-art and the state-of-the-practice in MDE has significantly progressed. Therefore, during this workshop we raised the question of how to proceed next, and we identified a number of future challenges in the field of MDE. The objective of the workshop was to provide a forum for discussing the future of MDE research and practice. Seven presenters shared their vision on the future challenges in the field of MDE. Four breakout groups discussed scalability, consistency and co-evolution, formal foundations, and industrial adoption, respectively. These themes were identified as major categories of challenges by the participants. 
This report summarises the different presentations, the MDE challenges identified by the workshop participants, and the discussions of the breakout groups.", "title": "" }, { "docid": "cd1bf567e2e8bfbf460abb3ac1a0d4a5", "text": "Memory channel contention is a critical performance bottleneck in modern systems that have highly parallelized processing units operating on large data sets. The memory channel is contended not only by requests from different user applications (CPU access) but also by system requests for peripheral data (IO access), usually controlled by Direct Memory Access (DMA) engines. Our goal, in this work, is to improve system performance by eliminating memory channel contention between CPU accesses and IO accesses. To this end, we propose a hardware-software cooperative data transfer mechanism, Decoupled DMA (DDMA) that provides a specialized low-cost memory channel for IO accesses. In our DDMA design, main memory has two independent data channels, of which one is connected to the processor (CPU channel) and the other to the IO devices (IO channel), enabling CPU and IO accesses to be served on different channels. System software or the compiler identifies which requests should be handled on the IO channel and communicates this to the DDMA engine, which then initiates the transfers on the IO channel. By doing so, our proposal increases the effective memory channel bandwidth, thereby either accelerating data transfers between system components, or providing opportunities to employ IO performance enhancement techniques (e.g., aggressive IO prefetching) without interfering with CPU accesses. We demonstrate the effectiveness of our DDMA framework in two scenarios: (i) CPU-GPU communication and (ii) in-memory communication (bulk data copy/initialization within the main memory). By effectively decoupling accesses for CPU-GPU communication and in-memory communication from CPU accesses, our DDMA-based design achieves significant performance improvement across a wide variety of system configurations (e.g., 20% average performance improvement on a typical 2-channel 2-rank memory system).", "title": "" }, { "docid": "aecd7a910b52b6e34e10f10a12d0f966", "text": "Language processing is an example of implicit learning of multiple statistical cues that provide probabilistic information regarding word structure and use. Much of the current debate about language embodiment is devoted to how action words are represented in the brain, with motor cortex activity evoked by these words assumed to selectively reflect conceptual content and/or its simulation. We investigated whether motor cortex activity evoked by manual action words (e.g., caress) might reflect sensitivity to probabilistic orthographic–phonological cues to grammatical category embedded within individual words. We first review neuroimaging data demonstrating that nonwords evoke activity much more reliably than action words along the entire motor strip, encompassing regions proposed to be action category specific. Using fMRI, we found that disyllabic words denoting manual actions evoked increased motor cortex activity compared with non-body-part-related words (e.g., canyon), activity which overlaps that evoked by observing and executing hand movements. This result is typically interpreted in support of language embodiment.
Crucially, we also found that disyllabic nonwords containing endings with probabilistic cues predictive of verb status (e.g., -eve) evoked increased activity compared with nonwords with endings predictive of noun status (e.g., -age) in the identical motor area. Thus, motor cortex responses to action words cannot be assumed to selectively reflect conceptual content and/or its simulation. Our results clearly demonstrate motor cortex activity reflects implicit processing of ortho-phonological statistical regularities that help to distinguish a word's grammatical class.", "title": "" }, { "docid": "07c9bf0432e67580b7e19a2889aa80a9", "text": "We give a detailed account of the one-way quantum computer, a scheme of quantum computation that consists entirely of one-qubit measurements on a particular class of entangled states, the cluster states. We prove its universality, describe why its underlying computational model is different from the network model of quantum computation, and relate quantum algorithms to mathematical graphs. Further we investigate the scaling of required resources and give a number of examples for circuits of practical interest such as the circuit for quantum Fourier transformation and for the quantum adder. Finally, we describe computation with clusters of finite size.", "title": "" }, { "docid": "1afd50a91b67bd1eab0db1c2a19a6c73", "text": "In this paper we present syntactic characterization of temporal formulas that express various properties of interest in the verification of concurrent programs. Such a characterization helps us in choosing the right techniques for proving correctness with respect to these properties. The properties that we consider include safety properties, liveness properties and fairness properties. We also present algorithms for checking if a given temporal formula expresses any of these properties.", "title": "" }, { "docid": "6be09c03c23168af7d8f21feb905020e", "text": "Software test effort estimation has always been a challenge for the software practitioners, because it consumes approximately half of the overall development costs of any software project. In order to provide effective software maintenance it is necessary to carry out the regression testing of the software. Hence, this research work aims to propose a measure for the estimation of the software test effort in regression testing. Since, the effort required developing or test software shall depend on various major contributing factors like, therefore, the proposed measure first estimates the change type of any software, make test cases for any software, then calculate execution complexity of any software and tester rank. In general, the regression testing takes more time and cost to perform it. Therefore, the effort estimation in regression testing is utmost required in order to compute man-hour for any software. In order to analyze the validity of the proposed test effort estimation measure, the measure is compared for various ranges of problem from small, mid and large size program to real life software projects. The result obtained shows that, the proposed test measure is a comprehensive one and compares well with other prevalent measures proposed in the past.", "title": "" }, { "docid": "d2225efeffbb885bc9e3e9322c214a2e", "text": "A 40-Gb/s transimpedance amplifier (TIA) is proposed using multistage inductive-series peaking for low group-delay variation. 
A transimpedance limit for multistage TIAs is derived, and a bandwidth-enhancement technique using inductive-series π-networks is analyzed. A design method for low group delay constrained to 3-dB bandwidth enhancement is suggested. The TIA is implemented in a 0.13-μm CMOS process and achieves a 3-dB bandwidth of 29 GHz. The transimpedance gain is 50 dB·Ω, and the transimpedance group-delay variation is less than 16 ps over the 3-dB bandwidth. The chip occupies an area of 0.4 mm², including the pads, and consumes 45.7 mW from a 1.5-V supply. The measured TIA demonstrates a transimpedance figure of merit of 200.7 Ω/pJ.", "title": "" } ]
scidocsrr