column               type            observed range
query_id             stringlengths   32 – 32
query                stringlengths   6 – 5.38k
positive_passages    listlengths     1 – 22
negative_passages    listlengths     9 – 100
subset               stringclasses   7 values
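
The schema above is a standard query–passage retrieval layout: each row pairs a query (identified by a 32-character query_id) with lists of judged positive and negative passages, where every passage is a dict with "docid", "text", and "title" fields, plus a "subset" label naming the source collection. Below is a minimal sketch of loading and inspecting rows of this shape with the Hugging Face datasets library; the repository identifier is a placeholder, since the source does not name one.

```python
# Minimal sketch, assuming the data is published as a Hugging Face dataset.
# "org/retrieval-dataset" is a hypothetical repository id, not the real one.
from datasets import load_dataset

ds = load_dataset("org/retrieval-dataset", split="train")

for record in ds.select(range(3)):  # inspect the first few rows
    print(record["query_id"], "|", record["query"][:60])
    # positive_passages / negative_passages are lists of
    # {"docid": ..., "text": ..., "title": ...} dicts
    print("  positives:", len(record["positive_passages"]),
          " negatives:", len(record["negative_passages"]))
    print("  subset:", record["subset"])
```
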
query_id: 136a7ade2c802609e5a827cc95f83190
query: A Novel Continuum Trunk Robot Based on Contractor Muscles
[ { "docid": "8bb465b2ec1f751b235992a79c6f7bf1", "text": "Continuum robotics has rapidly become a rich and diverse area of research, with many designs and applications demonstrated. Despite this diversity in form and purpose, there exists remarkable similarity in the fundamental simplified kinematic models that have been applied to continuum robots. However, this can easily be obscured, especially to a newcomer to the field, by the different applications, coordinate frame choices, and analytical formalisms employed. In this paper we review several modeling approaches in a common frame and notational convention, illustrating that for piecewise constant curvature, they produce identical results. This discussion elucidates what has been articulated in different ways by a number of researchers in the past several years, namely that constant-curvature kinematics can be considered as consisting of two separate submappings: one that is general and applies to all continuum robots, and another that is robot-specific. These mappings are then developed both for the singlesection and for the multi-section case. Similarly, we discuss the decomposition of differential kinematics (the robot’s Jacobian) into robot-specific and robot-independent portions. The paper concludes with a perspective on several of the themes of current research that are shaping the future of continuum robotics.", "title": "" } ]
[ { "docid": "467d953d489ca8f7d75c798d6e948a86", "text": "The ability to detect recent natural selection in the human population would have profound implications for the study of human history and for medicine. Here, we introduce a framework for detecting the genetic imprint of recent positive selection by analysing long-range haplotypes in human populations. We first identify haplotypes at a locus of interest (core haplotypes). We then assess the age of each core haplotype by the decay of its association to alleles at various distances from the locus, as measured by extended haplotype homozygosity (EHH). Core haplotypes that have unusually high EHH and a high population frequency indicate the presence of a mutation that rose to prominence in the human gene pool faster than expected under neutral evolution. We applied this approach to investigate selection at two genes carrying common variants implicated in resistance to malaria: G6PD and CD40 ligand. At both loci, the core haplotypes carrying the proposed protective mutation stand out and show significant evidence of selection. More generally, the method could be used to scan the entire genome for evidence of recent positive selection.", "title": "" }, { "docid": "ec7931f1a56bf7d4dd6cc1a5cb2d0625", "text": "Modern life is intimately linked to the availability of fossil fuels, which continue to meet the world's growing energy needs even though their use drives climate change, exhausts finite reserves and contributes to global political strife. Biofuels made from renewable resources could be a more sustainable alternative, particularly if sourced from organisms, such as algae, that can be farmed without using valuable arable land. Strain development and process engineering are needed to make algal biofuels practical and economically viable.", "title": "" }, { "docid": "d01692a4ee83531badacea6658b74d8f", "text": "Question Answering (QA) research for factoid questions has recently achieved great success. Presently, QA systems developed for European, Middle Eastern and Asian languages are capable of providing answers with reasonable accuracy. However, Bengali being among themost spoken languages in theworld, no factoid question answering system is available for Bengali till date. This paper describes the first attempt on building a factoid question answering system for Bengali language. The challenges in developing a question answering system for Bengali have been discussed. Extraction and ranking of relevant sentences have also been proposed. Also extraction strategy of the ranked answers from the relevant sentences are suggested for Bengali question answering system.", "title": "" }, { "docid": "e8fee9f93106ce292c89c26be373030f", "text": "As a non-invasive imaging modality, optical coherence tomography (OCT) can provide micrometer-resolution 3D images of retinal structures. Therefore it is commonly used in the diagnosis of retinal diseases associated with edema in and under the retinal layers. In this paper, a new framework is proposed for the task of fluid segmentation and detection in retinal OCT images. Based on the raw images and layers segmented by a graph-cut algorithm, a fully convolutional neural network was trained to recognize and label the fluid pixels. Random forest classification was performed on the segmented fluid regions to detect and reject the falsely labeled fluid regions. 
The leave-one-out cross validation experiments on the RETOUCH database show that our method performs well in both segmentation (mean Dice: 0.7317) and detection (mean AUC: 0.985) tasks.", "title": "" }, { "docid": "04756d4dfc34215c8acb895ecfcfb406", "text": "The author describes five separate projects he has undertaken in the intersection of computer science and Canadian income tax law. They are:A computer-assisted instruction (CAI) course for teaching income tax, programmed using conventional CAI techniques;\nA “document modeling” computer program for generating the documentation for a tax-based transaction and advising the lawyer-user as to what decisions should be made and what the tax effects will be, programmed in a conventional language;\nA prototype expert system for determining the income tax effects of transactions and tax-defined relationships, based on a PROLOG representation of the rules of the Income Tax Act;\nAn intelligent CAI (ICAI) system for generating infinite numbers of randomized quiz questions for students, computing the answers, and matching wrong answers to particular student errors, based on a PROLOG representation of the rules of the Income Tax Act; and\nA Hypercard stack for providing information about income tax, enabling both education and practical research to follow the user's needs path.\n\nThe author shows that non-AI approaches are a way to produce packages quickly and efficiently. Their primary disadvantage is the massive rewriting required when the tax law changes. AI approaches based on PROLOG, on the other hand, are harder to develop to a practical level but will be easier to audit and maintain. The relationship between expert systems and CAI is discussed.", "title": "" }, { "docid": "1e82e123cacca01a84a8ea2fef641d98", "text": "We propose a new class of convex penalty functions, called variational Gram functions (VGFs), that can promote pairwise relations, such as orthogonality, among a set of vectors in a vector space. These functions can serve as regularizers in convex optimization problems arising from hierarchical classification, multitask learning, and estimating vectors with disjoint supports, among other applications. We study necessary and sufficient conditions under which a VGF is convex, and give a characterization of its subdifferential. We show how to compute its proximal operator, and discuss efficient optimization algorithms for regularized loss minimization problems where the loss admits a simple variational representation and the regularizer is a VGF. We also establish a general representer theorem for such learning problems. Lastly, numerical experiments on a hierarchical classification problem are presented to demonstrate the effectiveness of VGFs and the associated optimization algorithms.", "title": "" }, { "docid": "9083b448b8bd82705db99c2e0104f9a7", "text": "In free-viewpoint video, there is a recent trend to represent scene objects as solids rather than using multiple depth maps. Point clouds have been used in computer graphics for a long time, and with the recent possibility of real-time capturing and rendering, point clouds have been favored over meshes in order to save computation. Each point in the cloud is associated with its 3D position and its color. We devise a method to compress the colors in point clouds, which is based on a hierarchical transform and arithmetic coding. The transform is a hierarchical sub-band transform that resembles an adaptive variation of a Haar wavelet. 
The arithmetic encoding of the coefficients assumes Laplace distributions, one per sub-band. The Laplace parameter for each distribution is transmitted to the decoder using a custom method. The geometry of the point cloud is encoded using the well-established octtree scanning. Results show that the proposed solution performs comparably with the current state-of-the-art, in many occasions outperforming it, while being much more computationally efficient. We believe this paper represents the state of the art in intra-frame compression of point clouds for real-time 3D video.", "title": "" }, { "docid": "4a94fb7432d172d5c1ce1e5429cc38b3", "text": "OBJECTIVE\nAssociations between eminent creativity and bipolar disorders have been reported, but there are few data relating non-eminent creativity to bipolar disorders in clinical samples. We assessed non-eminent creativity in euthymic bipolar (BP) and unipolar major depressive disorder (MDD) patients, creative discipline controls (CC), and healthy controls (HC).\n\n\nMETHODS\n49 BP, 25 MDD, 32 CC, and 47 HC (all euthymic) completed four creativity measures yielding six parameters: the Barron-Welsh Art Scale (BWAS-Total, and two subscales, BWAS-Dislike and BWAS-Like), the Adjective Check List Creative Personality Scale (ACL-CPS), and the Torrance Tests of Creative Thinking--Figural (TTCT-F) and Verbal (TTCT-V) versions. Mean scores on these instruments were compared across groups.\n\n\nRESULTS\nBP and CC (but not MDD) compared to HC scored significantly higher on BWAS-Total (45% and 48% higher, respectively) and BWAS-Dislike (90% and 88% higher, respectively), but not on BWAS-Like. CC compared to MDD scored significantly higher (12% higher) on TTCT-F. For all other comparisons, creativity scores did not differ significantly between groups.\n\n\nCONCLUSIONS\nWe found BP and CC (but not MDD) had similarly enhanced creativity on the BWAS-Total (driven by an increase on the BWAS-Dislike) compared to HC. Further studies are needed to determine the mechanisms of enhanced creativity and how it relates to clinical (e.g. temperament, mood, and medication status) and preclinical (e.g. visual and affective processing substrates) parameters.", "title": "" }, { "docid": "7b7924ccd60d01468f6651b9226cbed0", "text": "Leucine-rich repeat kinase 2 (LRRK2) mutations have been implicated in autosomal dominant parkinsonism, consistent with typical levodopa-responsive Parkinson's disease. The gene maps to chromosome 12q12 and encodes a large, multifunctional protein. To identify novel LRRK2 mutations, we have sequenced 100 affected probands with family history of parkinsonism. Semiquantitative analysis was also performed in all probands to identify LRRK2 genomic multiplication or deletion. In these kindreds, referred from movement disorder clinics in many parts of Europe, Asia, and North America, parkinsonism segregates as an autosomal dominant trait. All 51 exons of the LRRK2 gene were analyzed and the frequency of all novel sequence variants was assessed within controls. The segregation of mutations with disease has been examined in larger, multiplex families. Our study identified 26 coding variants, including 15 nonsynonymous amino acid substitutions of which three affect the same codon (R1441C, R1441G, and R1441H). Seven of these coding changes seem to be pathogenic, as they segregate with disease and were not identified within controls. 
No multiplications or deletions were identified.", "title": "" }, { "docid": "65ac52564041b0c2e173560d49ec762f", "text": "Constructionism can be a powerful framework for teaching complex content to novices. At the core of constructionism is the suggestion that by enabling learners to build creative artifacts that require complex content to function, those learners will have opportunities to learn this content in contextualized, personally-meaningful ways. In this paper, we investigate the relevance of a set of approaches broadly called “educational data mining” or “learning analytics” (henceforth, EDM) to help provide a basis for quantitative research on constructionist learning which does not abandon the richness seen as essential by many researchers in that paradigm. We suggest that EDM may have the potential to support research that is meaningful and useful both to researchers working actively in the constructionist tradition but also to wider communities. Finally, we explore potential collaborations between researchers in the EDM and constructionist traditions; such collaborations have the potential to enhance the ability of constructionist researchers to make rich inference about learning and learners, while providing EDM researchers with many interesting new research questions and challenges. In recent years, project-based, student-centered approaches to education have gained prominence, due in part to an increased demand for higher-level skills in the job market (Levi and Murname, 2004), positive research findings on the effectiveness of such approaches (Barron, Pearson, et al., 2008), and a broader acceptance in public policy circles, as shown, for example, by the Next Generation Science Standards (NGSS Lead States, 2013). While several approaches for this type of learning exist, Constructionism is one of the most popular and well-developed ones (Papert, 1980). In this paper, we investigate the relevance of a set of approaches called “educational data mining” or “learning analytics” (henceforth abbreviated as ‘EDM’) (R. Baker & Yacef, 2009; Romero & Ventura, 2010a; R. Baker & Siemens, in press) to help provide a basis for quantitative research on constructionist learning which does not abandon the richness seen as essential by many researchers in that paradigm. As such, EDM may have the potential to support research that is meaningful and useful both to researchers working actively in the constructionist tradition and to the wider community of learning scientists and policymakers. EDM, broadly, is a set of methods that apply data mining and machine learning techniques such as prediction, classification, and discovery of latent structural regularities to rich, voluminous, and idiosyncratic educational data, potentially similar to those data generated by many constructionist learning environments which allows students to explore and build their own artifacts, computer programs, and media pieces. As such, we identify four axes in which EDM methods may be helpful for constructionist research: 1. EDM methods do not require constructionists to abandon deep qualitative analysis for simplistic summative or confirmatory quantitative analysis; 2. EDM methods can generate different and complementary new analyses to support qualitative research; 3. By enabling precise formative assessments of complex constructs, EDM methods can support an increase in methodological rigor and replicability; 4. EDM can be used to present comprehensible and actionable data to learners and teachers in situ. 
In order to investigate those axes, we start by describing our perspective on compatibilities and incompatibilities between constructionism and EDM. At the core of constructionism is the suggestion that by enabling learners to build creative artifacts that require complex content to function, those learners will have opportunities to learn that complex content in connected, meaningful ways. Constructionist projects often emphasize making those artifacts (and often data) public, socially relevant, and personally meaningful to learners, and encourage working in social spaces such that learners engage each other to accelerate the learning process. diSessa and Cobb (2004) argue that constructionism serves a framework for action, as it describes its own praxis (i.e., how it matches theory to practice). The learning theory supporting constructionism is classically constructivist, combining concepts from Piaget and Vygotsky (Fosnot, 2005). As constructionism matures as a constructivist framework for action and expands in scale, constructionist projects are becoming both more complex (Reynolds & Caperton, 2011), more scalable (Resnick, Maloney, et al., 2009), and more affordable for schools following significant development in low cost “construction” technologies such as robotics and 3D printers. As such, there have been increasing opportunities to learn more about how students learn in constructionist contexts, advancing the science of learning. These discoveries will have the potential to improve the quality of all constructivist learning experiences. For example, Wilensky and Reisman (2006) have shown how constructionist modeling and simulation can make science learning more accessible, Resnick (1998) has shown how constructionism can reframe programming as art at scale, Buechley & Eisenberg (2008) have used e-textiles to engage female students in robotics, Eisenberg (2011) and Blikstein (2013, 2014) use constructionist digital fabrication to successfully teach programming, engineering, and electronics in a novel, integrated way. The findings of these research and design projects have the potential to be useful to a wide external community of teachers, researchers, practitioners, and other stakeholders. However, connecting findings from the constructionist tradition to the goals of policymakers can be challenging, due to the historical differences in methodology and values between these communities. The resources needed to study such interventions at scale are considerable, given the need to carefully document, code, and analyze each student’s work processes and artifacts. The designs of constructionist research often result in findings that do not map to what researchers, outside interests, and policymakers are expecting, in contrast to conventional controlled studies, which are designed to (more conclusively) answer a limited set of sharply targeted research questions. Due the lack of a common ground to discuss benefits and scalability of constructionist and project-based designs, these designs have been too frequently sidelined to niche institutions such as private schools, museums, or atypical public schools. To understand what the role EDM methods can play in constructionist research, we must frame what we mean by constructionist research more precisely. We follow Papert and Harel (1991) in their situating of constructionism, but they do not constrain the term to one formal definition. 
The definition is further complicated by the fact that constructionism has many overlaps with other research and design traditions, such as constructivism and socio-constructivism themselves, as well as project-based pedagogies and inquiry-based designs. However, we believe that it is possible to define the subset of constructionism amenable to EDM, a focus we adopt in this article for brevity. In this paper, we focus on the constructionist literature dealing with students learning to construct understandings by constructing (physical or virtual) artifacts, where the students' learning environments are designed and constrained such that building artifacts in/with that environment is designed to help students construct their own understandings. In other words, we are focusing on creative work done in computational environments designed to foster creative and transformational learning, such as NetLogo (Wilensky, 1999), Scratch (Resnick, Maloney, et al., 2009), or LEGO Mindstorms. This sub-category of constructionism can and does generate considerable formative and summative data. It also has the benefit of having a history of success in the classroom. From Papert’s seminal (1972) work through today, constructionist learning has been shown to promote the development of deep understanding of relatively complex content, with many examples ranging from mathematics (Harel, 1990; Wilensky, 1996) to history (Zahn, Krauskopf, Hesse, & Pea, 2010). However, constructionist learning environments, ideas, and findings have yet to reach the majority of classrooms and have had incomplete influence in the broader education research community. There are several potential reasons for this. One of them may be a lack of demonstration that findings are generalizable across populations and across specific content. Another reason is that constructionist activities are seen to be timeconsuming for teachers (Warschauer & Matuchniak, 2010), though, in practice, it has been shown that supporting understanding through project-based work could actually save time (Fosnot, 2005) and enable classroom dynamics that may streamline class preparation (e.g., peer teaching or peer feedback). A last reason is that constructionists almost universally value more deep understanding of scientific principles than facts or procedural skills even in contexts (e.g., many classrooms) in which memorization of facts and procedural skills is the target to be evaluated (Abelson & diSessa, 1986; Papert & Harel, 1991). Therefore, much of what is learned in constructionist environments does not directly translate to test scores or other established metrics. Constructionist research can be useful and convincing to audiences that do not yet take full advantage of the scientific findings of this community, but it requires careful consideration of framing and evidence to reach them. Educational data mining methods pose the potential to both enhance constructionist research, and to support constructionist researchers in communicating their findings in a fashion that other researchers consider valid. Blikstein (2011, p. 110) made ", "title": "" }, { "docid": "e7b9c3ef571770788cd557f8c4843bcf", "text": "Different efforts have been done to address the problem of information overload on the Internet. Recommender systems aim at directing users through this information space, toward the resources that best meet their needs and interests by extracting knowledge from the previous users’ interactions. 
In this paper, we propose an algorithm to solve the web page recommendation problem. In our algorithm, we use distributed learning automata to learn the behavior of previous users’ and recommend pages to the current user based on learned pattern. Our experiments on real data set show that the proposed algorithm performs better than the other algorithms that we compared to and, at the same time, it is less complex than other algorithms with respect to memory usage and computational cost too.", "title": "" }, { "docid": "a24eddbadb54b6012d243c3fd624d5aa", "text": "A simple algorithm for computing the three-dimensional structure of a scene from a correlated pair of perspective projections is described here, when the spatial relationship between the two projections is unknown. This problem is relevant not only to photographic surveying1 but also to binocular vision2, where the non-visual information available to the observer about the orientation and focal length of each eye is much less accurate than the optical information supplied by the retinal images themselves. The problem also arises in monocular perception of motion3, where the two projections represent views which are separated in time as well as space. As Marr and Poggio4 have noted, the fusing of two images to produce a three-dimensional percept involves two distinct processes: the establishment of a 1:1 correspondence between image points in the two views—the ‘correspondence problem’—and the use of the associated disparities for determining the distances of visible elements in the scene. I shall assume that the correspondence problem has been solved; the problem of reconstructing the scene then reduces to that of finding the relative orientation of the two viewpoints.", "title": "" }, { "docid": "28beae47973ec8dbf1b487daa389f37e", "text": "Although cloud computing has the advantages of cost-saving, efficiency and scalability, it also brings about many security issues. Because almost all software, hardware, and application data are deployed and stored in the cloud platforms, there is often the distrust between users and cloud suppliers. To resolve the problem, this paper proposes a risk management framework on the basis of the previous work. The framework consists of five components: user requirement self-assessment, cloud service providers desktop assessment, risk assessment, third-party agencies review, and continuous monitoring. By means of the framework, the cloud service suppliers can better understand the user's requirements, and the trust between the users and the suppliers is more easily acquired.", "title": "" }, { "docid": "2503784af4149b3d5bd61c458b6df2bf", "text": "In this paper, our proposed method has two contributions to demosaicking: first, different from conventional interpolation methods based on two directions or four directions, the proposed method exploits to a greater degree correlations among neighboring pixels along eight directions to improve the interpolation performance. Second, we propose an efficient post-processing method to reduce interpolation artifacts based on the color difference planes. 
As compared with the latest demosaicking algorithms, experiments show that the proposed algorithm provides superior performance in terms of both objective and subjective image qualities.", "title": "" }, { "docid": "79414d5ba6a202bf52d26a74caff4784", "text": "The Co-Training algorithm uses unlabeled examples in multiple views to bootstrap classifiers in each view, typically in a greedy manner, and operating under assumptions of view-independence and compatibility. In this paper, we propose a Co-Regularization framework where classifiers are learnt in each view through forms of multi-view regularization. We propose algorithms within this framework that are based on optimizing measures of agreement and smoothness over labeled and unlabeled examples. These algorithms naturally extend standard regularization methods like Support Vector Machines (SVM) and Regularized Least squares (RLS) for multi-view semi-supervised learning, and inherit their benefits and applicability to high-dimensional classification problems. An empirical investigation is presented that confirms the promise of this approach.", "title": "" }, { "docid": "325b97e73ea0a50d2413757e95628163", "text": "Due to the recent advancement in procedural generation techniques, games are presenting players with ever growing cities and terrains to explore. However most sandbox-style games situated in cities, do not allow players to wander into buildings. In past research, space planning techniques have already been utilized to generate suitable layouts for both building floor plans and room layouts. We introduce a novel rule-based layout solving approach, especially suited for use in conjunction with procedural generation methods. We show how this solving approach can be used for procedural generation by providing the solver with a userdefined plan. In this plan, users can specify objects to be placed as instances of classes, which in turn contain rules about how instances should be placed. This approach gives us the opportunity to use our generic solver in different procedural generation scenarios. In this paper, we will illustrate mainly with interior generation examples.", "title": "" }, { "docid": "bcb10716690875ec0e397eec4ba3ea2e", "text": "Shamos [1] recently showed that the diameter of a convex n-sided polygon could be computed in O(n) time using a very elegant and simple procedure which resembles rotating a set of calipers around the polygon once. In this paper we show that this simple idea can be generalized in two ways: several sets of calipers can be used simultaneously on one convex polygon, or one set of calipers can be used on several convex polygons simultaneously. We then show that these generalizations allow us to obtain simple O(n) algorithms for solving a variety of problems defined on convex polygons. Such problems include (1) finding the minimum-area rectangle enclosing a polygon, (2) computing the maximum distance between two polygons, (3) performing the vector-sum of two polygons, (4) merging polygons in a convex hull finding algorithms, and (5) finding the critical support lines between two polygons. Finding the critical support lines, in turn, leads to obtaining solutions to several additional problems concerned with visibility, collision, avoidance, range fitting, linear separability, and computing the Grenander distance between sets.", "title": "" }, { "docid": "d094b75f0a1b7f40b39f02bb74397d71", "text": "We propose a theory that relates difficulty of learning in deep architectures to culture and language. 
It is articulated around the following hypotheses: (1) learning in an individual human brain is hampered by the presence of effective local minima; (2) this optimization difficulty is particularly important when it comes to learning higher-level abstractions, i.e., concepts that cover a vast and highly-nonlinear span of sensory configurations; (3) such high-level abstractions are best represented in brains by the composition of many levels of representation, i.e., by deep architectures; (4) a human brain can learn such high-level abstractions if guided by the signals produced by other humans, which act as hints or indirect supervision for these high-level abstractions; and (5), language and the recombination and optimization of mental concepts provide an efficient evolutionary recombination operator, and this gives rise to rapid search in the space of communicable ideas that help humans build up better high-level internal representations of their world. These hypotheses put together imply that human culture and the evolution of ideas have been crucial to counter an optimization difficulty: this optimization difficulty would otherwise make it very difficult for human brains to capture high-level knowledge of the world. The theory is grounded in experimental observations of the difficulties of training deep artificial neural networks. Plausible consequences of this theory for the efficiency of cultural evolution are sketched.", "title": "" }, { "docid": "74adf22dff08c0d914197d71fabe4938", "text": "Modeling contact in multibody simulation is a difficult problem frequently characterized by numerically brittle algorithms, long running times, and inaccurate (with respect to theory) models. We present a comprehensive evaluation of four methods for contact modeling on seven benchmark scenarios in order to quantify the performance of these methods with respect to robustness and speed. We also assess the accuracy of these methods where possible. We conclude the paper with a prescriptive description in order to guide the user of multibody simulation.", "title": "" }, { "docid": "d4269f7b6f2ace3b459668f4d6cb6861", "text": "The ability to rise above the present environment and reflect upon the past, the future, and the minds of others is a fundamentally defining human feature. It has been proposed that these three self-referential processes involve a highly interconnected core set of brain structures known as the default mode network (DMN). The DMN appears to be active when individuals are engaged in stimulus-independent thought. This network is a likely candidate for supporting multiple processes, but this idea has not been tested directly. We used fMRI to examine brain activity during autobiographical remembering, prospection, and theory-of-mind reasoning. Using multivariate analyses, we found a common pattern of neural activation underlying all three processes in the DMN. In addition, autobiographical remembering and prospection engaged midline DMN structures to a greater degree and theory-of-mind reasoning engaged lateral DMN areas. A functional connectivity analysis revealed that activity of a critical node in the DMN, medial prefrontal cortex, was correlated with activity in other regions in the DMN during all three tasks. We conclude that the DMN supports common aspects of these cognitive behaviors involved in simulating an internalized experience.", "title": "" } ]
subset: scidocsrr
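
For orientation, a row like the one above is typically flattened into labeled (query, passage) pairs when training or evaluating a reranker. The sketch below illustrates that under this assumption; the field names follow the schema shown at the top, and the record variable stands for one row of the dataset.

```python
# Sketch: turn one row into labeled (query, passage_text, label) examples.
def record_to_pairs(record):
    pairs = []
    for passage in record["positive_passages"]:
        pairs.append((record["query"], passage["text"], 1))  # relevant
    for passage in record["negative_passages"]:
        pairs.append((record["query"], passage["text"], 0))  # non-relevant
    return pairs
```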

query_id: af79c6fe5d97593381c4521db52843be
query: Privacy-by-design in big data analytics and social mining
[ { "docid": "848e56ec20ccab212567087178e36979", "text": "The technologies of mobile communications pervade our society and wireless networks sense the movement of people, generating large volumes of mobility data, such as mobile phone call records and Global Positioning System (GPS) tracks. In this work, we illustrate the striking analytical power of massive collections of trajectory data in unveiling the complexity of human mobility. We present the results of a large-scale experiment, based on the detailed trajectories of tens of thousands private cars with on-board GPS receivers, tracked during weeks of ordinary mobile activity. We illustrate the knowledge discovery process that, based on these data, addresses some fundamental questions of mobility analysts: what are the frequent patterns of people’s travels? How big attractors and extraordinary events influence mobility? How to predict areas of dense traffic in the near future? How to characterize traffic jams and congestions? We also describe M-Atlas, the querying and mining language and system that makes this analytical process possible, providing the mechanisms to master the complexity of transforming raw GPS tracks into mobility knowledge. M-Atlas is centered onto the concept of a trajectory, and the mobility knowledge discovery process can be specified by M-Atlas queries that realize data transformations, data-driven estimation of the parameters of the mining methods, the quality assessment of the obtained results, the quantitative and visual exploration of the discovered behavioral patterns and models, the composition of mined patterns, models and data with further analyses and mining, and the incremental mining strategies to address scalability.", "title": "" } ]
[ { "docid": "d65e4b79ae580d3b8572c1746357f854", "text": "We present a large-scale object detection system by team PFDet. Our system enables training with huge datasets using 512 GPUs, handles sparsely verified classes, and massive class imbalance. Using our method, we achieved 2nd place in the Google AI Open Images Object Detection Track 2018 on Kaggle. 1", "title": "" }, { "docid": "55631b81d46fc3dcaad8375176cb1c68", "text": "UNLABELLED\nThe need for long-term retention to prevent post-treatment tooth movement is now widely accepted by orthodontists. This may be achieved with removable retainers or permanent bonded retainers. This article aims to provide simple guidance for the dentist on how to maintain and repair both removable and fixed retainers.\n\n\nCLINICAL RELEVANCE\nThe general dental practitioner is more likely to review patients over time and needs to be aware of the need for long-term retention and how to maintain and repair the retainers.", "title": "" }, { "docid": "c8f1d563987245bcb052e4b2c3937ec9", "text": "Scaling the distributed deep learning to a massive GPU cluster level is challenging due to the instability of the large mini-batch training and the overhead of the gradient synchronization. We address the instability of the large mini-batch training with batch size control. We address the overhead of the gradient synchronization with 2D-Torus all-reduce. Specifically, 2D-Torus all-reduce arranges GPUs in a logical 2D grid and performs a series of collective operation in different orientations. These two techniques are implemented with Neural Network Libraries (NNL) 1 . We have successfully trained ImageNet/ResNet-50 in 224 seconds without significant accuracy loss on ABCI cluster.", "title": "" }, { "docid": "ef04d580d7c1ab165335145c13a1701f", "text": "Finding good representations of text documents is crucial in information retrieval and classification systems. Today the most popular document representation is based on a vector of word counts in the document. This representation neither captures dependencies between related words, nor handles synonyms or polysemous words. In this paper, we propose an algorithm to learn text document representations based on semi-supervised autoencoders that are stacked to form a deep network. The model can be trained efficiently on partially labeled corpora, producing very compact representations of documents, while retaining as much class information and joint word statistics as possible. We show that it is advantageous to exploit even a few labeled samples during training.", "title": "" }, { "docid": "eceebb0adab2e962c4da1fed031dd8b9", "text": "Evaluation of Code Coverage is the problem of identifying the parts of a program that did not execute in one or more runs of a program. The traditional approach for code coverage tools is to use static code instrumentation. In this paper we present a new approach to dynamically insert and remove instrumentation code to reduce the runtime overhead of code coverage. We also explore the use of dominator tree information to reduce the number of instrumentation points needed. Our experiments show that our approach reduces runtime overhead by 38-90% compared with purecov, a commercial code coverage tool. Our tool is fully automated and available for download from the Internet.", "title": "" }, { "docid": "4417f505ed279689afa0bde104b3d472", "text": "A single-cavity dual-mode substrate integrated waveguide (SIW) bandpass filter (BPF) for X-band application is presented in this paper. 
Coplanar waveguide (CPW) is used as SIW-microstrip transition in this design. Two slots of the CPW with unequal lengths are used to excite two degenerate modes, i.e. TE102 and TE201. A slot line is etched on the ground plane of the SIW cavity for perturbation. Its size and position are related to the effect of mode-split, namely the coupling between the two degenerate modes. Due to the cancellation of the two modes, a transmission zero in the lower stopband of the BPF is achieved, which improves the selectivity of the proposed BPF. And the location of the transmission zero can be controlled by adjusting the position and the size of the slot line perturbation properly. By introducing source-load coupling, an additional transmission zero is produced in the upper stopband of the BPF, it enhances the stopband performance of the BPF. Influences of the slot line perturbation on the BPF have been studied. A dual-mode BPF for X-band application has been designed, fabricated and measured. A good agreement between simulation and measurement verifies the validity of this design methodology.", "title": "" }, { "docid": "fc63dbad7a3c6769ee1a1df19da6e235", "text": "For global companies that compete in high-velocity industries, business strategies and initiatives change rapidly, and thus the CIO struggles to keep the IT organization aligned with a moving target. In this paper we report on research-in-progress that focuses on how the CIO attempts to meet this challenge. Specifically, we are conducting case studies to closely examine how toy industry CIOs develop their IT organizations’ assets, competencies, and dynamic capabilities in alignment with their companies’ evolving strategy and business priorities (which constitute the “moving target”). We have chosen to study toy industry CIOs, because their companies compete in a global, high-velocity environment, yet this industry has been largely overlooked by the information systems research community. Early findings reveal that four IT application areas are seen as holding strong promise: supply chain management, knowledge management, data mining, and eCommerce, and that toy CIO’s are attempting to both cope with and capitalize on the current financial crisis by more aggressively pursuing offshore outsourcing than heretofore. We conclude with a discussion of next steps as the study proceeds.", "title": "" }, { "docid": "141287de7b743db26c26e8c9d46338f3", "text": "This paper presents a core decision tree algorithm to identify money laundering activities. The clustering algorithm is the combination of BIRCH and K-means. In this method, decision tree of data mining technology is applied to anti-money-laundering filed after research of money laundering features. We select an appropriate identifying strategy to discover typical money laundering patterns and money laundering rules. Consequently, with the core decision tree algorithm, we can identify abnormal transaction data more effectively.", "title": "" }, { "docid": "568317c1f18c476de5029d0a1e91438e", "text": "Plant volatiles (PVs) are lipophilic molecules with high vapor pressure that serve various ecological roles. The synthesis of PVs involves the removal of hydrophilic moieties and oxidation/hydroxylation, reduction, methylation, and acylation reactions. Some PV biosynthetic enzymes produce multiple products from a single substrate or act on multiple substrates. 
Genes for PV biosynthesis evolve by duplication of genes that direct other aspects of plant metabolism; these duplicated genes then diverge from each other over time. Changes in the preferred substrate or resultant product of PV enzymes may occur through minimal changes of critical residues. Convergent evolution is often responsible for the ability of distally related species to synthesize the same volatile.", "title": "" }, { "docid": "6ad344c7049abad62cd53dacc694c651", "text": "Primary syphilis with oropharyngeal manifestations should be kept in mind, though. Lips and tongue ulcers are the most frequently reported lesions and tonsillar ulcers are much more rare. We report the case of a 24-year-old woman with a syphilitic ulcer localized in her left tonsil.", "title": "" }, { "docid": "02eec4b9078af92a774f6e46b36808f7", "text": "Cancer cell migration is a plastic and adaptive process integrating cytoskeletal dynamics, cell-extracellular matrix and cell-cell adhesion, as well as tissue remodeling. In response to molecular and physical microenvironmental cues during metastatic dissemination, cancer cells exploit a versatile repertoire of invasion and dissemination strategies, including collective and single-cell migration programs. This diversity generates molecular and physical heterogeneity of migration mechanisms and metastatic routes, and provides a basis for adaptation in response to microenvironmental and therapeutic challenge. We here summarize how cytoskeletal dynamics, protease systems, cell-matrix and cell-cell adhesion pathways control cancer cell invasion programs, and how reciprocal interaction of tumor cells with the microenvironment contributes to plasticity of invasion and dissemination strategies. We discuss the potential and future implications of predicted \"antimigration\" therapies that target cytoskeletal dynamics, adhesion, and protease systems to interfere with metastatic dissemination, and the options for integrating antimigration therapy into the spectrum of targeted molecular therapies.", "title": "" }, { "docid": "94e2bfa218791199a59037f9ea882487", "text": "As a developing discipline, research results in the field of human computer interaction (HCI) tends to be \"soft\". Many workers in the field have argued that the advancement of HCI lies in \"hardening\" the field with quantitative and robust models. In reality, few theoretical, quantitative tools are available in user interface research and development. A rare exception to this is Fitts' law. Extending information theory to human perceptual-motor system, Paul Fitts (1954) found a logarithmic relationship that models speed accuracy tradeoffs in aimed movements. A great number of studies have verified and / or applied Fitts' law to HCI problems, such as pointing performance on a screen, making Fitts' law one of the most intensively studied topic in the HCI literature.", "title": "" }, { "docid": "7dd3c935b6a5a38284b36ddc1dc1d368", "text": "(2012): Mindfulness and self-compassion as predictors of psychological wellbeing in long-term meditators and matched nonmeditators, The Journal of Positive Psychology: This article may be used for research, teaching, and private study purposes. Any substantial or systematic reproduction, redistribution, reselling, loan, sub-licensing, systematic supply, or distribution in any form to anyone is expressly forbidden. The publisher does not give any warranty express or implied or make any representation that the contents will be complete or accurate or up to date. 
The accuracy of any instructions, formulae, and drug doses should be independently verified with primary sources. The publisher shall not be liable for any loss, actions, claims, proceedings, demand, or costs or damages whatsoever or howsoever caused arising directly or indirectly in connection with or arising out of the use of this material.", "title": "" }, { "docid": "153daf89486b5df0d43b408fdc2bd428", "text": "In this paper, a phased array antenna is designed at millimeter-wave frequency bands for future 5G based smartphone applications. The proposed antenna is a novel open slot-PIFA antenna made on a low cost FR4 board. The antenna array covers a frequency range of 26–32 GHz with a bandwidth of 6 GHz. The antenna exhibits a very good radiation pattern when integrated with the mobile phone chassis. The 8 — element antenna array exhibits a maximum gain around 13 dBi. The pattern can be steered by varying the phase shift at each antenna element.", "title": "" }, { "docid": "47b7ebc460ce1273941bdef5bc754d4a", "text": "When people predict their future behavior, they tend to place too much weight on their current intentions, which produces an optimistic bias for behaviors associated with currently strong intentions. More realistic self-predictions require greater sensitivity to situational barriers, such as obstacles or competing demands, that may interfere with the translation of current intentions into future behavior. We consider three reasons why people may not adjust sufficiently for such barriers. First, self-predictions may focus exclusively on current intentions, ignoring potential barriers altogether. We test this possibility, in three studies, with manipulations that draw greater attention to barriers. Second, barriers may be discounted in the self-prediction process. We test this possibility by comparing prospective and retrospective ratings of the impact of barriers on the target behavior. Neither possibility was supported in these tests, or in a further test examining whether an optimally weighted statistical model could improve on the accuracy of self-predictions by placing greater weight on anticipated situational barriers. Instead, the evidence supports a third possibility: Even when they acknowledge that situational factors can affect the likelihood of carrying out an intended behavior, people do not adequately moderate the weight placed on their current intentions when predicting their future behavior.", "title": "" }, { "docid": "526854ab5bf3c01f9e88dee8aeaa8dda", "text": "Key establishment in sensor networks is a challenging problem because asymmetric key cryptosystems are unsuitable for use in resource constrained sensor nodes, and also because the nodes could be physically compromised by an adversary. We present three new mechanisms for key establishment using the framework of pre-distributing a random set of keys to each node. First, in the q-composite keys scheme, we trade off the unlikeliness of a large-scale network attack in order to significantly strengthen random key predistribution’s strength against smaller-scale attacks. Second, in the multipath-reinforcement scheme, we show how to strengthen the security between any two nodes by leveraging the security of other links. 
Finally, we present the random-pairwise keys scheme, which perfectly preserves the secrecy of the rest of the network when any node is captured, and also enables node-to-node authentication and quorum-based revocation.", "title": "" }, { "docid": "a53904f277c06e32bd6ad148399443c6", "text": "Big data is flowing into every area of our life, professional and personal. Big data is defined as datasets whose size is beyond the ability of typical software tools to capture, store, manage and analyze, due to the time and memory complexity. Velocity is one of the main properties of big data. In this demo, we present SAMOA (Scalable Advanced Massive Online Analysis), an open-source platform for mining big data streams. It provides a collection of distributed streaming algorithms for the most common data mining and machine learning tasks such as classification, clustering, and regression, as well as programming abstractions to develop new algorithms. It features a pluggable architecture that allows it to run on several distributed stream processing engines such as Storm, S4, and Samza. SAMOA is written in Java and is available at http://samoa-project.net under the Apache Software License version 2.0.", "title": "" }, { "docid": "8d7e63dcb792a2b61dd708475117dac7", "text": "Nanotechnology has played a crucial role in the development of biosensors over the past decade. The development, testing, optimization, and validation of new biosensors has become a highly interdisciplinary effort involving experts in chemistry, biology, physics, engineering, and medicine. The sensitivity, the specificity and the reproducibility of biosensors have improved tremendously as a result of incorporating nanomaterials in their design. In general, nanomaterials-based electrochemical immunosensors amplify the sensitivity by facilitating greater loading of the larger sensing surface with biorecognition molecules as well as improving the electrochemical properties of the transducer. The most common types of nanomaterials and their properties will be described. In addition, the utilization of nanomaterials in immunosensors for biomarker detection will be discussed since these biosensors have enormous potential for a myriad of clinical uses. Electrochemical immunosensors provide a specific and simple analytical alternative as evidenced by their brief analysis times, inexpensive instrumentation, lower assay cost as well as good portability and amenability to miniaturization. The role nanomaterials play in biosensors, their ability to improve detection capabilities in low concentration analytes yielding clinically useful data and their impact on other biosensor performance properties will be discussed. Finally, the most common types of electroanalytical detection methods will be briefly touched upon.", "title": "" }, { "docid": "b5a9bbf52279ce7826434b7e5d3ccbb6", "text": "We present our 11-layers deep, double-pathway, 3D Convolutional Neural Network, developed for the segmentation of brain lesions. The developed system segments pathology voxel-wise after processing a corresponding multi-modal 3D patch at multiple scales. We demonstrate that it is possible to train such a deep and wide 3D CNN on a small dataset of 28 cases. Our network yields promising results on the task of segmenting ischemic stroke lesions, accomplishing a mean Dice of 64% (66% after postprocessing) on the ISLES 2015 training dataset, ranking among the top entries. 
Regardless its size, our network is capable of processing a 3D brain volume in 3 minutes, making it applicable to the automated analysis of larger study cohorts.", "title": "" }, { "docid": "21002b649f123b61b99f9167952b5888", "text": "Transposable elements and retroviruses are found in most genomes, can be pathogenic and are widely used as gene-delivery and functional genomics tools. Exploring whether these genetic elements target specific genomic sites for integration and how this preference is achieved is crucial to our understanding of genome evolution, somatic genome plasticity in cancer and ageing, host–parasite interactions and genome engineering applications. High-throughput profiling of integration sites by next-generation sequencing, combined with large-scale genomic data mining and cellular or biochemical approaches, has revealed that the insertions are usually non-random. The DNA sequence, chromatin and nuclear context, and cellular proteins cooperate in guiding integration in eukaryotic genomes, leading to a remarkable diversity of insertion site distribution and evolutionary strategies.", "title": "" } ]
subset: scidocsrr

query_id: 4e257d52c0ac01fc06ec10685b48288b
query: A New Heuristic Reduct Algorithm Base on Rough Sets Theory
[ { "docid": "8c9503084e7f1a286ac50af9ef123132", "text": "Rough set theory, introduced by Zdzislaw Pawlak in the early 1980s [11, 12], is a new mathematical tool to deal with vagueness and uncertainty. This approach seems to be of fundamental importance to artificial intelligence (AI) and cognitive sciences, especially in the areas of machine learning, knowledge acquisition, decision analysis, knowledge discovery from databases, expert systems, decision support systems, inductive reasoning, and pattern recognition.", "title": "" } ]
[ { "docid": "866615dd2e2fec79dbb7a31de3853493", "text": "Appointment scheduling systems are used by primary and specialty care clinics to manage access to service providers, as well as by hospitals to schedule elective surgeries. Many factors affect the performance of appointment systems including arrival and service time variability, patient and provider preferences, available information technology and the experience level of the scheduling staff. In addition, a critical bottleneck lies in the application of Industrial Engineering and Operations Research (IE/OR) techniques. The most common types of health care delivery systems are described in this article with particular attention on the factors that make appointment scheduling challenging. For each environment relevant decisions ranging from a set of rules that guide schedulers to real-time responses to deviations from plans are described. A road map of the state of the art in the design of appointment management systems is provided and future opportunities for novel applications of IE/OR models are identified.", "title": "" }, { "docid": "59776cc8a1ab1d1ac86034c98760b7cf", "text": "The problems encountered by students in first year computer programming units are a common concern in many universities including Victoria University. A fundamental component of a computer science curriculum, computer programming is a mandatory unit in a computing course. It is also one of the most feared and hated units by many novice computing students who, having failed or performed poorly in a programming unit, often drop out from a course. This article discusses some of the difficulties experienced by first year programming students, and reviews some of the initiatives undertaken to counter the problems. The article also reports on the first stage of a current research project at Victoria University that aims to develop a balanced approach to teaching first year programming units; its goal is to ‘befriend’ computer programming to help promote success among new programming students.", "title": "" }, { "docid": "96344ccc2aac1a7e7fbab96c1355fa10", "text": "A highly sensitive field-effect sensor immune to environmental potential fluctuation is proposed. The sensor circuit consists of two sensors each with a charge sensing field effect transistor (FET) and an extended sensing gate (SG). By enlarging the sensing gate of an extended gate ISFET, a remarkable sensitivity of 130mV/pH is achieved, exceeding the conventional Nernst limit of 59mV/pH. The proposed differential sensing circuit consists of a pair of matching n-channel and p-channel ion sensitive sensors connected in parallel and biased at a matched transconductance bias point. Potential fluctuations in the electrolyte appear as common mode signal to the differential pair and are cancelled by the matched transistors. This novel differential measurement technique eliminates the need for a true reference electrode such as the bulky Ag/AgCl reference electrode and enables the use of the sensor for autonomous and implantable applications.", "title": "" }, { "docid": "c9ca8d6f38c44bde6983e401a967c399", "text": "The validation and verification of cognitive skills of highly automated vehicles is an important milestone for legal and public acceptance of advanced driver assistance systems (ADAS). In this paper, we present an innovative data-driven method in order to create critical traffic situations from recorded sensor data. 
This concept is completely contrary to previous approaches using parametrizable simulation models. We demonstrate our concept at the example of parametrizing lane change maneuvers: Firstly, the road layout is automatically derived from observed vehicle trajectories. The road layout is then used in order to detect vehicle maneuvers, which is shown exemplarily on lane change maneuvers. Then, the maneuvers are parametrized using data operators in order to create critical traffic scenarios. Finally, we demonstrate our concept using LIDAR-captured traffic situations on urban and highway scenes, creating critical scenarios out of safely recorded data.", "title": "" }, { "docid": "6e02cdb0ade3479e0df03c30d9d69fa3", "text": "Reinforcement learning is considered as a promising direction for driving policy learning. However, training autonomous driving vehicle with reinforcement learning in real environment involves non-affordable trial-and-error. It is more desirable to first train in a virtual environment and then transfer to the real environment. In this paper, we propose a novel realistic translation network to make model trained in virtual environment be workable in real world. The proposed network can convert non-realistic virtual image input into a realistic one with similar scene structure. Given realistic frames as input, driving policy trained by reinforcement learning can nicely adapt to real world driving. Experiments show that our proposed virtual to real (VR) reinforcement learning (RL) works pretty well. To our knowledge, this is the first successful case of driving policy trained by reinforcement learning that can adapt to real world driving data.", "title": "" }, { "docid": "3c3b97046e00df5863d817cb222a5017", "text": "Having favorite corporate image and powerful brand equity build a strategic position in market for corporations. This position plays vital role of sustainable advantage. Therefore, we focus on the impacts of marketing strategies such as channel performance, value-oriented price, promotion, and after-sales service on brand equity directly and by corporate image indirectly. The explored results of Chi-square test analysis show that all the marketing-mix efforts positively affect the overall value of brand equity, which is a proxy of market performance, via the three dimensions of brand equity. Corporate image mediates the effect of the marketing-mix efforts on the three dimensions of brand equity.", "title": "" }, { "docid": "d85f7260f6d6a83504b3679fc3b562f1", "text": "New antimicrobials with novel mechanisms need to be developed to combat antimicrobial-resistant pathogenic bacteria. The current authors recently reported discovery of a new antibiotic named \"Lysocin E\". Lysocin E was identified using a silkworm model of bacterial infection. The current review discusses the advantages of using a silkworm model of bacterial infection to identify and develop therapeutically efficacious antimicrobials. This review also discusses the discovery of lysocin E and its novel mechanism of action.", "title": "" }, { "docid": "bb8fe4145e1ea2337f5cc1a18a9a348f", "text": "Automatic License Plate Recognition (ALPR) has been a frequent topic of research due to many practical applications. However, many of the current solutions are still not robust in real-world situations, commonly depending on many constraints. This paper presents a robust and efficient ALPR system based on the state-of-the-art YOLO object detector. 
The Convolutional Neural Networks (CNNs) are trained and finetuned for each ALPR stage so that they are robust under different conditions (e.g., variations in camera, lighting, and background). Specially for character segmentation and recognition, we design a two-stage approach employing simple data augmentation tricks such as inverted License Plates (LPs) and flipped characters. The resulting ALPR approach achieved impressive results in two datasets. First, in the SSIG dataset, composed of 2,000 frames from 101 vehicle videos, our system achieved a recognition rate of 93.53% and 47 Frames Per Second (FPS), performing better than both Sighthound and OpenALPR commercial systems (89.80% and 93.03%, respectively) and considerably outperforming previous results (81.80%). Second, targeting a more realistic scenario, we introduce a larger public dataset1 dataset, designed to ALPR. This dataset contains 150 videos and 4,500 frames captured when both camera and vehicles are moving and also contains different types of vehicles (cars, motorcycles, buses and trucks). In our proposed dataset, the trial versions of commercial systems achieved recognition rates below 70%. On the other hand, our system performed better, with recognition rate of 78.33% and 35 FPS.The UFPR-ALPR dataset is publicly available to the research community at https://web.inf.ufpr.br/vri/databases/ufpr-alpr/ subject to privacy restrictions.", "title": "" }, { "docid": "3e5cd0282b0cb36413e3eeb6c5418305", "text": "We use the paradigm of diffusing computation, introduced by Dijkstra and Scholten, to solve a class of graph problems. We present a detailed solution to the problem of computing shortest paths from a single vertex to all other vertices, in the presence of negative cycles.", "title": "" }, { "docid": "d2e3b893e257d04da0cccbd4b1def9f7", "text": "Augmented reality (AR) is currently considered as having potential for pedagogical applications. However, in science education, research regarding AR-aided learning is in its infancy. To understand how AR could help science learning, this review paper firstly has identified two major approaches of utilizing AR technology in science education, which are named as image-based AR and locationbased AR. These approaches may result in different affordances for science learning. It is then found that students’ spatial ability, practical skills, and conceptual understanding are often afforded by image-based AR and location-based AR usually supports inquiry-based scientific activities. After examining what has been done in science learning with AR supports, several suggestions for future research are proposed. For example, more research is required to explore learning experience (e.g., motivation or cognitive load) and learner characteristics (e.g., spatial ability or perceived presence) involved in AR. Mixed methods of investigating learning process (e.g., a content analysis and a sequential analysis) and in-depth examination of user experience beyond usability (e.g., affective variables of esthetic pleasure or emotional fulfillment) should be considered. Combining image-based and location-based AR technology may bring new possibility for supporting science learning. 
Theories including mental models, spatial cognition, situated cognition, and social constructivist learning are suggested for the profitable uses of future AR research in science education.", "title": "" }, { "docid": "569fcd0efaba3c142f8282369af9fff1", "text": "Since fouling-release coating systems do not prevent settlement, various methods to quantify the tenacity of adhesion of fouling organisms on these systems have been offered. One such method is the turbulent channel flow apparatus. The question remains how the results from laboratory scale tests relate to the self-cleaning of a ship coated with a fouling-release surface. This paper relates the detachment strength of low form fouling determined in the laboratory using a turbulent channel flow to the conditions necessary for detachment of these organisms in a turbulent boundary layer at ship scale. A power-law formula, the ITTC-57 formula, and a computational fluid dynamics (CFD) model are used to predict the skin-friction at ship scale. The results from all three methods show good agreement and are illustrated using turbulent channel flow data for sporelings of the green macrofouling alga Enteromorpha growing on a fouling-release coating.", "title": "" }, { "docid": "9c96d6e2e85df237eb808282cae53e82", "text": "Intel SGX provides confidentiality and integrity of a program running within the confines of an enclave, and is expected to enable valuable security applications such as private information retrieval. This paper is concerned with the security aspects of SGX in accessing a key system resource, files. Through concrete attack scenarios, we show that all existing SGX filesystems are vulnerable to either system call snooping, page fault, or cache based side-channel attacks. To address this security limitations in current SGX filesystems, we present OBLIVIATE, a data oblivious filesystem for Intel SGX. The key idea behind OBLIVIATE is in adapting the ORAM protocol to read and write data from a file within an SGX enclave. OBLIVIATE redesigns the conceptual components of ORAM for SGX environments, and it seamlessly supports an SGX program without requiring any changes in the application layer. OBLIVIATE also employs SGX-specific defenses and optimizations in order to ensure complete security with acceptable overhead. The evaluation of the prototype of OBLIVIATE demonstrated its practical effectiveness in running popular server applications such as SQLite and Lighttpd, while also achieving a throughput improvement of 2×8× over a baseline ORAM-based solution, and less than 2× overhead over an in-memory SGX filesystem.", "title": "" }, { "docid": "a6f5c789c8b4c9f6066675ed11292745", "text": "We propose a shared task based on recent advances in learning to generate natural language from meaning representations using semantically unaligned data. The aNALoGuE challenge aims to evaluate and compare recent corpus-based methods with respect to their scalability to data size and target complexity, as well as to assess predictive quality of automatic evaluation metrics.", "title": "" }, { "docid": "4575b5c93aa86c150944597638402439", "text": "Multilayer networks are networks where edges exist in multiple layers that encode different types or sources of interactions. As one of the most important problems in network science, discovering the underlying community structure in multilayer networks has received an increasing amount of attention in recent years. 
One of the challenging issues is to develop effective community structure quality functions for characterizing the structural or functional properties of the expected community structure. Although several quality functions have been developed for evaluating the detected community structure, little has been explored about how to explicitly bring our knowledge of the desired community structure into such quality functions, in particular for the multilayer networks. To address this issue, we propose the multilayer edge mixture model (MEMM), which is positioned as a general framework that enables us to design a quality function that reflects our knowledge about the desired community structure. The proposed model is based on a mixture of the edges, and the weights reflect their role in the detection process. By decomposing a community structure quality function into the form of MEMM, it becomes clear which type of community structure will be discovered by such quality function. Similarly, after such decomposition we can also modify the weights of the edges to find the desired community structure. In this paper, we apply the quality functions modified with the knowledge of MEMM to different multilayer benchmark networks as well as real-world multilayer networks and the detection results confirm the feasibility of MEMM.", "title": "" }, { "docid": "fdc580124be4f1398976d4161791bf8a", "text": "Child abuse is a problem that affects over six million children in the United States each year. Child neglect accounts for 78 % of those cases. Despite this, the issue of child neglect is still not well understood, partially because child neglect does not have a consistent, universally accepted definition. Some researchers consider child neglect and child abuse to be one in the same, while other researchers consider them to be conceptually different. Factors that make child neglect difficult to define include: (1) Cultural differences; motives must be taken into account because parents may believe they are acting in the child’s best interests based on cultural beliefs (2) the fact that the effect of child abuse is not always immediately visible; the effects of emotional neglect specifically may not be apparent until later in the child’s development, and (3) the large spectrum of actions that fall under the category of child abuse. Some of the risk factors for increased child neglect and maltreatment have been identified. These risk factors include socioeconomic status, education level, family composition, and the presence of dysfunction family characteristics. Studies have found that children from poorer families and children of less educated parents are more likely to sustain fatal unintentional injuries than children of wealthier, better educated parents. Studies have also found that children living with adults unrelated to them are at increased risk for unintentional injuries and maltreatment. Dysfunctional family characteristics may even be more indicative of child neglect. Parental alcohol or drug abuse, parental personal history of neglect, and parental stress greatly increase the odds of neglect. Parental depression doubles the odds of child neglect. However, more research needs to be done to better understand these risk factors and to identify others. Having a clearer understanding of the risk factors could lead to prevention and treatment, as it would allow for health care personnel to screen for high-risk children and intervene before it is too late. 
Screening could also be done in the schools and organized after school activities. Parenting classes have been shown to be an effective intervention strategy by decreasing parental stress and potential for abuse, but there has been limited research done on this approach. Parenting classes can be part of the corrective actions for parents found to be neglectful or abusive, but parenting classes may also be useful as a preventative measure, being taught in schools or readily available in higher-risk communities. More research has to be done to better define child abuse and neglect so that it can be effectively addressed and treated.", "title": "" }, { "docid": "d35c176cfe5c8296862513c26f0fdffa", "text": "Vertical scar mammaplasty, first described by Lötsch in 1923 and Dartigues in 1924 for mastopexy, was extended later to breast reduction by Arié in 1957. It was otherwise lost to surgical history until Lassus began experimenting with it in 1964. It then was extended by Marchac and de Olarte, finally to be popularized by Lejour. Despite initial skepticism, vertical reduction mammaplasty is becoming increasingly popular in recent years because it best incorporates the two concepts of minimal scarring and a satisfactory breast shape. At the moment, vertical scar techniques seem to be more popular in Europe than in the United States. A recent survey, however, has demonstrated that even in the United States, it has surpassed the rate of inverted T-scar breast reductions. The technique, however, is not without major drawbacks, such as long vertical scars extending below the inframammary crease and excessive skin gathering and “dog-ear” at the lower end of the scar that may require long periods for resolution, causing extreme distress to patients and surgeons alike. Efforts are being made to minimize these complications and make the procedure more user-friendly either by modifying it or by replacing it with an alternative that retains the same advantages. Although conceptually opposed to the standard vertical design, the circumvertical modification probably is the most important maneuver for shortening vertical scars. Residual dog-ears often are excised, resulting in a short transverse scar (inverted T- or L-scar). The authors describe limited subdermal undermining of the skin at the inferior edge of the vertical incisions with liposculpture of the inframammary crease, avoiding scar extension altogether. Simplified circumvertical drawing that uses the familiar Wise pattern also is described.", "title": "" }, { "docid": "2a68d57f8d59205122dd11461accecab", "text": "A resistive methanol sensor based on ZnO hexagonal nanorods having average diameter (60–70 nm) and average length of <formula formulatype=\"inline\"><tex Notation=\"TeX\">${\\sim}{\\rm 500}~{\\rm nm}$</tex></formula>, is reported in this paper. A low temperature chemical bath deposition technique is employed to deposit vertically aligned ZnO hexagonal nanorods using zinc acetate dihydrate and hexamethylenetetramine (HMT) precursors at 100<formula formulatype=\"inline\"><tex Notation=\"TeX\">$^{\\circ}{\\rm C}$</tex></formula> on a <formula formulatype=\"inline\"><tex Notation=\"TeX\">${\\rm SiO}_{2}$</tex></formula> substrate having Sol-Gel grown ZnO seed layer. 
After structural (XRD, FESEM) and electrical (Hall effect) characterizations, four types of sensors structures incorporating the effect of catalytic metal electrode (Pd-Ag) and Pd nanoparticle sensitization, are fabricated and tested for sensing methanol vapor in the temperature range of 27<formula formulatype=\"inline\"><tex Notation=\"TeX\">$^{\\circ}{\\rm C}$</tex> </formula>–300<formula formulatype=\"inline\"><tex Notation=\"TeX\">$^{\\circ}{\\rm C}$</tex></formula>. The as deposited ZnO nanorods with Pd-Ag catalytic contact offered appreciably high dynamic range (190–3040 ppm) at moderately lower temperature (200<formula formulatype=\"inline\"><tex Notation=\"TeX\">$^{\\circ}{\\rm C}$</tex></formula>) compared to the sensors with noncatalytic electrode (Au). Surface modification of nanorods by Pd nanoparticles offered faster response and recovery with increased response magnitude for both type of electrodes, but at the cost of lower dynamic range (190–950 ppm). The possible sensing mechanism has also been discussed briefly.", "title": "" }, { "docid": "9b4ffbbcd97e94524d2598cd862a400a", "text": "Head pose monitoring is an important task for driver assistance systems, since it is a key indicator for human attention and behavior. However, current head pose datasets either lack complexity or do not adequately represent the conditions that occur while driving. Therefore, we introduce DriveAHead, a novel dataset designed to develop and evaluate head pose monitoring algorithms in real driving conditions. We provide frame-by-frame head pose labels obtained from a motion-capture system, as well as annotations about occlusions of the driver's face. To the best of our knowledge, DriveAHead is the largest publicly available driver head pose dataset, and also the only one that provides 2D and 3D data aligned at the pixel level using the Kinect v2. Existing performance metrics are based on the mean error without any consideration of the bias towards one position or another. Here, we suggest a new performance metric, named Balanced Mean Angular Error, that addresses the bias towards the forward looking position existing in driving datasets. Finally, we present the Head Pose Network, a deep learning model that achieves better performance than current state-of-the-art algorithms, and we analyze its performance when using our dataset.", "title": "" }, { "docid": "2ecb4d841ef57a3acdf05cbb727aecbf", "text": "Boosting is a general method for improving the accuracy of any given learning algorithm. This short overview paper introduces the boosting algorithm AdaBoost, and explains the underlying theory of boosting, including an explanation of why boosting often does not suffer from overfitting as well as boosting’s relationship to support-vector machines. Some examples of recent applications of boosting are also described.", "title": "" }, { "docid": "692d57348ed0d23b8398303864219691", "text": "This paper considers the problem of subspace clustering under noise. Specifically, we study the behavior of Sparse Subspace Clustering (SSC) when either adversarial or random noise is added to the unlabelled input data points, which are assumed to lie in a union of low-dimensional subspaces. We show that a modified version of SSC is provably effective in correctly identifying the underlying subspaces, even with noisy data. This extends theoretical guarantee of this algorithm to the practical setting and provides justification to the success of SSC in a class of real applications.", "title": "" } ]
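The boosting overview above names AdaBoost but gives no algorithmic detail; the following sketch of the standard discrete AdaBoost loop (re-weighting examples and weighting weak learners) is added purely for illustration. The synthetic data, the decision-stump weak learner, and the number of rounds are assumptions of the example, not something stated in the passage.

```python
# Minimal discrete AdaBoost sketch (binary labels in {-1, +1}) with decision stumps.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost_fit(X, y, n_rounds=20):
    n = len(y)
    w = np.full(n, 1.0 / n)                     # example weights
    stumps, alphas = [], []
    for _ in range(n_rounds):
        stump = DecisionTreeClassifier(max_depth=1)
        stump.fit(X, y, sample_weight=w)
        pred = stump.predict(X)
        err = np.sum(w * (pred != y)) / np.sum(w)
        err = np.clip(err, 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)   # weight of this weak learner
        w = w * np.exp(-alpha * y * pred)       # re-weight training examples
        w /= w.sum()
        stumps.append(stump)
        alphas.append(alpha)
    return stumps, alphas

def adaboost_predict(stumps, alphas, X):
    agg = sum(a * s.predict(X) for s, a in zip(stumps, alphas))
    return np.sign(agg)

# Toy usage with synthetic data (purely illustrative).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)
stumps, alphas = adaboost_fit(X, y)
print("train accuracy:", np.mean(adaboost_predict(stumps, alphas, X) == y))
```

The exponential re-weighting step is what makes later rounds concentrate on the examples the current ensemble still misclassifies, which is the mechanism the overview describes informally.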
scidocsrr
ad045d00734ddead7fa8d9fae76965f2
MAC-RANSAC : a robust algorithm for the recognition of multiple objects
[ { "docid": "bf1bd9bdbe8e4a93e814ea9dc91e6eb3", "text": "A new robust matching method is proposed. The progressive sample consensus (PROSAC) algorithm exploits the linear ordering defined on the set of correspondences by a similarity function used in establishing tentative correspondences. Unlike RANSAC, which treats all correspondences equally and draws random samples uniformly from the full set, PROSAC samples are drawn from progressively larger sets of top-ranked correspondences. Under the mild assumption that the similarity measure predicts correctness of a match better than random guessing, we show that PROSAC achieves large computational savings. Experiments demonstrate it is often significantly faster (up to more than hundred times) than RANSAC. For the derived size of the sampled set of correspondences as a function of the number of samples already drawn, PROSAC converges towards RANSAC in the worst case. The power of the method is demonstrated on wide-baseline matching problems.", "title": "" } ]
[ { "docid": "b852aad0d205aff17cd8a9b7c21ed99f", "text": "In present investigation, two glucose based smart tumor-targeted drug delivery systems coupled with enzyme-sensitive release strategy are introduced. Magnetic nanoparticles (Fe3O4) were grafted with carboxymethyl chitosan (CS) and β-cyclodextrin (β-CD) as carriers. Prodigiosin (PG) was used as the model anti-tumor drug, targeting aggressive tumor cells. The morphology, properties and composition and grafting process were characterized by transmission electron microscope (TEM), Fourier transform infrared spectroscopy (FT-IR), vibration sample magnetometer (VSM), X-ray diffraction (XRD) analysis. The results revealed that the core crystal size of the nanoparticles synthesized were 14.2±2.1 and 9.8±1.4nm for β-CD and CS-MNPs respectively when measured using TEM; while dynamic light scattering (DLS) gave diameters of 121.1 and 38.2nm. The saturation magnetization (Ms) of bare magnetic nanoparticles is 50.10emucm-3, while modification with β-CD and CS gave values of 37.48 and 65.01emucm-3, respectively. The anticancer compound, prodigiosin (PG) was loaded into the NPs with an encapsulation efficiency of approximately 81% for the β-CD-MNPs, and 92% for the CS-MNPs. This translates to a drug loading capacity of 56.17 and 59.17mg/100mg MNPs, respectively. Measurement of in vitro release of prodigiosin from the loaded nanocarriers in the presence of the hydrolytic enzymes, alpha-amylase and chitosanase showed that 58.1 and 44.6% of the drug was released after one-hour of incubation. Cytotoxicity studies of PG-loaded nanocarriers on two cancer cell lines, MCF-7 and HepG2, and on a non-cancerous control, NIH/3T3 cells, revealed that the drug loaded nanoparticles had greater efficacy on the cancer cell lines. The selective index (SI) for free PG on MCF-7 and HepG2 cells was 1.54 and 4.42 respectively. This parameter was reduced for PG-loaded β-CD-MNPs to 1.27 and 1.85, while the SI for CS-MNPs improved considerably to 7.03 on MCF-7 cells. Complementary studies by fluorescence and confocal microscopy and flow cytometry confirm specific targeting of the nanocarriers to the cancer cells. The results suggest that CS-MNPs have higher potency and are better able to target the prodigiosin toxicity effect on cancerous cells than β-CD-MNPs.", "title": "" }, { "docid": "c1eb17fdb22023ffac2b04bb552ee18a", "text": "Intrusion detection has emerged as an important technique for network security. Due to the complex and dynamic properties of intrusion behaviors, machine learning and data mining methods have been widely employed to optimize the performance of intrusion detection systems (IDSs). However, the results of existing work still need to be improved both in accuracy and in computational efficiency. In this paper, a novel reinforcement learning approach is presented for host-based intrusion detection using sequences of system calls. A Markov reward process model is introduced for modeling the behaviors of system call sequences and the intrusion detection problem is converted to predicting the value functions of the Markov reward process. A temporal different learning algorithm using linear basis functions is used for value function prediction so that abnormal temporal behaviors of host processes can be predicted accurately and efficiently. The proposed method has advantages over previous algorithms in that the temporal property of system call data is well captured in a natural and simple way and better intrusion detection performance can be achieved. 
Experimental results on the MIT system call data illustrate that compared with previous work, the proposed method has better detection accuracy with low training costs.", "title": "" }, { "docid": "371d28cf9be2e7fa95ac26075b1e96ba", "text": "The noun compound – a sequence of nouns which function as a single noun – is very common in English texts. No language processing system should ignore expressions like steel soup pot cover if it wants to be serious about such high-end applications of computational linguistics as question answering, information extraction, text summarization, machine translation – the list goes on. Processing noun compounds, however, is far from trouble-free. For one thing, they can be bracketed in various ways: is it steel soup, steel pot or steel cover? Then there are relations inside a compound, annoyingly not signalled by any words: does pot contain soup or is it for cooking soup? These and many other research challenges are the subject of this special issue. The volume opens with Preslav Nakov’s survey paper on the interpretation of noun compounds. It serves as en excellent, thorough introduction to the whole business of studying noun compounds computationally. Both theoretical and computational linguistics consider various formal definitions of the compound, its creation, its types and properties, its applications, its approximation by paraphrases. The discussion is also illustrated by a range of languages other than English. Next, the problem of bracketing is given a few typical solutions. There follows a detailed look at noun compound semantics, including coarse-grained and very fine-grained inventories of relations among nouns in a compound. Finally, a “capstone” project is presented: textual entailment, a tool which can be immensely helpful in many high-end applications. Diarmuid Ó Séaghdha and Ann Copestake tell us how to interpret compound nouns by classifying their relations with kernel methods. The kernels implement intuitive notions of lexical and relational similarity which are computed using distributional information extracted from large text corpora. The classification is tested at three different levels of specificity. Impressively, in all cases a combination of both lexical and relational information improves upon either source taken alone.", "title": "" }, { "docid": "8fd38494bb2e4ffcefc203c88d9605e7", "text": "The aim of the present study is to provide a detailed macroscopic mapping of the palatal and tuberal blood supply applying anatomical methods and studying specific anastomoses to bridge the gap between basic structural and empirical clinical knowledge. Ten cadavers (three dentate, seven edentulous) have been prepared for this study in the Department of Anatomy, Semmelweis University, Budapest, Hungary, and in the Department of Anatomy of the Medical University of Graz. All cadavers were fixed with Thiel’s solution. For the macroscopic analysis of the blood vessels supplying the palatal mucosa, corrosion casting in four cadavers and latex milk injection in other six cadavers were performed. We recorded major- and secondary branches of the greater palatine artery (GPA) and its relation to the palatine spine, different anastomoses with the nasopalatine artery (NPA), and lesser palatal artery (LPA) as well as with contralateral branches of the GPA. Penetrating intraosseous branches at the premolar-canine area were also detected. In edentulous patients, the GPA developed a curvy pathway in the premolar area. 
The blood supply around the maxillary tuberosity was also presented. The combination of different staining methods has shed light to findings with relevance to palatal blood supply, offering a powerful tool for the design and execution of surgical interventions involving the hard palate. The present study provides clinicians with a good basis to understand the anatomical background of palatal and tuberal blood supply. This might enable clinicians to design optimized incision- and flap designs. As a result, the risk of intraoperative bleeding and postoperative wound healing complications related to impaired blood supply can be minimized.", "title": "" }, { "docid": "b2de2955568a37301828708e15b5ed15", "text": "ISPRS and CNES announced the HRS (High Resolution Stereo) Scientific Assessment Program during the ISPRS Commission I Symposium in Denver in November 2002. 9 test areas throughout the world have been selected for this program. One of the test sites is located in Bavaria, Germany, for which the PI comes from DLR. For a second region, which is situated in Catalonia – Barcelona and surroundings – DLR has the role of a Co-Investigator. The goal is to derive a DEM from the along-track stereo data of the SPOT HRS sensor and to assess the accuracy by comparison with ground control points and DEM data of superior quality. For the derivation of the DEM, the stereo processing software, developed at DLR for the MOMS-2P three line stereo camera is used. As a first step, the interior and exterior orientation of the camera, delivered as ancillary data (DORIS and ULS) are extracted. According to CNES these data should lead to an absolute orientation accuracy of about 30 m. No bundle block adjustment with ground control is used in the first step of the photogrammetric evaluation. A dense image matching, using very dense positions as kernel centers provides the parallaxes. The quality of the matching is controlled by forward and backward matching of the two stereo partners using the local least squares matching method. Forward intersection leads to points in object space which are then interpolated to a DEM of the region in a regular grid. Additionally, orthoimages are generated from the images of the two looking directions. The orthoimage and DEM accuracy is determined by using the ground control points and the available DEM data of superior accuracy (DEM derived from laser data and/or classical airborne photogrammetry). DEM filtering methods are applied and a comparison to SRTM-DEMs is performed. It is shown that a fusion of the DEMs derived from optical and radar data leads to higher accuracies. In the second step ground control points are used for bundle adjustment to improve the exterior orientation and the absolute accuracy of the SPOT-DEM.", "title": "" }, { "docid": "12d31865b311f0ad88ef7dd694a2cfc1", "text": "With the advance of wireless communication systems and increasing importance of other wireless applications, wideband and low profile antennas are in great demand for both commercial and military applications. Multi-band and wideband antennas are desirable in personal communication systems, small satellite communication terminals, and other wireless applications. Wideband antennas also find applications in Unmanned Aerial Vehicles (UAVs), Counter Camouflage, Concealment and Deception (CC&D), Synthetic Aperture Radar (SAR), and Ground Moving Target Indicators (GMTI). 
Some of these applications also require that an antenna be embedded into the airframe structure Traditionally, a wideband antenna in the low frequency wireless bands can only be achieved with heavily loaded wire antennas, which usually means different antennas are needed for different frequency bands. Recent progress in the study of fractal antennas suggests some attractive solutions for using a single small antenna operating in several frequency bands. The purpose of this article is to introduce the concept of the fractal, review the progress in fractal antenna study and implementation, compare different types of fractal antenna elements and arrays and discuss the challenge and future of this new type of antenna.", "title": "" }, { "docid": "737ccd74e69d32ab1fdfec7df9674a4c", "text": "Data in the sciences frequently occur as sequences of multidimensional arrays called tensors. How can hidden, evolving trends in such data be extracted while preserving the tensor structure? The model that is traditionally used is the linear dynamical system (LDS) with Gaussian noise, which treats the latent state and observation at each time slice as a vector. We present the multilinear dynamical system (MLDS) for modeling tensor time series and an expectation–maximization (EM) algorithm to estimate the parameters. The MLDS models each tensor observation in the time series as the multilinear projection of the corresponding member of a sequence of latent tensors. The latent tensors are again evolving with respect to a multilinear projection. Compared to the LDS with an equal number of parameters, the MLDS achieves higher prediction accuracy and marginal likelihood for both artificial and real datasets.", "title": "" }, { "docid": "624e78153b58a69917d313989b72e6bf", "text": "In this article we describe a novel Particle Swarm Optimization (PSO) approach to multi-objective optimization (MOO), called Time Variant Multi-Objective Particle Swarm Optimization (TV-MOPSO). TV-MOPSO is made adaptive in nature by allowing its vital parameters (viz., inertia weight and acceleration coefficients) to change with iterations. This adaptiveness helps the algorithm to explore the search space more efficiently. A new diversity parameter has been used to ensure sufficient diversity amongst the solutions of the non-dominated fronts, while retaining at the same time the convergence to the Pareto-optimal front. TV-MOPSO has been compared with some recently developed multi-objective PSO techniques and evolutionary algorithms for 11 function optimization problems, using different performance measures. 2007 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "359da4efff872d1fd762c0aef1aa590c", "text": "One of the most efficient ways for a learning-based robotic arm to learn to process complex tasks as human, is to directly learn from observing how human complete those tasks, and then imitate. Our idea is based on success of Deep Q-Learning (DQN) algorithm according to reinforcement learning, and then extend to Deep Deterministic Policy Gradient (DDPG) algorithm. We developed a learning-based method, combining modified DDPG and visual imitation network. Our approach acquires frames only from a monocular camera, and no need to either construct a 3D environment or generate actual points. 
The result we expected during training, was that robot would be able to move as almost the same as how human hands did.", "title": "" }, { "docid": "1351b9d778da2821362a1b4caa35e7e4", "text": "Though designing a data warehouse requires techniques completely different from those adopted for operational systems, no significant effort has been made so far to develop a complete and consistent design methodology for data warehouses. In this paper we outline a general methodological framework for data warehouse design, based on our Dimensional Fact Model (DFM). After analyzing the existing information system and collecting the user requirements, conceptual design is carried out semi-automatically starting from the operational database scheme. A workload is then characterized in terms of data volumes and expected queries, to be used as the input of the logical and physical design phases whose output is the final scheme for the data warehouse.", "title": "" }, { "docid": "9311198676b2cc5ad31145c53c91134d", "text": "A novel fractal called Fractal Clover Leaf (FCL) is introduced and shown to have well miniaturization capabilities. The proposed patches are fed by L-shape probe to achieve wide bandwidth operation in PCS band. A numerical parametric study on the proposed antenna is presented. It is found that the antenna can attain more than 72% size reduction as well as 17% impedance bandwidth (VSWR<2), in cost of less gain. It is also shown that impedance matching could be reached by tuning probe parameters. The proposed antenna is suitable for handset applications and tight packed planar phased arrays to achieve lower scan angels than rectangular patches.", "title": "" }, { "docid": "ca2258408035374cd4e7d1519e24e187", "text": "In this paper we propose a novel application of Hidden Markov Models to automatic generation of informative headlines for English texts. We propose four decoding parameters to make the headlines appear more like Headlinese, the language of informative newspaper headlines. We also allow for morphological variation in words between headline and story English. Informal and formal evaluations indicate that our approach produces informative headlines, mimicking a Headlinese style generated by humans.", "title": "" }, { "docid": "42386bee406c51e568667abec4bc6a5e", "text": "Digital projection technology has improved significantly in recent years. But, the relationship of cost with respect to available resolution in projectors is still super-linear. In this paper, we present a method that uses projector light modulator panels (e.g. LCD or DMD panels) of resolution n X n to create a perceptually close match to a target higher resolution cn X cn image, where c is a small integer greater than 1. This is achieved by enhancing the resolution using smaller pixels at specific regions of interest like edges.\n A target high resolution image (cn X cn) is first decomposed into (a) a high resolution (cn X cn) but sparse edge image, and (b) a complementary lower resolution (n X n) non-edge image. These images are then projected in a time sequential manner at a high frame rate to create an edge-enhanced image -- an image where the pixel density is not uniform but changes spatially. In 3D ready projectors with readily available refresh rate of 120Hz, such a temporal multiplexing is imperceptible to the user and the edge-enhanced image is perceptually almost identical to the target high resolution image.\n To create the higher resolution edge image, we introduce the concept of optical pixel sharing. 
This reduces the projected pixel size by a factor of 1/c2 while increasing the pixel density by c2 at the edges enabling true higher resolution edges. Due to the sparsity of the edge pixels in an image we are able to choose a sufficiently large subset of these to be displayed at the higher resolution using perceptual parameters. We present a statistical analysis quantifying the expected number of pixels that will be reproduced at the higher resolution and verify it for different types of images.", "title": "" }, { "docid": "98729fc6a6b95222e6a6a12aa9a7ded7", "text": "What good is self-control? We incorporated a new measure of individual differences in self-control into two large investigations of a broad spectrum of behaviors. The new scale showed good internal consistency and retest reliability. Higher scores on self-control correlated with a higher grade point average, better adjustment (fewer reports of psychopathology, higher self-esteem), less binge eating and alcohol abuse, better relationships and interpersonal skills, secure attachment, and more optimal emotional responses. Tests for curvilinearity failed to indicate any drawbacks of so-called overcontrol, and the positive effects remained after controlling for social desirability. Low self-control is thus a significant risk factor for a broad range of personal and interpersonal problems.", "title": "" }, { "docid": "5f4ae911491ee11f44a37e23b6ae5d5d", "text": "Decentralised (on-blockchain) and centralised (off–blockchain) platforms are available for the implementation of smart contracts. However, none of the two alternatives can individually provide the services and quality of services (QoS) imposed on smart contracts involved in a large class of applications. The reason is that blockchain platforms suffer from scalability, performance, transaction costs and other limitations. Likewise, off–blockchain platforms are afflicted by drawbacks emerging from their dependence on single trusted third parties. We argue that in several applications, hybrid platforms composed from the integration of on and off–blockchain platforms are more adequate. Developers that informatively choose between the three alternatives are likely to implement smart contracts that deliver the expected QoS. Hybrid architectures are largely unexplored. To help cover the gap and as a proof of concept, in this paper we discuss the implementation of smart contracts on hybrid architectures. We show how a smart contract can be split and executed partially on an off–blockchain contract compliance checker and partially on the rinkeby ethereum network. To test the solution, we expose it to sequences of contractual operations generated mechanically by a contract validator tool.", "title": "" }, { "docid": "71b48c67ba508bdd707340b5d1632018", "text": "Two-photon laser scanning microscopy of calcium dynamics using fluorescent indicators is a widely used imaging method for large-scale recording of neural activity in vivo. Here, we introduce volumetric two-photon imaging of neurons using stereoscopy (vTwINS), a volumetric calcium imaging method that uses an elongated, V-shaped point spread function to image a 3D brain volume. Single neurons project to spatially displaced 'image pairs' in the resulting 2D image, and the separation distance between projections is proportional to depth in the volume. To demix the fluorescence time series of individual neurons, we introduce a modified orthogonal matching pursuit algorithm that also infers source locations within the 3D volume. 
We illustrated vTwINS by imaging neural population activity in the mouse primary visual cortex and hippocampus. Our results demonstrated that vTwINS provides an effective method for volumetric two-photon calcium imaging that increases the number of neurons recorded while maintaining a high frame rate.", "title": "" }, { "docid": "241a1589619c2db686675327cab1e8da", "text": "This paper describes a simple computational model of joint torque and impedance in human arm movements that can be used to simulate three-dimensional movements of the (redundant) arm or leg and to design the control of robots and human-machine interfaces. This model, based on recent physiological findings, assumes that (1) the central nervous system learns the force and impedance to perform a task successfully in a given stable or unstable dynamic environment and (2) stiffness is linearly related to the magnitude of the joint torque and increased to compensate for environment instability. Comparison with existing data shows that this simple model is able to predict impedance geometry well.", "title": "" }, { "docid": "9afdeab9abb1bfde45c6e9f922181c6b", "text": "Aiming at the need for autonomous learning in reinforcement learning (RL), a quantitative emotion-based motivation model is proposed by introducing psychological emotional factors as the intrinsic motivation. The curiosity is used to promote or hold back agents' exploration of unknown states, the happiness index is used to determine the current state-action's happiness level, the control power is used to indicate agents' control ability over its surrounding environment, and together to adjust agents' learning preferences and behavioral patterns. To combine intrinsic emotional motivations with classic RL, two methods are proposed. The first method is to use the intrinsic emotional motivations to explore unknown environment and learn the environment transitioning model ahead of time, while the second method is to combine intrinsic emotional motivations with external rewards as the ultimate joint reward function, directly to drive agents' learning. As the result shows, in the simulation experiments in the rat foraging in maze scenario, both methods have achieved relatively good performance, compared with classic RL purely driven by external rewards.", "title": "" }, { "docid": "8c6b7ef0da1b54b84f6e3912238bae04", "text": "With rapid increasing text information, the need for a computer system to processing and analyzing this information are felt. One of the systems that exist in analyzing and processing of text is a text summarization in which large volume of text is summarized based on different algorithms. In this paper, by using BabelNet knowledge base and its concept graph, a system for summarizing text is offered. In proposed approach, concepts of words by using BabelNet knowledge base are extracted and concept graphs are produced and sentences, according to concepts and resulting graph are rated. Therefore, these rating concepts are utilized in final summarization. Also, a replication control approach is proposed in a way that selected concepts in each state are punished and this causes to produce summaries with less redundancy. To compare and evaluate the performance of the proposed method, DUC2004 is used and ROUGE used as evaluation metric. 
Compared with other methods, the proposed method produces higher-quality summaries with fewer redundancies.", "title": "" }, { "docid": "b01e3b03cd418b9748de7546ef7a9ca2", "text": "We describe a lightweight protocol for oblivious evaluation of a pseudorandom function (OPRF) in the presence of semihonest adversaries. In an OPRF protocol a receiver has an input r; the sender gets output s and the receiver gets output F(s; r), where F is a pseudorandom function and s is a random seed. Our protocol uses a novel adaptation of 1-out-of-2 OT-extension protocols, and is particularly efficient when used to generate a large batch of OPRF instances. The cost to realize m OPRF instances is roughly the cost to realize 3.5m instances of standard 1-out-of-2 OTs (using state-of-the-art OT extension). We explore in detail our protocol's application to semihonest secure private set intersection (PSI). The fastest state-of-the-art PSI protocol (Pinkas et al., Usenix 2015) is based on efficient OT extension. We observe that our OPRF can be used to remove their PSI protocol's dependence on the bit-length of the parties' items. We implemented both PSI protocol variants and found ours to be 3.1–3.6× faster than Pinkas et al. for PSI of 128-bit strings and sufficiently large sets. Concretely, ours requires only 3.8 seconds to securely compute the intersection of 2^20-size sets, regardless of the bit-length of the items. For very large sets, our protocol is only 4.3× slower than the insecure naive hashing approach for PSI.", "title": "" } ]
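The OPRF passage above (the last record in this list) reduces PSI to comparing PRF outputs. The toy sketch below shows only that final comparison step, with HMAC-SHA256 standing in for the PRF; the oblivious evaluation that makes the real construction secure is deliberately elided, and the key name and item sets are made up for the example. It is not a secure protocol, just an illustration of the data flow.

```python
# Toy illustration of the PSI-from-PRF comparison step (NOT a secure protocol).
import hmac, hashlib

def prf(key: bytes, item: str) -> bytes:
    # HMAC-SHA256 used only as a stand-in PRF for illustration.
    return hmac.new(key, item.encode(), hashlib.sha256).digest()

def psi_from_prf(sender_items, receiver_items, key=b"hypothetical-seed-s"):
    # Sender: sends F(s, x) for each of its items (order can be shuffled).
    sender_tags = {prf(key, x) for x in sender_items}
    # Receiver: in a real protocol it would obtain F(s, r) via the OPRF
    # without learning s; here we evaluate the PRF directly for illustration.
    return {r for r in receiver_items if prf(key, r) in sender_tags}

print(sorted(psi_from_prf({"alice", "bob", "carol"}, {"bob", "dave", "carol"})))
# -> ['bob', 'carol']
```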
scidocsrr
6624d43e050af91cfc5f8dce6458f3aa
Laser grooving of semiconductor wafers: Comparing a simplified numerical approach with experiments
[ { "docid": "ef706ea7a6dcd5b71602ea4c28eb9bd3", "text": "\"Stealth Dicing (SD) \" was developed to solve such inherent problems of dicing process as debris contaminants and unnecessary thermal damage on work wafer. In SD, laser beam power of transmissible wavelength is absorbed only around focal point in the wafer by utilizing temperature dependence of absorption coefficient of the wafer. And these absorbed power forms modified layer in the wafer, which functions as the origin of separation in followed separation process. Since only the limited interior region of a wafer is processed by laser beam irradiation, damages and debris contaminants can be avoided in SD. Besides characteristics of devices will not be affected. Completely dry process of SD is another big advantage over other dicing methods.", "title": "" } ]
[ { "docid": "221f28bc87e82f8264880c773b8b2fbe", "text": "BACKGROUND\nMuscle weakness in old age is associated with physical function decline. Progressive resistance strength training (PRT) exercises are designed to increase strength.\n\n\nOBJECTIVES\nTo assess the effects of PRT on older people and identify adverse events.\n\n\nSEARCH STRATEGY\nWe searched the Cochrane Bone, Joint and Muscle Trauma Group Specialized Register (to March 2007), the Cochrane Central Register of Controlled Trials (The Cochrane Library 2007, Issue 2), MEDLINE (1966 to May 01, 2008), EMBASE (1980 to February 06 2007), CINAHL (1982 to July 01 2007) and two other electronic databases. We also searched reference lists of articles, reviewed conference abstracts and contacted authors.\n\n\nSELECTION CRITERIA\nRandomised controlled trials reporting physical outcomes of PRT for older people were included.\n\n\nDATA COLLECTION AND ANALYSIS\nTwo review authors independently selected trials, assessed trial quality and extracted data. Data were pooled where appropriate.\n\n\nMAIN RESULTS\nOne hundred and twenty one trials with 6700 participants were included. In most trials, PRT was performed two to three times per week and at a high intensity. PRT resulted in a small but significant improvement in physical ability (33 trials, 2172 participants; SMD 0.14, 95% CI 0.05 to 0.22). Functional limitation measures also showed improvements: e.g. there was a modest improvement in gait speed (24 trials, 1179 participants, MD 0.08 m/s, 95% CI 0.04 to 0.12); and a moderate to large effect for getting out of a chair (11 trials, 384 participants, SMD -0.94, 95% CI -1.49 to -0.38). PRT had a large positive effect on muscle strength (73 trials, 3059 participants, SMD 0.84, 95% CI 0.67 to 1.00). Participants with osteoarthritis reported a reduction in pain following PRT(6 trials, 503 participants, SMD -0.30, 95% CI -0.48 to -0.13). There was no evidence from 10 other trials (587 participants) that PRT had an effect on bodily pain. Adverse events were poorly recorded but adverse events related to musculoskeletal complaints, such as joint pain and muscle soreness, were reported in many of the studies that prospectively defined and monitored these events. Serious adverse events were rare, and no serious events were reported to be directly related to the exercise programme.\n\n\nAUTHORS' CONCLUSIONS\nThis review provides evidence that PRT is an effective intervention for improving physical functioning in older people, including improving strength and the performance of some simple and complex activities. However, some caution is needed with transferring these exercises for use with clinical populations because adverse events are not adequately reported.", "title": "" }, { "docid": "dddac78006656304275b268fa0a4fd49", "text": "With the growing popularity of MOOCs and computer-aided learning systems, as well as the growth of social networks in education, we have begun to collect increasingly large amounts of educational graph data. This graph data includes complex user-system interaction logs, student-produced graphical representations, and conceptual hierarchies that large amounts of graph data have. There is abundant pedagogical information beneath these graph datasets. As a result, graph data mining techniques such as graph grammar induction, path analysis, and prerequisite relationship prediction has become increasingly important. Also, graphical model techniques (e.g. 
Hidden Markov Models or probabilistic graphical models) has become more and more important to analyze educational data. While educational graph data and data analysis based on graphical models has grown increasingly common, it’s necessary to build a strong community for educational graph researchers. This workshop will provide such a forum for interested researchers to discuss ongoing work, share common graph mining problems, and identify technique challenges. Researchers are encouraged to discuss prior analyses of graph data and educational data analyses based on graphical models. We also welcome discussions of in-progress work from researchers seeking to identify suitable sources of data or appropriate analytical tools. 1. PRIOR WORKSHOPS So far, we have successfully held two international workshops on Graph-based Educational Data-Mining. The first one was held in London, co-located with EDM 2014. It featured 12 publications of which 6 were full-papers, the remainder short papers. Having roughly 25 full-day attendees and additional drop-ins, it led to a number of individual connections between researchers and the formation of an e-mail list for group discussion. The second one was co-located with EDM 2015 in Spain. 10 authors presented their published work including 4 full papers and 6 short papers there. 2. OVERVIEW AND RELEVANCE Graph-based data mining and educational data analysis based on graphical models have become emerging disciplines in EDM. Large-scale graph data, such as social network data, complex user-system interaction logs, student-produced graphical representations, and conceptual hierarchies, carries multiple levels of pedagogical information. Exploring such data can help to answer a range of critical questions such as: • For social network data from MOOCs, online forums, and user-system interaction logs: – What social networks can foster or hinder learning? – Do users of online learning tools behave as we expect them to? – How does the interaction graph evolve over time? – What data we can use to define relationship graphs? – What path(s) do high-performing students take through online materials? – What is the impact of teacher-interaction on students’ observed behavior? – Can we identify students who are particularly helpful in a course? • For computer-aided learning (writing, programming, etc.): – What substructures are commonly found in studentproduced diagrams? – Can we use prior student data to identify students’ solution plan, if any? – Can we automatically induce empirically-valid graph rules from prior student data and use induced graph rules to support automated grading systems? Graphical model techniques, such as Bayesian Network, Markov Random Field, and Conditional Random Field, have been widely used in EDM for student modeling, decision making, and knowledge tracing. Utilizing these approaches can help to: • Learn students’ behavioral patterns. • Predict students’ behaviors and learning outcomes.", "title": "" }, { "docid": "d91e433a23545cac171006c40c2c2006", "text": "In this paper, we revisit the impact of skilled emigration on human capital accumulation using new panel data covering 147 countries on the period 1975-2000. We derive testable predictions from a stylized theoretical model and test them in dynamic regression models. Our empirical analysis predicts conditional convergence of human capital indicators. Our …ndings also reveal that skilled migration prospects foster human capital accumulation in low-income countries. 
In these countries, a net brain gain can be obtained if the skilled emigration rate is not too large (i.e. does not exceed 20 to 30 percent depending on other country characteristics). On the contrary, we find no evidence of a significant incentive mechanism in middle-income and, unsurprisingly, in high-income countries. JEL Classifications: O15-O40-F22-F43 Keywords: human capital, convergence, brain drain We thank anonymous referees for their helpful comments. Suggestions from Barry Chiswick, Hubert Jayet, Joel Hellier and Fatemeh Shadman-Mehta were also appreciated. This article benefited from comments received at the SIUTE seminar (Lille, January 2006), the CReAM conference on "Immigration: Impacts, Integration and Intergenerational Issues" (London, March 2006), the Spring Meeting of Young Economists (Sevilla, May 2006), the XIV Villa Mondragone International Economic Seminar (Rome, July 2006) and the ESPE meeting (Chicago, 2007). The third author is grateful for the financial support from the Belgian French-speaking Community's programme "Action de recherches concertées" (ARC 03/08 -302) and from the Belgian Federal Government (PAI grant P6/07 Economic Policy and Finance in the Global Equilibrium Analysis and Social Evaluation). The usual disclaimers apply. Corresponding author: Michel Beine (michel.beine@uni.lu), University of Luxembourg, 162a av. de la Faiencerie, L-1511 Luxembourg.", "title": "" }, { "docid": "79f1473d4eb0c456660543fda3a648f1", "text": "We examine the problem of learning and planning on high-dimensional domains with long horizons and sparse rewards. Recent approaches have shown great successes in many Atari 2600 domains. However, domains with long horizons and sparse rewards, such as Montezuma's Revenge and Venture, remain challenging for existing methods. Methods using abstraction [5, 13] have been shown to be useful in tackling long-horizon problems. We combine recent techniques of deep reinforcement learning with existing model-based approaches using an expert-provided state abstraction. We construct toy domains that elucidate the problem of long horizons, sparse rewards and high-dimensional inputs, and show that our algorithm significantly outperforms previous methods on these domains. Our abstraction-based approach outperforms Deep Q-Networks [11] on Montezuma's Revenge and Venture, and exhibits backtracking behavior that is absent from previous methods.", "title": "" }, { "docid": "28ff49eb7af07fdf31694b6280fe8286", "text": "In this paper, the design of an unbalanced-fed 180° phase shifter with wideband performance is proposed. This phase shifter uses a single dielectric substrate, and consists of a multi-section Wilkinson divider, a reference line, and a phase inverter containing a balanced-unbalanced transition at the input and output ports. The simulated and measured results show that this device provides 180° phase shift with low insertion loss in the frequency band from 1 to 10 GHz.", "title": "" }, { "docid": "8140838d7ef17b3d6f6c042442de0f73", "text": "The two vascular systems of our body are the blood and lymphatic vasculature. Our understanding of the cellular and molecular processes controlling the development of the lymphatic vasculature has progressed significantly in the last decade. In mammals, this is a stepwise process that starts in the embryonic veins, where lymphatic EC (LEC) progenitors are initially specified.
The differentiation and maturation of these progenitors continues as they bud from the veins to produce scattered primitive lymph sacs, from which most of the lymphatic vasculature is derived. Here, we summarize our current understanding of the key steps leading to the formation of a functional lymphatic vasculature.", "title": "" }, { "docid": "ad918df13aaa2e78c92a7626699f1ecc", "text": "Machine learning techniques, namely convolutional neural networks (CNN) and regression forests, have recently shown great promise in performing 6-DoF localization of monocular images. However, in most cases imagesequences, rather only single images, are readily available. To this extent, none of the proposed learning-based approaches exploit the valuable constraint of temporal smoothness, often leading to situations where the per-frame error is larger than the camera motion. In this paper we propose a recurrent model for performing 6-DoF localization of video-clips. We find that, even by considering only short sequences (20 frames), the pose estimates are smoothed and the localization error can be drastically reduced. Finally, we consider means of obtaining probabilistic pose estimates from our model. We evaluate our method on openly-available real-world autonomous driving and indoor localization datasets.", "title": "" }, { "docid": "bf9da537d5efcc5b90609db9f9ec39b9", "text": "why the pattern is found in other types of skin lesions with active vascularization, such as our patient’s scars. When first described in actinic keratosis, rosettes were characterized as ‘‘4 white points arranged as a 4-leaf clover.’’2 The sign has since been reported in other skin lesions such as squamous cell carcinoma, basal cell carcinoma, melanoma, and lichenoid keratosis.3--7 Rosettes are believed to be the result of an optical effect caused by interaction between polarized light and follicular openings.6 The rainbow pattern and rosettes are not considered to be specific dermoscopic features of the lesion. Since it appears that they are secondary effects of the interaction between different skin structures and polarized light, they will likely be observed in various types of skin lesions. References", "title": "" }, { "docid": "256b22fd89c0f7311e043efd2dd142f9", "text": "Suicide rates are higher in later life than in any other age group. The design of effective suicide prevention strategies hinges on the identification of specific, quantifiable risk factors. Methodological challenges include the lack of systematically applied terminology in suicide and risk factor research, the low base rate of suicide, and its complex, multidetermined nature. Although variables in mental, physical, and social domains have been correlated with completed suicide in older adults, controlled studies are necessary to test hypothesized risk factors. Prospective cohort and retrospective case control studies indicate that affective disorder is a powerful independent risk factor for suicide in elders. Other mental illnesses play less of a role. Physical illness and functional impairment increase risk, but their influence appears to be mediated by depression. Social ties and their disruption are significantly and independently associated with risk for suicide in later life, relationships between which may be moderated by a rigid, anxious, and obsessional personality style. Affective illness is a highly potent risk factor for suicide in later life with clear implications for the design of prevention strategies. 
Additional research is needed to define more precisely the interactions between emotional, physical, and social factors that determine risk for suicide in the older adult.", "title": "" }, { "docid": "a271371ba28be10b67e31ecca6f3aa88", "text": "The toxicity and repellency of the bioactive chemicals of clove (Syzygium aromaticum) powder, eugenol, eugenol acetate, and beta-caryophyllene were evaluated against workers of the red imported fire ant, Solenopsis invicta Buren. Clove powder applied at 3 and 12 mg/cm2 provided 100% ant mortality within 6 h, and repelled 99% within 3 h. Eugenol was the fastest acting compound against red imported fire ant compared with eugenol acetate, beta-caryophyllene, and clove oil. The LT50 values inclined exponentially with the increase in the application rate of the chemical compounds tested. However, repellency did not increase with the increase in the application rate of the chemical compounds tested, but did with the increase in exposure time. Eugenol, eugenol acetate, as well as beta-caryophyllene and clove oil may provide another tool for red imported fire ant integrated pest management, particularly in situations where conventional insecticides are inappropriate.", "title": "" }, { "docid": "e2df1a9d3b56e32deb660ba705953466", "text": "Graph and tree visualization techniques enable interactive exploration of complex relations while communicating topology. However, most existing techniques have not been designed for situations where visual information such as images is also present at each node and must be displayed. This paper presents MoireGraphs to address this need. MoireGraphs combine a new focus+context radial graph layout with a suite of interaction techniques (focus strength changing, radial rotation, level highlighting, secondary foci, animated transitions and node information) to assist in the exploration of graphs with visual nodes. The method is scalable to hundreds of displayed visual nodes.", "title": "" }, { "docid": "53064782a8f213b5f9fa68be084fde1b", "text": "Robotic lower limb exoskeletons have been built for augmenting human performance, assisting with disabilities, studying human physiology, and re-training motor deficiencies. At the University of Michigan Human Neuromechanics Laboratory, we have built pneumatically-powered lower limb exoskeletons for the last two purposes. Most of our prior research has focused on ankle joint exoskeletons because of the large contribution from plantar flexors to the mechanical work performed during gait. One way we control the exoskeletons is with proportional myoelectric control, effectively increasing the strength of the wearer with a physiological mode of control. Healthy human subjects quickly adapt to walking with the robotic ankle exoskeletons, reducing their overall energy expenditure. Individuals with incomplete spinal cord injury have demonstrated rapid modification of muscle recruitment patterns with practice walking with the ankle exoskeletons. 
Evidence suggests that proportional myoelectric control may have distinct advantages over other types of control for robotic exoskeletons in basic science and rehabilitation.", "title": "" }, { "docid": "6b6dd935eebca1ea08e10af8afcbfbdd", "text": "CONTEXT\nThe quality of consumer health information on the World Wide Web is an important issue for medicine, but to date no systematic and comprehensive synthesis of the methods and evidence has been performed.\n\n\nOBJECTIVES\nTo establish a methodological framework on how quality on the Web is evaluated in practice, to determine the heterogeneity of the results and conclusions, and to compare the methodological rigor of these studies, to determine to what extent the conclusions depend on the methodology used, and to suggest future directions for research.\n\n\nDATA SOURCES\nWe searched MEDLINE and PREMEDLINE (1966 through September 2001), Science Citation Index (1997 through September 2001), Social Sciences Citation Index (1997 through September 2001), Arts and Humanities Citation Index (1997 through September 2001), LISA (1969 through July 2001), CINAHL (1982 through July 2001), PsychINFO (1988 through September 2001), EMBASE (1988 through June 2001), and SIGLE (1980 through June 2001). We also conducted hand searches, general Internet searches, and a personal bibliographic database search.\n\n\nSTUDY SELECTION\nWe included published and unpublished empirical studies in any language in which investigators searched the Web systematically for specific health information, evaluated the quality of Web sites or pages, and reported quantitative results. We screened 7830 citations and retrieved 170 potentially eligible full articles. A total of 79 distinct studies met the inclusion criteria, evaluating 5941 health Web sites and 1329 Web pages, and reporting 408 evaluation results for 86 different quality criteria.\n\n\nDATA EXTRACTION\nTwo reviewers independently extracted study characteristics, medical domains, search strategies used, methods and criteria of quality assessment, results (percentage of sites or pages rated as inadequate pertaining to a quality criterion), and quality and rigor of study methods and reporting.\n\n\nDATA SYNTHESIS\nMost frequently used quality criteria used include accuracy, completeness, readability, design, disclosures, and references provided. Fifty-five studies (70%) concluded that quality is a problem on the Web, 17 (22%) remained neutral, and 7 studies (9%) came to a positive conclusion. Positive studies scored significantly lower in search (P =.02) and evaluation (P =.04) methods.\n\n\nCONCLUSIONS\nDue to differences in study methods and rigor, quality criteria, study population, and topic chosen, study results and conclusions on health-related Web sites vary widely. Operational definitions of quality criteria are needed.", "title": "" }, { "docid": "01eaab0d3c2ef1d4aec1adc08efd1b67", "text": "A printed circuit board, or (PCB) is used to mechanically support and electrically connect electronic components using conductive pathways, track or signal traces etched from copper sheets laminated onto anon conductive substrate. The automatic inspection of PCBs serves a purpose which is traditional in computer technology. The purpose is to relieve human inspectors of the tedious and inefficient task of looking for those defects in PCBs which could lead to electric failure. In this project Machine Vision PCB Inspection System is applied at the first step of manufacturing, i.e., the making of bare PCB. 
We first compare a PCB standard image with a PCB image, using a simple subtraction algorithm that can highlight the main problem-regions. We have also seen the effect of noise in a PCB image that at what level this method is suitable to detect the faulty image. Our focus is to detect defects on printed circuit boards & to see the effect of noise. Typical defects that can be detected are over etchings (opens), under-etchings (shorts), holes etc. Index terms – Machine vision, PCB defects, Image Subtraction Algorithm, PCB Inspection", "title": "" }, { "docid": "0e31140dcece980db65943c360f8615e", "text": "Balinese people have one of the civilization histories and cultural heritage are handwritten in Balinese script on palm leaves known as Balinese Papyrus (LontarAksara Bali). Until now that cultural heritage is still continuously strived its preservation along with the implementation begin to be abandoned in public life. Some of Balinese Papyrus now begins to rot and fade under influenced by age. Information technology utilization can be a tool to solve the problems faced in the preservation of the Balinese papyrus. By using digital image processing techniques, the papyrus script can be reconstructed digitally so that it can be retrieved and store the content in the digital media. Balinese papyrus reconstructed through several processes from scanning into a digital image, performing preprocessing for image quality improvement, segmenting the Balinese characters on image, doing character recognition using LDA algorithm, rearranging the result of recognition in accordance with the original content in papyrus, and translating that characters result into Latin. LDA algorithm quite successfully performs the classification associated with handwritten character recognition.", "title": "" }, { "docid": "4520cafacd4794ec942030252652ae7c", "text": "While the NAND flash memory is widely used as the storage medium in modern sensor systems, the aggressive shrinking of process geometry and an increase in the number of bits stored in each memory cell will inevitably degrade the reliability of NAND flash memory. In particular, it’s critical to enhance metadata reliability, which occupies only a small portion of the storage space, but maintains the critical information of the file system and the address translations of the storage system. Metadata damage will cause the system to crash or a large amount of data to be lost. This paper presents Asymmetric Programming, a highly reliable metadata allocation strategy for MLC NAND flash memory storage systems. Our technique exploits for the first time the property of the multi-page architecture of MLC NAND flash memory to improve the reliability of metadata. The basic idea is to keep metadata in most significant bit (MSB) pages which are more reliable than least significant bit (LSB) pages. Thus, we can achieve relatively low bit error rates for metadata. Based on this idea, we propose two strategies to optimize address mapping and garbage collection. We have implemented Asymmetric Programming on a real hardware platform. The experimental results show that Asymmetric Programming can achieve a reduction in the number of page errors of up to 99.05% with the baseline error correction scheme. OPEN ACCESS Sensors 2014, 14 18852", "title": "" }, { "docid": "68de3b6f111b61cdf1babd4acbe5467d", "text": "Music recommender systems are lately seeing a sharp increase in popularity due to many novel commercial music streaming services. 
Most systems, however, do not decently take their listeners into account when recommending music items. In this note, we summarize our recent work and report our latest findings on the topics of tailoring music recommendations to individual listeners and to groups of listeners sharing certain characteristics. We focus on two tasks: context-aware automatic playlist generation (also known as serial recommendation) using sensor data and music artist recommendation using social media data.", "title": "" }, { "docid": "4d51e2a6f1ddfb15753117b0f22e0fad", "text": "We describe distributed algorithms for two widely-used topic models, namely the Latent Dirichlet Allocation (LDA) model, and the Hierarchical Dirichet Process (HDP) model. In our distributed algorithms the data is partitioned across separate processors and inference is done in a parallel, distributed fashion. We propose two distributed algorithms for LDA. The first algorithm is a straightforward mapping of LDA to a distributed processor setting. In this algorithm processors concurrently perform Gibbs sampling over local data followed by a global update of topic counts. The algorithm is simple to implement and can be viewed as an approximation to Gibbs-sampled LDA. The second version is a model that uses a hierarchical Bayesian extension of LDA to directly account for distributed data. This model has a theoretical guarantee of convergence but is more complex to implement than the first algorithm. Our distributed algorithm for HDP takes the straightforward mapping approach, and merges newly-created topics either by matching or by topic-id. Using five real-world text corpora we show that distributed learning works well in practice. For both LDA and HDP, we show that the converged test-data log probability for distributed learning is indistinguishable from that obtained with single-processor learning. Our extensive experimental results include learning topic models for two multi-million document collections using a 1024-processor parallel computer.", "title": "" }, { "docid": "724b049bd1ba662ebc29cc9eddad4a82", "text": "The way people look in terms of facial attributes (ethnicity, hair color, facial hair, etc.) and the clothes or accessories they wear (sunglasses, hat, hoodies, etc.) is highly dependent on geo-location and weather condition, respectively. This work explores, for the first time, the use of this contextual information, as people with wearable cameras walk across different neighborhoods of a city, in order to learn a rich feature representation for facial attribute classification, without the costly manual annotation required by previous methods. By tracking the faces of casual walkers on more than 40 hours of egocentric video, we are able to cover tens of thousands of different identities and automatically extract nearly 5 million pairs of images connected by or from different face tracks, along with their weather and location context, under pose and lighting variations. These image pairs are then fed into a deep network that preserves similarity of images connected by the same track, in order to capture identity-related attribute features, and optimizes for location and weather prediction to capture additional facial attribute features. Finally, the network is fine-tuned with manually annotated samples. We perform an extensive experimental analysis on wearable data and two standard benchmark datasets based on web images (LFWA and CelebA). Our method outperforms by a large margin a network trained from scratch. 
Moreover, even without using manually annotated identity labels for pre-training as in previous methods, our approach achieves results that are better than the state of the art.", "title": "" }, { "docid": "6844bbc2d7690056af95c9c3cd2ed665", "text": "Many decisions taken during software development impact the resulting application performance. The key decisions whose potential impact is large are usually carefully weighed. In contrast, the same care is not used for many decisions whose individual impact is likely to be small -- simply because the costs would outweigh the benefits. Developer opinion is the common deciding factor for these cases, and our goal is to provide the developer with information that would help form such opinion, thus preventing performance loss due to the accumulated effect of many poor decisions.\n Our method turns performance unit tests into recipes for generating performance documentation. When the developer selects an interface and workload of interest, relevant performance documentation is generated interactively. This increases performance awareness -- with performance information available alongside standard interface documentation, developers should find it easier to take informed decisions even in situations where expensive performance evaluation is not practical. We demonstrate the method on multiple examples, which show how equipping code with performance unit tests works.", "title": "" } ]
scidocsrr
db6d53c426ba019cda9eb595bbdf1ac1
An Efficient High-Frequency Drive Circuit for GaN Power HFETs
[ { "docid": "361bdfcbe909788f674683c9d122dea4", "text": "High frequency pulse-width modulation (PWM) converters generally suffer from excessive gate drive loss. This paper presents a resonant gate drive circuit that features efficient energy recovery at both charging and discharging transitions. Following a brief introduction of metal oxide semiconductor field effect transistor (MOSFET) gate drive loss, this paper discusses the gate drive requirements for high frequency PWM applications and common shortcomings of existing resonant gate drive techniques. To overcome the apparent disparity, a new resonant MOSFET gate drive circuit is then presented. The new circuit produces low gate drive loss, fast switching speed, clamped gate voltages, immunity to false trigger and has no limitation on the duty cycle. Experimental results further verify its functionality.", "title": "" } ]
[ { "docid": "4378eeacb8596690468c38ce26d44bdf", "text": "Classification algorithms can lead to biased decisions, so researchers are trying to identify such biases and root them out.", "title": "" }, { "docid": "32ae0b0c5b3ca3a7ede687872d631d29", "text": "Background—The benefit of catheter-based reperfusion for acute myocardial infarction (MI) is limited by a 5% to 15% incidence of in-hospital major ischemic events, usually caused by infarct artery reocclusion, and a 20% to 40% need for repeat percutaneous or surgical revascularization. Platelets play a key role in the process of early infarct artery reocclusion, but inhibition of aggregation via the glycoprotein IIb/IIIa receptor has not been prospectively evaluated in the setting of acute MI. Methods and Results—Patients with acute MI of <12 hours' duration were randomized, on a double-blind basis, to placebo or abciximab if they were deemed candidates for primary PTCA. The primary efficacy end point was death, reinfarction, or any (urgent or elective) target vessel revascularization (TVR) at 6 months by intention-to-treat (ITT) analysis. Other key prespecified end points were early (7 and 30 days) death, reinfarction, or urgent TVR. The baseline clinical and angiographic variables of the 483 (242 placebo and 241 abciximab) patients were balanced. There was no difference in the incidence of the primary 6-month end point (ITT analysis) in the 2 groups (28.1% and 28.2%, P=0.97, of the placebo and abciximab patients, respectively). However, abciximab significantly reduced the incidence of death, reinfarction, or urgent TVR at all time points assessed (9.9% versus 3.3%, P=0.003, at 7 days; 11.2% versus 5.8%, P=0.03, at 30 days; and 17.8% versus 11.6%, P=0.05, at 6 months). Analysis by actual treatment with PTCA and study drug demonstrated a considerable effect of abciximab with respect to death or reinfarction: 4.7% versus 1.4%, P=0.047, at 7 days; 5.8% versus 3.2%, P=0.20, at 30 days; and 12.0% versus 6.9%, P=0.07, at 6 months. The need for unplanned, "bail-out" stenting was reduced by 42% in the abciximab group (20.4% versus 11.9%, P=0.008). Major bleeding occurred significantly more frequently in the abciximab group (16.6% versus 9.5%, P=0.02), mostly at the arterial access site. There was no intracranial hemorrhage in either group. Conclusions—Aggressive platelet inhibition with abciximab during primary PTCA for acute MI yielded a substantial reduction in the acute (30-day) phase for death, reinfarction, and urgent target vessel revascularization. However, the bleeding rates were excessive, and the 6-month primary end point, which included elective revascularization, was not favorably affected. (Circulation. 1998;98:734-741.)", "title": "" }, { "docid": "cd89079c74f5bb0218be67bf680b410f", "text": "This paper illustrates a sentiment analysis approach to extract sentiments associated with polarities of positive or negative for specific subjects from a document, instead of classifying the whole document into positive or negative. The essential issues in sentiment analysis are to identify how sentiments are expressed in texts and whether the expressions indicate positive (favorable) or negative (unfavorable) opinions toward the subject. In order to improve the accuracy of the sentiment analysis, it is important to properly identify the semantic relationships between the sentiment expressions and the subject.
By applying semantic analysis with a syntactic parser and sentiment lexicon, our prototype system achieved high precision (75-95%, depending on the data) in finding sentiments within Web pages and news articles.", "title": "" }, { "docid": "0396940ea3ced8d79ba3eda1fae2c469", "text": "Adblocking tools like Adblock Plus continue to rise in popularity, potentially threatening the dynamics of advertising revenue streams. In response, a number of publishers have ramped up efforts to develop and deploy mechanisms for detecting and/or counter-blocking adblockers (which we refer to as anti-adblockers), effectively escalating the online advertising arms race. In this paper, we develop a scalable approach for identifying third-party services shared across multiple websites and use it to provide a first characterization of antiadblocking across the Alexa Top-5K websites. We map websites that perform anti-adblocking as well as the entities that provide anti-adblocking scripts. We study the modus operandi of these scripts and their impact on popular adblockers. We find that at least 6.7% of websites in the Alexa Top-5K use anti-adblocking scripts, acquired from 12 distinct entities – some of which have a direct interest in nourishing the online advertising industry.", "title": "" }, { "docid": "0e2a31084fd377872da7b6fd9079d271", "text": "Speech enhancement under noise condition has always been an intriguing research topic. In this paper, we propose a new Deep Neural Networks (DNNs) based architecture for speech enhancement. In contrast to standard feed forward network architecture, we add skip connections between network inputs and outputs to indirectly force the DNNs to learn ideal ratio mask. We also show that the performance can be further improved by stacking multiple such network blocks. Experimental results demonstrate that our proposed architecture can achieve considerably better performance than the existing method in terms of three commonly used objective measurements under two real noise conditions.", "title": "" }, { "docid": "1d8667d40c6e6cd5881cf4fa0b788f10", "text": "While recent neural encoder-decoder models have shown great promise in modeling open-domain conversations, they often generate dull and generic responses. Unlike past work that has focused on diversifying the output of the decoder at word-level to alleviate this problem, we present a novel framework based on conditional variational autoencoders that captures the discourse-level diversity in the encoder. Our model uses latent variables to learn a distribution over potential conversational intents and generates diverse responses using only greedy decoders. We have further developed a novel variant that is integrated with linguistic prior knowledge for better performance. Finally, the training procedure is improved by introducing a bag-of-word loss. Our proposed models have been validated to generate significantly more diverse responses than baseline approaches and exhibit competence in discourse-level decision-making.1", "title": "" }, { "docid": "cf38afa95362a4a86d88787fbf3d91ef", "text": "Pneumatic muscles with similar characteristics to biological muscles have been widely used in robots, and thus are promising drivers for frog inspired robots. However, the application and nonlinearity of the pneumatic system limit the advance. On the basis of the swimming mechanism of the frog, a frog-inspired robot based on pneumatic muscles is developed. 
To realize the independent tasks by the robot, a pneumatic system with internal chambers, micro air pump, and valves is implemented. The micro pump is used to maintain the pressure difference between the source and exhaust chambers. The pneumatic muscles are controlled by high-speed switch valves which can reduce the robot cost, volume, and mass. A dynamic model of the pneumatic system is established for the simulation to estimate the system, including the chamber, muscle, and pneumatic circuit models. The robot design is verified by the robot swimming experiments and the dynamic model is verified through the experiments and simulations of the pneumatic system. The simulation results are compared to analyze the functions of the source pressure, internal volume of the muscle, and circuit flow rate which is proved the main factor that limits the response of muscle pressure. The proposed research provides the application of the pneumatic muscles in the frog inspired robot and the pneumatic model to study muscle controller.", "title": "" }, { "docid": "a1c9553dbe9d4f9f9b5d81feb9ece9d5", "text": "Knowledge tracing is a sequence prediction problem where the goal is to predict the outcomes of students over questions as they are interacting with a learning platform. By tracking the evolution of the knowledge of some student, one can optimize instruction. Existing methods are either based on temporal latent variable models, or factor analysis with temporal features. We here show that factorization machines (FMs), a model for regression or classification, encompasses several existing models in the educational literature as special cases, notably additive factor model, performance factor model, and multidimensional item response theory. We show, using several real datasets of tens of thousands of users and items, that FMs can estimate student knowledge accurately and fast even when student data is sparsely observed, and handle side information such as multiple knowledge components and number of attempts at item or skill level. Our approach allows to fit student models of higher dimension than existing models, and provides a testbed to try new combinations of features in order to improve existing models. Modeling student learning is key to be able to detect students that need further attention, or recommend automatically relevant learning resources. Initially, models were developed for students sitting for standardized tests, where students could read every problem statement, and missing answers could be treated as incorrect. However, in online platforms such as MOOCs, students attempt some exercises, but do not even look at other ones. Also, they may learn between different attempts. How to measure knowledge when students have attempted different questions? We want to predict the performance of a set I of students, say users, over a set J of questions, say items (we will interchangeably refer to questions as items, problems, or tasks). Each student can attempt a question multiple times, and may learn between successive attempts. We assume we observe ordered triplets (i, j, o) ∈ I × J × {0, 1} which encode the fact that student i attempted question j and got it either correct (o = 1) or incorrect (o = 0). Triplets are sorted chronologically. Then, given a new pair (i′, j′), we need to predict whether student i′ will get question j′ correct or incorrect. We can also assume extra knowledge about users, or items. 
So far, various models have been designed for student modeling, either based on prediction of sequences (Piech et al. 2015), or factor analysis (Thai-Nghe et al. 2011; Lavoué et al. 2018). Most of existing techniques model students or questions with unidimensional parameters. In this paper, we generalize these models to higher dimensions and manage to train efficiently student models of dimension up to 20. Our family of models is particularly convenient when observations from students are sparse, e.g. when some students attempted few questions, or some questions were answered by few students, which is most of the data usually encountered in online platforms such as MOOCs. When fitting student models, it is better to rely on all the information available at hand. In order to get information about questions, one can identify the knowledge components (KCs) involved in each question. This side information is usually encoded under the form of a q-matrix, that maps items to knowledge components: qjk is 1 if item j involves KC k, 0 otherwise. In this paper, we will also note KC(j) the sets of skills involved by question j, i.e. KC(j) = {k|qjk = 1}. In order to model different attempts, one can keep track of how many times a student has attempted a question, or how many times a student has had the opportunity to acquire a skill, while interacting with the learning material. Our experiments show, in particular, that: • It is better to estimate a bias for each item (not only skill), which popular educational data mining (EDM) models do not. • Most existing models in EDM cannot handle side information such as multiple skills for one item, but the proposed approach does. • Side information improves performance more than increasing the latent dimension. To the best of our knowledge, this is the most generic framework that incorporates side information into a student model. For the sake of reproducibility, our implementation is available on GitHub (https://github.com/jilljenn/ktm). The interested reader can check our code and reuse it in order to try new combinations and devise new models. In Section 2, we show related work. In Section 3, we present a family of models, knowledge tracing machines, and recover famous models of the EDM literature as special cases. Then, in Section 4 we conduct experiments and show our results in Section 5. We conclude with further work in Section 6.", "title": "" }, { "docid": "1b22fc07be594e494255ee83d4946c7e", "text": "Onychomycosis is difficult to treat topically due to the deep location of the infection under the densely keratinized nail plate. In order to obtain an in vitro index that is relevant to the clinical efficacy of topical anti-onychomycosis drugs, we profiled five topical drugs: amorolfine, ciclopirox, efinaconazole, luliconazole, and terbinafine, for their nail permeabilities, keratin affinities, and anti-dermatophytic activities in the presence of keratin. Efinaconazole and ciclopirox permeated full-thickness human nails more deeply than luliconazole. Amorolfine and terbinafine did not show any detectable permeation. The free-drug concentration of efinaconazole in a 5% human nail keratin suspension was 24.9%, which was significantly higher than those of the other drugs (1.1-3.9%).
Additionally, efinaconazole was released from human nail keratin at a greater proportion than the other drugs. The MICs of the five drugs for Trichophyton rubrum were determined at various concentrations of keratin (0-20%) in RPMI 1640 medium. The MICs of ciclopirox were not affected by keratin, whereas those of efinaconazole were slightly increased and those of luliconazole and terbinafine were markedly increased in the presence of 20% keratin. Efficacy coefficients were calculated using the nail permeation flux and MIC in media without or with keratin. Efinaconazole showed the highest efficacy coefficient, which was determined using MIC in media with keratin. The order of efficacy coefficients determined using MIC in keratin-containing media rather than keratin-free media was consistent with that of complete cure rates in previously reported clinical trials. The present study revealed that efficacy coefficients determined using MIC in keratin-containing media are useful for predicting the clinical efficacies of topical drugs. In order to be more effective, topical drugs have to possess higher efficacy coefficients.", "title": "" }, { "docid": "20d00f63848b70f3a5688b68181088f2", "text": "This paper presents a method for modeling player decision making through the use of agents as AI-driven personas. The paper argues that artificial agents, as generative player models, have properties that allow them to be used as psychometrically valid, abstract simulations of a human player’s internal decision making processes. Such agents can then be used to interpret human decision making, as personas and playtesting tools in the game design process, as baselines for adapting agents to mimic classes of human players, or as believable, human-like opponents. This argument is explored in a crowdsourced decision making experiment, in which the decisions of human players are recorded in a small-scale dungeon themed puzzle game. Human decisions are compared to the decisions of a number of a priori defined“archetypical” agent-personas, and the humans are characterized by their likeness to or divergence from these. Essentially, at each step the action of the human is compared to what actions a number of reinforcement-learned agents would have taken in the same situation, where each agent is trained using a different reward scheme. Finally, extensions are outlined for adapting the agents to represent sub-classes found in the human decision making traces.", "title": "" }, { "docid": "444878620f18d3bb4b4c9eef96ba782e", "text": "Brain imaging studies over two decades have delineated the neural circuitry of anxiety and related disorders, particularly regions involved in fear processing and in obsessive-compulsive symptoms. The neural circuitry of fear processing involves the amygdala, anterior cingulate, and insular cortex, while cortico-striatal-thalamic circuitry plays a key role in obsessive-compulsive disorder. More recently, neuroimaging studies have examined how psychotherapy for anxiety and related disorders impacts on these neural circuits. Here we conduct a systematic review of the findings of such work, which yielded 19 functional magnetic resonance imaging studies examining the neural bases of cognitive-behavioral therapy (CBT) in 509 patients with anxiety and related disorders. We conclude that, although each of these related disorders is mediated by somewhat different neural circuitry, CBT may act in a similar way to increase prefrontal control of subcortical structures. 
These findings are consistent with an emphasis in cognitive-affective neuroscience on the potential therapeutic value of enhancing emotional regulation in various psychiatric conditions.", "title": "" }, { "docid": "cb697827829cbcb5920212a790a49f27", "text": "Functional imaging studies in blind subjects have shown tactile activation of cortical areas that normally subserve vision, but whether blind people have enhanced tactile acuity has long been controversial. We compared the passive tactile acuity of blind and sighted subjects on a fully automated grating orientation task and used multivariate Bayesian data analysis to determine predictors of acuity. Acuity was significantly superior in blind subjects, independently of the degree of childhood vision, light perception level, or Braille reading. Acuity was strongly dependent on the force of contact between the stimulus surface and the skin, declined with subject age, and was better in women than in men. Despite large intragroup variability, the difference between blind and sighted subjects was highly significant: the average blind subject had the acuity of an average sighted subject of the same gender but 23 years younger. The results suggest that crossmodal plasticity may underlie tactile acuity enhancement in blindness.", "title": "" }, { "docid": "c1914cb34eabf77731fa2bca2c183d21", "text": "Since road markings are one of the main landmarks used for traffic guidance, perceiving them may be a crucial task for autonomous vehicles. In visual approaches, road marking detection consists in detecting pixels of an image that corresponds to a road marking. Recently, most approaches have aimed on detecting lane markings only, and few of them proposed methods to detect other types of road markings. Moreover, most of those approaches are based on local gradient, which provides noisy detections caused by cluttered images. In this paper, we propose an alternative approach based on a deep Fully Convolutional Neural Network (FCNN) with an encoder-decoder architecture for road marking detection and segmentation. The experimental results reveal that the proposed approach can detect any road marking type in a high level of accuracy, resulting in a smooth segmentation.", "title": "" }, { "docid": "bb685e028e4f1005b7fe9da01f279784", "text": "Although there are few efficient algorithms in the literature for scientific workflow tasks allocation and scheduling for heterogeneous resources such as those proposed in grid computing context, they usually require a bounded number of computer resources that cannot be applied in Cloud computing environment. Indeed, unlike grid, elastic computing, such asAmazon's EC2, allows users to allocate and release compute resources on-demand and pay only for what they use. Therefore, it is reasonable to assume that the number of resources is infinite. This feature of Clouds has been called âillusion of infiniteresourcesâ. However, despite the proven benefits of using Cloud to run scientific workflows, users lack guidance for choosing between multiple offering while taking into account several objectives which are often conflicting. On the other side, the workflow tasks allocation and scheduling have been shown to be NP-complete problems. Thus, it is convenient to use heuristic rather than deterministic algorithm. The objective of this paper is to design an allocation strategy for Cloud computing platform. 
More precisely, we propose three complementary bi-criteria approaches for scheduling workflows on distributed Cloud resources, taking into account the overall execution time and the cost incurred by using a set of resources.", "title": "" }, { "docid": "22eb9b1de056d03d15c0a3774a898cfd", "text": "Massive volumes of big RDF data are growing beyond the performance capacity of conventional RDF data management systems operating on a single node. Applications using large RDF data demand efficient data partitioning solutions for supporting RDF data access on a cluster of compute nodes. In this paper we present a novel semantic hash partitioning approach and implement a Semantic HAsh Partitioning-Enabled distributed RDF data management system, called Shape. This paper makes three original contributions. First, the semantic hash partitioning approach we propose extends the simple hash partitioning method through direction-based triple groups and direction-based triple replications. The latter enhances the former by controlled data replication through intelligent utilization of data access locality, such that queries over big RDF graphs can be processed with zero or very small amount of inter-machine communication cost. Second, we generate locality-optimized query execution plans that are more efficient than popular multi-node RDF data management systems by effectively minimizing the inter-machine communication cost for query processing. Third but not the least, we provide a suite of locality-aware optimization techniques to further reduce the partition size and cut down on the inter-machine communication cost during distributed query processing. Experimental results show that our system scales well and can process big RDF datasets more efficiently than existing approaches.", "title": "" }, { "docid": "94a2b34eaa02ffeffdde5aa74e7836d2", "text": "Drought is a stochastic natural hazard that is instigated by intense and persistent shortage of precipitation. Following an initial meteorological phenomenon, subsequent impacts are realized on agriculture and hydrology. Among the natural hazards, droughts possess certain unique features; in addition to delayed effects, droughts vary by multiple dynamic dimensions including severity and duration, which in addition to causing a pervasive and subjective network of impacts makes them difficult to characterize. In order manage drought, drought characterization is essential enabling both retrospective analyses (e.g., severity versus impacts analysis) and prospective planning (e.g., risk assessment). The adaptation of a simplified method by drought indices has facilitated drought characterization for various users and entities. More than 100 drought indices have so far been proposed, some of which are operationally used to characterize drought using gridded maps at regional and national levels. These indices correspond to different types of drought, including meteorological, agricultural, and hydrological drought. By quantifying severity levels and declaring drought’s start and end, drought indices currently aid in a variety of operations including drought early warning and monitoring and contingency planning. Given their variety and ongoing development, it is crucial to provide a comprehensive overview of available drought indices that highlights their difference and examines the trend in their development. 
This paper reviews 74 operational and proposed drought indices and describes research directions.", "title": "" }, { "docid": "85cb15ae35a6368c004fde646c486491", "text": "OBJECTIVES\nThe purposes of this study were to identify age-related changes in objectively recorded sleep patterns across the human life span in healthy individuals and to clarify whether sleep latency and percentages of stage 1, stage 2, and rapid eye movement (REM) sleep significantly change with age.\n\n\nDESIGN\nReview of literature of articles published between 1960 and 2003 in peer-reviewed journals and meta-analysis.\n\n\nPARTICIPANTS\n65 studies representing 3,577 subjects aged 5 years to 102 years.\n\n\nMEASUREMENT\nThe research reports included in this meta-analysis met the following criteria: (1) included nonclinical participants aged 5 years or older; (2) included measures of sleep characteristics by \"all night\" polysomnography or actigraphy on sleep latency, sleep efficiency, total sleep time, stage 1 sleep, stage 2 sleep, slow-wave sleep, REM sleep, REM latency, or minutes awake after sleep onset; (3) included numeric presentation of the data; and (4) were published between 1960 and 2003 in peer-reviewed journals.\n\n\nRESULTS\nIn children and adolescents, total sleep time decreased with age only in studies performed on school days. Percentage of slow-wave sleep was significantly negatively correlated with age. Percentages of stage 2 and REM sleep significantly changed with age. In adults, total sleep time, sleep efficiency, percentage of slow-wave sleep, percentage of REM sleep, and REM latency all significantly decreased with age, while sleep latency, percentage of stage 1 sleep, percentage of stage 2 sleep, and wake after sleep onset significantly increased with age. However, only sleep efficiency continued to significantly decrease after 60 years of age. The magnitudes of the effect sizes noted changed depending on whether or not studied participants were screened for mental disorders, organic diseases, use of drug or alcohol, obstructive sleep apnea syndrome, or other sleep disorders.\n\n\nCONCLUSIONS\nIn adults, it appeared that sleep latency, percentages of stage 1 and stage 2 significantly increased with age while percentage of REM sleep decreased. However, effect sizes for the different sleep parameters were greatly modified by the quality of subject screening, diminishing or even masking age associations with different sleep parameters. The number of studies that examined the evolution of sleep parameters with age are scant among school-aged children, adolescents, and middle-aged adults. There are also very few studies that examined the effect of race on polysomnographic sleep parameters.", "title": "" }, { "docid": "ee73847c9dd27672c9860219c293b8dd", "text": "Sensing cost and data quality are two primary concerns in mobile crowd sensing. In this article, we propose a new crowd sensing paradigm, sparse mobile crowd sensing, which leverages the spatial and temporal correlation among the data sensed in different sub-areas to significantly reduce the required number of sensing tasks allocated, thus lowering overall sensing cost (e.g., smartphone energy consumption and incentives) while ensuring data quality. Sparse mobile crowdsensing applications intelligently select only a small portion of the target area for sensing while inferring the data of the remaining unsensed area with high accuracy. 
We discuss the fundamental research challenges in sparse mobile crowdsensing, and design a general framework with potential solutions to the challenges. To verify the effectiveness of the proposed framework, a sparse mobile crowdsensing prototype for temperature and traffic monitoring is implemented and evaluated. With several future research directions identified in sparse mobile crowdsensing, we expect that more research interests will be stimulated in this novel crowdsensing paradigm.", "title": "" }, { "docid": "709aa1bc4ace514e46f7edbb07fb03a9", "text": "Empirical scoring functions based on either molecular force fields or cheminformatics descriptors are widely used, in conjunction with molecular docking, during the early stages of drug discovery to predict potency and binding affinity of a drug-like molecule to a given target. These models require expert-level knowledge of physical chemistry and biology to be encoded as hand-tuned parameters or features rather than allowing the underlying model to select features in a data-driven procedure. Here, we develop a general 3-dimensional spatial convolution operation for learning atomic-level chemical interactions directly from atomic coordinates and demonstrate its application to structure-based bioactivity prediction. The atomic convolutional neural network is trained to predict the experimentally determined binding affinity of a protein-ligand complex by direct calculation of the energy associated with the complex, protein, and ligand given the crystal structure of the binding pose. Non-covalent interactions present in the complex that are absent in the protein-ligand sub-structures are identified and the model learns the interaction strength associated with these features. We test our model by predicting the binding free energy of a subset of protein-ligand complexes found in the PDBBind dataset and compare with state-of-the-art cheminformatics and machine learning-based approaches. We find that all methods achieve experimental accuracy (less than 1 kcal/mol mean absolute error) and that atomic convolutional networks either outperform or perform competitively with the cheminformatics based methods. Unlike all previous protein-ligand prediction systems, atomic convolutional networks are end-to-end and fully-differentiable. They represent a new data-driven, physics-based deep learning model paradigm that offers a strong foundation for future improvements in structure-based bioactivity prediction.", "title": "" } ]
scidocsrr
fedd32ed71202b1148eab4cce3c8edd7
REWIND: Recovery Write-Ahead System for In-Memory Non-Volatile Data-Structures
[ { "docid": "efcfb0aac56068374d861f24775c9cce", "text": "Hekaton is a new database engine optimized for memory resident data and OLTP workloads. Hekaton is fully integrated into SQL Server; it is not a separate system. To take advantage of Hekaton, a user simply declares a table memory optimized. Hekaton tables are fully transactional and durable and accessed using T-SQL in the same way as regular SQL Server tables. A query can reference both Hekaton tables and regular tables and a transaction can update data in both types of tables. T-SQL stored procedures that reference only Hekaton tables can be compiled into machine code for further performance improvements. The engine is designed for high con-currency. To achieve this it uses only latch-free data structures and a new optimistic, multiversion concurrency control technique. This paper gives an overview of the design of the Hekaton engine and reports some experimental results.", "title": "" }, { "docid": "08d1e3276a197639c56f406110707971", "text": "Phase change memory (PCM) is an emerging memory technology with many attractive features: it is non-volatile, byte-addressable, 2–4X denser than DRAM, and orders of magnitude better than NAND Flash in read latency, write latency, and write endurance. In the near future, PCM is expected to become a common component of the memory/storage hierarchy for a wide range of computer systems. In this paper, we describe the unique characteristics of PCM, and their potential impact on database system design. In particular, we present analytic metrics for PCM endurance, energy, and latency, and illustrate that current approaches for common database algorithms such as B-trees and Hash Joins are suboptimal for PCM. We present improved algorithms that reduce both execution time and energy on PCM while increasing write endurance.", "title": "" }, { "docid": "93177b2546e8efa1eccad4c81468f9fe", "text": "Online Transaction Processing (OLTP) databases include a suite of features - disk-resident B-trees and heap files, locking-based concurrency control, support for multi-threading - that were optimized for computer technology of the late 1970's. Advances in modern processors, memories, and networks mean that today's computers are vastly different from those of 30 years ago, such that many OLTP databases will now fit in main memory, and most OLTP transactions can be processed in milliseconds or less. Yet database architecture has changed little.\n Based on this observation, we look at some interesting variants of conventional database systems that one might build that exploit recent hardware trends, and speculate on their performance through a detailed instruction-level breakdown of the major components involved in a transaction processing database system (Shore) running a subset of TPC-C. Rather than simply profiling Shore, we progressively modified it so that after every feature removal or optimization, we had a (faster) working system that fully ran our workload. Overall, we identify overheads and optimizations that explain a total difference of about a factor of 20x in raw performance. We also show that there is no single \"high pole in the tent\" in modern (memory resident) database systems, but that substantial time is spent in logging, latching, locking, B-tree, and buffer management operations.", "title": "" } ]
[ { "docid": "4aad195a8dd20cd2531f0429ed6b0966", "text": "To solve problems associated with conventional 2D fingerprint acquisition processes including skin deformations and print smearing, we developed a noncontact 3D fingerprint scanner employing structured light illumination that, in order to be backwards compatible with existing 2D fingerprint recognition systems, requires a method of unwrapping the 3D scans into 2D equivalent prints. For the latter purpose of virtually flattening a 3D print, this paper introduces a fit-sphere unwrapping algorithm. Taking advantage of detailed 3D information, the proposed method defuses the unwrapping distortion by controlling the distances between neighboring points. Experimental results will demonstrate the high quality and recognition performance of the 3D unwrapped prints versus traditionally collected 2D prints. Furthermore, by classifying the 3D database into high- and low-quality data sets, we demonstrate that the relationship between quality and recognition performance holding for conventional 2D prints is achieved for 3D unwrapped fingerprints.", "title": "" }, { "docid": "522938687849ccc9da8310ac9d6bbf9e", "text": "Machine learning models, especially Deep Neural Networks, are vulnerable to adversarial examples—malicious inputs crafted by adding small noises to real examples, but fool the models. Adversarial examples transfer from one model to another, enabling black-box attacks to real-world applications. In this paper, we propose a strong attack algorithm named momentum iterative fast gradient sign method (MI-FGSM) to discover adversarial examples. MI-FGSM is an extension of iterative fast gradient sign method (I-FGSM) but improves the transferability significantly. Besides, we study how to attack an ensemble of models efficiently. Experiments demonstrate the effectiveness of the proposed algorithm. We hope that MI-FGSM can serve as a benchmark attack algorithm for evaluating the robustness of various models and defense methods.", "title": "" }, { "docid": "7a2d4032d79659a70ed2f8a6b75c4e71", "text": "In recent years, transition-based parsers have shown promise in terms of efficiency and accuracy. Though these parsers have been extensively explored for multiple Indian languages, there is still considerable scope for improvement by properly incorporating syntactically relevant information. In this article, we enhance transition-based parsing of Hindi and Urdu by redefining the features and feature extraction procedures that have been previously proposed in the parsing literature of Indian languages. We propose and empirically show that properly incorporating syntactically relevant information like case marking, complex predication and grammatical agreement in an arc-eager parsing model can significantly improve parsing accuracy. Our experiments show an absolute improvement of ∼2% LAS for parsing of both Hindi and Urdu over a competitive baseline which uses rich features like part-of-speech (POS) tags, chunk tags, cluster ids and lemmas. We also propose some heuristics to identify ezafe constructions in Urdu texts which show promising results in parsing these constructions.", "title": "" }, { "docid": "2f471c24ccb38e70627eba6383c003e0", "text": "We present an algorithm that enables casual 3D photography. Given a set of input photos captured with a hand-held cell phone or DSLR camera, our algorithm reconstructs a 3D photo, a central panoramic, textured, normal mapped, multi-layered geometric mesh representation. 
3D photos can be stored compactly and are optimized for being rendered from viewpoints that are near the capture viewpoints. They can be rendered using a standard rasterization pipeline to produce perspective views with motion parallax. When viewed in VR, 3D photos provide geometrically consistent views for both eyes. Our geometric representation also allows interacting with the scene using 3D geometry-aware effects, such as adding new objects to the scene and artistic lighting effects.\n Our 3D photo reconstruction algorithm starts with a standard structure from motion and multi-view stereo reconstruction of the scene. The dense stereo reconstruction is made robust to the imperfect capture conditions using a novel near envelope cost volume prior that discards erroneous near depth hypotheses. We propose a novel parallax-tolerant stitching algorithm that warps the depth maps into the central panorama and stitches two color-and-depth panoramas for the front and back scene surfaces. The two panoramas are fused into a single non-redundant, well-connected geometric mesh. We provide videos demonstrating users interactively viewing and manipulating our 3D photos.", "title": "" }, { "docid": "a51a3e1ae86e4d178efd610d15415feb", "text": "The availability of semantically annotated image and video assets constitutes a critical prerequisite for the realisation of intelligent knowledge management services pertaining to realistic user needs. Given the extend of the challenges involved in the automatic extraction of such descriptions, manually created metadata play a significant role, further strengthened by their deployment in training and evaluation tasks related to the automatic extraction of content descriptions. The different views taken by the two main approaches towards semantic content description, namely the Semantic Web and MPEG-7, as well as the traits particular to multimedia content due to the multiplicity of information levels involved, have resulted in a variety of image and video annotation tools, adopting varying description aspects. Aiming to provide a common framework of reference and furthermore to highlight open issues, especially with respect to the coverage and the interoperability of the produced metadata, in this chapter we present an overview of the state of the art in image and video annotation tools.", "title": "" }, { "docid": "5b040ceee42483eb40e1da3bf0575d70", "text": "This survey gives a comprehensive review of recent advances related to the topic of VoIP QoE (Quality of user' Experience). It starts by providing some insight into the QoE arena and outlines the principal building blocks of a VoIP application. The sources of impairments over data IP networks are identified and distinguished from signal-oriented sources of quality degradation observed over telecom networks. An overview of existing subjective and objective methodologies for the assessment of the QoE of voice conversations is then presented outlining how subjective and objective speech quality methodologies have evolved to consider time-varying QoS transport networks. A description of practical procedures for measuring VoIP QoE and illustrative results is then given. Utilization methodology of several speech quality assessment frameworks is summarized. A survey of emerging single-ended parametric-model speech quality assessment algorithms dedicated to VoIP service is then given. 
In particular, after presenting a primitive single-ended parametric-model algorithm especially conceived for the evaluation of VoIP conversations, new artificial assessors of VoIP service are detailed. In particular, we describe speech quality assessment algorithms that consider, among others, packet loss burstiness, unequal importance of speech wave, and transient loss of connectivity. The following section concentrates on the integration of VoIP service over mobile data networks. The impact of quality-affecting phenomena, such as handovers and CODEC changeover are enumerated and some primary subjective results are summarized. The survey concludes with a review of open issues relating to automatic assessment of VoIP.", "title": "" }, { "docid": "69f72b8eadadba733f240fd652ca924e", "text": "We address the problem of finding descriptive explanations of facts stored in a knowledge graph. This is important in high-risk domains such as healthcare, intelligence, etc. where users need additional information for decision making and is especially crucial for applications that rely on automatically constructed knowledge bases where machine learned systems extract facts from an input corpus and working of the extractors is opaque to the end-user. We follow an approach inspired from information retrieval and propose a simple and efficient, yet effective solution that takes into account passage level as well as document level properties to produce a ranked list of passages describing a given input relation. We test our approach using Wikidata as the knowledge base and Wikipedia as the source corpus and report results of user studies conducted to study the effectiveness of our proposed model.", "title": "" }, { "docid": "d90407926b8dc5454902875d66b2404b", "text": "In many machine learning tasks it is desirable that a model's prediction transforms in an equivariant way under transformations of its input. Convolutional neural networks (CNNs) implement translational equivariance by construction; for other transformations, however, they are compelled to learn the proper mapping. In this work, we develop Steerable Filter CNNs (SFCNNs) which achieve joint equivariance under translations and rotations by design. The proposed architecture employs steerable filters to efficiently compute orientation dependent responses for many orientations without suffering interpolation artifacts from filter rotation. We utilize group convolutions which guarantee an equivariant mapping. In addition, we generalize He's weight initialization scheme to filters which are defined as a linear combination of a system of atomic filters. Numerical experiments show a substantial enhancement of the sample complexity with a growing number of sampled filter orientations and confirm that the network generalizes learned patterns over orientations. The proposed approach achieves state-of-the-art on the rotated MNIST benchmark and on the ISBI 2012 2D EM segmentation challenge.", "title": "" }, { "docid": "5eb304f9287785a65dd159e42a51eb8c", "text": "The forensic examination following rape has two primary purposes: to provide health care and to collect evidence. Physical injuries need treatment so that they heal without adverse consequences. The pattern of injuries also has a forensic significance in that injuries are linked to the outcome of legal proceedings. This literature review investigates the variables related to genital injury prevalence and location that are reported in a series of retrospective reviews of medical records. 
The author builds the case that the prevalence and location of genital injury provide only a partial description of the nature of genital trauma associated with sexual assault and suggests a multidimensional definition of genital injury pattern. Several of the cited studies indicate that new avenues of investigation, such as refined measurement strategies for injury severity and skin color, may lead to advancements in health care, forensic, and criminal justice science.", "title": "" }, { "docid": "b68a62f6c4078e9666a8a3b9489fcf84", "text": "Reviews the criticism on the 4P Marketing Mix framework as the basis of traditional and virtual marketing planning. Argues that the customary marketing management approach, based on the popular Marketing Mix 4Ps paradigm, is inadequate in the case of virtual marketing. Identifies two main limitations of the Marketing Mix when applied in online environments namely the role of the Ps in a virtual commercial setting and the lack of any strategic elements in the model. Identifies the critical factors of the Web marketing and argues that the basis for successful E-Commerce is the full integration of the virtual activities into the company’s physical strategy, marketing plan and organisational processes. The 4S elements of the Web Marketing Mix framework offer the basis for developing and commercialising Business to Consumer online projects. The model was originally developed for educational purposes and has been tested and refined by means of three case studies.", "title": "" }, { "docid": "e831e47d09429ef0838366ffb07ed353", "text": "This paper studies the effects of boosting in the context of different classification methods for text categorization, including Decision Trees, Naive Bayes, Support Vector Machines (SVMs) and a Rocchio-style classifier. We identify the inductive biases of each classifier and explore how boosting, as an error-driven resampling mechanism, reacts to those biases. Our experiments on the Reuters-21578 benchmark show that boosting is not effective in improving the performance of the base classifiers on common categories. However, the effect of boosting for rare categories varies across classifiers: for SVMs and Decision Trees, we achieved a 13-17% performance improvement in macro-averaged F1 measure, but did not obtain substantial improvement for the other two classifiers. This interesting finding of boosting on rare categories has not been reported before.", "title": "" }, { "docid": "6bfcf02bea2e2c2ebb387c215487bb78", "text": "Healthcare is a sector where decisions usually have very high-risk and high-cost associated with them. One bad choice can cost a person's life. With diseases like Swine Flu on the rise, which have symptoms quite similar to common cold, it's very difficult for people to differentiate between medical conditions. We propose a novel method for recognition of diseases and prediction of their cure time based on the symptoms. We do this by assigning different coefficients to each symptom of a disease, and filtering the dataset with the severity score assigned to each symptom by the user. The diseases are identified based on a numerical value calculated in the fashion mentioned above. For predicting the cure time of a disease, we use reinforcement learning. Our algorithm takes into account the similarity between the condition of the current user and other users who have suffered from the same disease, and uses the similarity scores as weights in prediction of cure time. 
We also predict the current medical condition of user relative to people who have suffered from same disease.", "title": "" }, { "docid": "a8d02f362ba8210488e4dea1a1bf9b6f", "text": "BACKGROUND\nThe AMNOG regulation, introduced in 2011 in Germany, changed the game for new drugs. Now, the industry is required to submit a dossier to the GBA (the central decision body in the German sickness fund system) to show additional benefit. After granting the magnitude of the additional benefit by the GBA, the manufacturer is entitled to negotiate the reimbursement price with the GKV-SV (National Association of Statutory Health Insurance Funds). The reimbursement price is defined as a discount on the drug price at launch. As the price or discount negotiations between the manufacturers and the GKV-SV takes place behind closed doors, the factors influencing the results of the negotiation are not known.\n\n\nOBJECTIVES\nThe aim of this evaluation is to identify factors influencing the results of the AMNOG price negotiation process.\n\n\nMETHODS\nThe analysis was based on a dataset containing detailed information on all assessments until the end of 2015. A descriptive analysis was followed by an econometric analysis of various potential factors (benefit rating, size of target population, deviating from appropriate comparative therapy and incorporation of HRQoL-data).\n\n\nRESULTS\nUntil December 2015, manufacturers and the GKV-SV finalized 96 negotiations in 193 therapeutic areas, based on assessment conducted by the GBA. The GBA has granted an additional benefit to 100/193 drug innovations. Negotiated discount was significantly higher for those drugs without additional benefit (p = 0.030) and non-orphan drugs (p = 0.015). Smaller population size, no deviation from recommended appropriate comparative therapy and the incorporation of HRQoL-data were associated with a lower discount on the price at launch. However, neither a uni- nor the multivariate linear regression showed enough power to predict the final discount.\n\n\nCONCLUSIONS\nAlthough the AMNOG regulation implemented binding and strict rules for the benefit assessment itself, the outcome of the discount negotiations are still unpredictable. Obviously, negotiation tactics, the current political situation and soft factors seem to play a more influential role for the outcome of the negotiations than the five hard and known factors analyzed in this study. Further research is needed to evaluate additional factors.", "title": "" }, { "docid": "5d04dd7d174cc1b1517035d26785c70f", "text": "Folksonomies have become a powerful tool to describe, discover, search, and navigate online resources (e.g., pictures, videos, blogs) on the Social Web. Unlike taxonomies and ontologies, which impose a hierarchical categorisation on content, folksonomies directly allow end users to freely create and choose the categories (in this case, tags) that best describe a piece of information. However, the freedom afforded to users comes at a cost: as tags are defined informally, the retrieval of information becomes more challenging. Different solutions have been proposed to help users discover content in this highly dynamic setting. However, they have proved to be effective only for users who have already heavily used the system (active users) and who are interested in popular items (i.e., items tagged by many other users). 
In this thesis we explore principles to help both active users and more importantly new or inactive users (cold starters) to find content they are interested in even when this content falls into the long tail of medium-to-low popularity items (cold start items). We investigate the tagging behaviour of users on content and show how the similarities between users and tags can be used to produce better recommendations. We then analyse how users create new content on social tagging websites and show how preferences of only a small portion of active users (leaders), responsible for the vast majority of the tagged content, can be used to improve the recommender system’s scalability. We also investigate the growth of the number of users, items and tags in the system over time. We then show how this information can be used to decide whether the benefits of an update of the data structures modelling the system outweigh the corresponding cost. In this work we formalize the ideas introduced above and we describe their implementation. To demonstrate the improvements of our proposal in recommendation efficacy and efficiency, we report the results of an extensive evaluation conducted on three different social tagging websites: CiteULike, Bibsonomy and MovieLens. Our results demonstrate that our approach achieves higher accuracy than state-of-the-art systems for cold start users and for users searching for cold start items. Moreover, while accuracy of our technique is comparable to other techniques for active users, the computational cost that it requires is much smaller. In other words our approach is more scalable and thus more suitable for large and quickly growing settings.", "title": "" }, { "docid": "7aee32c6d166c8d48c3f666bfc9d381d", "text": "We propose a generalized approach to decoupling shading from visibility sampling in graphics pipelines, which we call decoupled sampling. Decoupled sampling enables stochastic supersampling of motion and defocus blur at reduced shading cost, as well as controllable or adaptive shading rates which trade off shading quality for performance. It can be thought of as a generalization of multisample antialiasing (MSAA) to support complex and dynamic mappings from visibility to shading samples, as introduced by motion and defocus blur and adaptive shading. It works by defining a many-to-one hash from visibility to shading samples, and using a buffer to memoize shading samples and exploit reuse across visibility samples. Decoupled sampling is inspired by the Reyes rendering architecture, but like traditional graphics pipelines, it shades fragments rather than micropolygon vertices, decoupling shading from the geometry sampling rate. Also unlike Reyes, decoupled sampling only shades fragments after precise computation of visibility, reducing overshading.\n We present extensions of two modern graphics pipelines to support decoupled sampling: a GPU-style sort-last fragment architecture, and a Larrabee-style sort-middle pipeline. We study the architectural implications of decoupled sampling and blur, and derive end-to-end performance estimates on real applications through an instrumented functional simulator. We demonstrate high-quality motion and defocus blur, as well as variable and adaptive shading rates.", "title": "" }, { "docid": "56a072fc480c64e6a288543cee9cd5ac", "text": "The performance of object detection has recently been significantly improved due to the powerful features learnt through convolutional neural networks (CNNs). 
Despite the remarkable success, there are still several major challenges in object detection, including object rotation, within-class diversity, and between-class similarity, which generally degenerate object detection performance. To address these issues, we build up the existing state-of-the-art object detection systems and propose a simple but effective method to train rotation-invariant and Fisher discriminative CNN models to further boost object detection performance. This is achieved by optimizing a new objective function that explicitly imposes a rotation-invariant regularizer and a Fisher discrimination regularizer on the CNN features. Specifically, the first regularizer enforces the CNN feature representations of the training samples before and after rotation to be mapped closely to each other in order to achieve rotation-invariance. The second regularizer constrains the CNN features to have small within-class scatter but large between-class separation. We implement our proposed method under four popular object detection frameworks, including region-CNN (R-CNN), Fast R-CNN, Faster R-CNN, and R-FCN. In the experiments, we comprehensively evaluate the proposed method on the PASCAL VOC 2007 and 2012 data sets and a publicly available aerial image data set. Our proposed methods outperform the existing baseline methods and achieve the state-of-the-art results.", "title": "" }, { "docid": "b15a12d0421227b01047dbe962070aae", "text": "This paper investigates the behaviour of small and medium sized enterprises (SMEs) within the heritage tourism supply chain (HTSC), in two emerging heritage regions. SMEs are conceptualised as implementers, working within the constraints of government level tourism structures and the heritage tourism supply chain. The research employs a case study approach, focusing on two emerging regions in Northern Ireland. In-depth interviews were carried out with small business owners and community associations operating within the regions. The research identifies SME dissatisfaction with the supply chain and the processes in place for the delivery of the tourism product. To overcome the perceived inadequacies of the heritage tourism supply chain SMEs engage in entrepreneurial behaviour by attempting to deliver specific products and services to meet the need of tourists. The challenge for tourism organisations is how they can integrate the entrepreneurial, innovative activities of SMEs into the heritage tourism system. © 2016 Published by Elsevier Ltd.", "title": "" }, { "docid": "3010767db829d5a47a3e1539b76848d0", "text": "The utilization of fermentation media derived from waste and by-product streams from biodiesel and confectionery industries could lead to highly efficient production of bacterial cellulose. Batch fermentations with the bacterial strain Komagataeibacter sucrofermentans DSM (Deutsche Sammlung von Mikroorganismen) 15973 were initially carried out in synthetic media using commercial sugars and crude glycerol. The highest bacterial cellulose concentration was achieved when crude glycerol (3.2 g/L) and commercial sucrose (4.9 g/L) were used. The combination of crude glycerol and sunflower meal hydrolysates as the sole fermentation media resulted in bacterial cellulose production of 13.3 g/L. Similar results (13 g/L) were obtained when flour-rich hydrolysates produced from confectionery industry waste streams were used. 
The properties of bacterial celluloses developed when different fermentation media were used showed water holding capacities of 102-138 g · water/g · dry bacterial cellulose, viscosities of 4.7-9.3 dL/g, degree of polymerization of 1889.1-2672.8, stress at break of 72.3-139.5 MPa and Young's modulus of 0.97-1.64 GPa. This study demonstrated that by-product streams from the biodiesel industry and waste streams from confectionery industries could be used as the sole sources of nutrients for the production of bacterial cellulose with similar properties as those produced with commercial sources of nutrients.", "title": "" }, { "docid": "cca9972ce9d49d1347274b446e6be00b", "text": "Miura folding is famous all over the world. It is an element of the ancient Japanese tradition of origami and reaches as far as astronautical engineering through the construction of solar panels. This article explains how to achieve the Miura folding, and describes its application to maps. The author also suggests in this context that nature may abhor the right angle, according to observation of the wing base of a dragonfly. AMS Subject Classification: 51M05, 00A09, 97A20", "title": "" } ]
scidocsrr
10dbb273fa7ff8ad5ad34fe65cbfc184
Cloudlets: bringing the cloud to the mobile user
[ { "docid": "4c9391f334ca2640e07b63b5a9764045", "text": "The mobile phone landscape changed last year with the introduction of smart phones running Android, a platform marketed by Google. Android phones are the first credible threat to the iPhone market. Not only did Google target the same consumers as iPhone, it also aimed to win the hearts and minds of mobile application developers. On the basis of market share and the number of available apps, Android is a success.", "title": "" } ]
[ { "docid": "f1b46d6bbbe16fc6d2e4eaa717fc6e78", "text": "Estimation of human body poses from video is an important problem in computer vision with many applications. Most existing methods for video pose estimation are offline in nature, where all frames in the video are used in the process to estimate the body pose in each frame. In this work, we describe a fast online video upper body pose estimation method (CDBN-MODEC) that is based on a conditional dynamic Bayesian network model, which predicts upper body pose in a frame without using information from future frames. Our method combines fast single image based pose estimation methods with the temporal correlation of poses between frames. We collect a new high frame rate upper body pose dataset that better reflects practical scenarios calling for fast online video pose estimation. When evaluated on this dataset and the VideoPose2 benchmark dataset, CDBN-MODEC achieves improvements in both performance and running efficiency over several state-of-art online video pose estimation methods.", "title": "" }, { "docid": "17faf590307caf41095530fcec1069c7", "text": "Fine-grained visual recognition typically depends on modeling subtle difference from object parts. However, these parts often exhibit dramatic visual variations such as occlusions, viewpoints, and spatial transformations, making it hard to detect. In this paper, we present a novel attention-based model to automatically, selectively and accurately focus on critical object regions with higher importance against appearance variations. Given an image, two different Convolutional Neural Networks (CNNs) are constructed, where the outputs of two CNNs are correlated through bilinear pooling to simultaneously focus on discriminative regions and extract relevant features. To capture spatial distributions among the local regions with visual attention, soft attention based spatial LongShort Term Memory units (LSTMs) are incorporated to realize spatially recurrent yet visually selective over local input patterns. All the above intuitions equip our network with the following novel model: two-stream CNN layers, bilinear pooling layer, spatial recurrent layer with location attention are jointly trained via an end-to-end fashion to serve as the part detector and feature extractor, whereby relevant features are localized and extracted attentively. We show the significance of our network against two well-known visual recognition tasks: fine-grained image classification and person re-identification.", "title": "" }, { "docid": "1997b8a0cac1b3beecfd79b3e206d7e4", "text": "Scatterplots are well established means of visualizing discrete data values with two data variables as a collection of discrete points. We aim at generalizing the concept of scatterplots to the visualization of spatially continuous input data by a continuous and dense plot. An example of a continuous input field is data defined on an n-D spatial grid with respective interpolation or reconstruction of in-between values. We propose a rigorous, accurate, and generic mathematical model of continuous scatterplots that considers an arbitrary density defined on an input field on an n-D domain and that maps this density to m-D scatterplots. Special cases are derived from this generic model and discussed in detail: scatterplots where the n-D spatial domain and the m-D data attribute domain have identical dimension, 1-D scatterplots as a way to define continuous histograms, and 2-D scatterplots of data on 3-D spatial grids. 
We show how continuous histograms are related to traditional discrete histograms and to the histograms of isosurface statistics. Based on the mathematical model of continuous scatterplots, respective visualization algorithms are derived, in particular for 2-D scatterplots of data from 3-D tetrahedral grids. For several visualization tasks, we show the applicability of continuous scatterplots. Since continuous scatterplots do not only sample data at grid points but interpolate data values within cells, a dense and complete visualization of the data set is achieved that scales well with increasing data set size. Especially for irregular grids with varying cell size, improved results are obtained when compared to conventional scatterplots. Therefore, continuous scatterplots are a suitable extension of a statistics visualization technique to be applied to typical data from scientific computation.", "title": "" }, { "docid": "1d1a9e0820bd8586af1788cd7630a00e", "text": "This letter proposes a method for the generation of temporal action proposals for the segmentation of long uncut video sequences. The presence of consecutive multiple actions in video sequences makes the temporal segmentation a challenging problem due to the unconstrained nature of actions in space and time. To address this issue, we exploit the nonaction segments present between the actual human actions in uncut videos. From the long uncut video, we compute the energy of consecutive nonoverlapping motion history images (MHIs), which provides spatiotemporal information of motion. Our proposals from MHIs (PMHI) are based on clustering the MHIs into actions and nonaction segments by detecting minima from the energy of MHIs. PMHI efficiently segments the long uncut videos into a small number of nonoverlapping temporal action proposals. The strength of PMHI is that it is unsupervised, which alleviates the requirement for any training data. Our temporal action proposal method outperforms the existing proposal methods on the Multi-view Human Action video (MuHAVi)-uncut and Computer Vision and Pattern recognition (CVPR) 2012 Change Detection datasets with an average recall rate of 86.1% and 86.0%, respectively.", "title": "" }, { "docid": "7084fd27fcb249eff69e1b21f32abd0a", "text": "I review briefly different aspects of the MOND paradigm, with emphasis on phenomenology, epitomized here by many MOND laws of galactic motion–analogous to Kepler's laws of planetary motion. I then comment on the possible roots of MOND in cosmology, possibly the deepest and most far reaching aspect of MOND. This is followed by a succinct account of existing underlying theories. I also reflect on the implications of MOND's successes for the dark matter (DM) paradigm: MOND predictions imply that baryons alone accurately determine the full field of each and every individual galactic object. This conflicts with the expectations in the DM paradigm because of the haphazard formation and evolution of galactic objects and the very different influences that baryons and DM are subject to during the evolution, as evidenced, e.g., by the very small baryon-to-DM fraction in galaxies (compared with the cosmic value). 
All this should disabuse DM advocates of the thought that DM will someday be able to reproduce MOND: it is inconceivable that the modicum of baryons left over in galaxies can be made to determine everything if a much heavier DM component is present.", "title": "" }, { "docid": "1ec731b5c586596705053309729d8427", "text": "In this work the design and application of a fuzzy logic controller to DC-servomotor is investigated. The proposed strategy is intended to improve the performance of the original control system by use of a fuzzy logic controller (FLC) as the motor load changes. Computer simulation demonstrates that FLC is effective in position control of a DC-servomotor comparing with conventional one.", "title": "" }, { "docid": "6c1b18d0873266f99a210910354b836d", "text": "Ethereum has emerged as a dynamic platform for exchanging cryptocurrency tokens. While token crowdsales cannot simultaneously guarantee buyers both certainty of valuation and certainty of participation, we show that if each token buyer specifies a desired purchase quantity at each valuation then everyone can successfully participate. Our implementation introduces smart contract techniques which recruit outside participants in order to circumvent computational complexity barriers. 1 A crowdsale dilemma This year has witnessed the remarkable rise of token crowdsales. Token incentives enable new community structures by employing novel combinations of currency rewards, software use rights, protocol governance, and traditional equity. Excluding Bitcoin, the total market cap of the token market surged over 60 billion USD in June 20171. Most tokens originate on the Ethereum network, and, at times, the network has struggled to keep up with purchase demands. On several occasions, single crowdsales have consumed the network’s entire bandwidth for consecutive hours. Token distributions can take many forms. Bitcoin, for example, continues to distribute tokens through a competitive, computational process known as mining. In this exposition, we shall concern ourselves exclusively https://coinmarketcap.com/charts/", "title": "" }, { "docid": "0d1e889a69ea17e43c5f65bac38bba79", "text": "In this paper we utilize the notion of affordances to model relations between task, object and a grasp to address the problem of task-specific robotic grasping. We use convolutional neural networks for encoding and detecting object affordances, class and orientation, which we utilize to formulate grasp constraints. Our approach applies to previously unseen objects from a fixed set of classes and facilitates reasoning about which tasks an object affords and how to grasp it for that task. We evaluate affordance detection on full-view and partial-view synthetic data and compute task-specific grasps for objects that belong to ten different classes and afford five different tasks. 
We demonstrate the feasibility of our approach by employing an optimization-based grasp planner to compute task-specific grasps.", "title": "" }, { "docid": "481018ae479f8a6b8669972156d234d6", "text": "AIM\nThis paper is a report of a discussion of the arguments surrounding the role of the initial literature review in grounded theory.\n\n\nBACKGROUND\nResearchers new to grounded theory may find themselves confused about the literature review, something we ourselves experienced, pointing to the need for clarity about use of the literature in grounded theory to help guide others about to embark on similar research journeys.\n\n\nDISCUSSION\nThe arguments for and against the use of a substantial topic-related initial literature review in a grounded theory study are discussed, giving examples from our own studies. The use of theoretically sampled literature and the necessity for reflexivity are also discussed. Reflexivity is viewed as the explicit quest to limit researcher effects on the data by awareness of self, something seen as integral both to the process of data collection and the constant comparison method essential to grounded theory.\n\n\nCONCLUSION\nA researcher who is close to the field may already be theoretically sensitized and familiar with the literature on the study topic. Use of literature or any other preknowledge should not prevent a grounded theory arising from the inductive-deductive interplay which is at the heart of this method. Reflexivity is needed to prevent prior knowledge distorting the researcher's perceptions of the data.", "title": "" }, { "docid": "c635f2ad65cd74c137910661aeb0ab3d", "text": "Scholarly research on the topic of leadership has witnessed a dramatic increase over the last decade, resulting in the development of diverse leadership theories. To take stock of established and developing theories since the beginning of the new millennium, we conducted an extensive qualitative review of leadership theory across 10 top-tier academic publishing outlets that included The Leadership Quarterly, Administrative Science Quarterly, American Psychologist, Journal of Management, Academy of Management Journal, Academy of Management Review, Journal of Applied Psychology, Organizational Behavior and Human Decision Processes, Organizational Science, and Personnel Psychology. We then combined two existing frameworks (Gardner, Lowe, Moss, Mahoney, & Cogliser, 2010; Lord & Dinh, 2012) to provide a processoriented framework that emphasizes both forms of emergence and levels of analysis as a means to integrate diverse leadership theories. We then describe the implications of the findings for future leadership research and theory.", "title": "" }, { "docid": "af459f8f89bd1f27595dd3c9be4baf13", "text": "The recent successes in applying deep learning techniques to solve standard computer vision problems has aspired researchers to propose new computer vision problems in different domains. As previously established in the field, training data itself plays a significant role in the machine learning process, especially deep learning approaches which are data hungry. In order to solve each new problem and get a decent performance, a large amount of data needs to be captured which may in many cases pose logistical difficulties. Therefore, the ability to generate de novo data or expand an existing dataset, however small, in order to satisfy data requirement of current networks may be invaluable. 
Herein, we introduce a novel way to partition an action video clip into action, subject and context. Each part is manipulated separately and reassembled with our proposed video generation technique. Furthermore, our novel human skeleton trajectory generation along with our proposed video generation technique, enables us to generate unlimited action recognition training data. These techniques enable us to generate video action clips from a small set without costly and time-consuming data acquisition. Lastly, we prove through an extensive set of experiments on two small human action recognition datasets, that this new data generation technique can improve the performance of current action recognition neural nets.", "title": "" }, { "docid": "04fd45380cc99b4b650318c0df7627a6", "text": "Research and development of recommender systems has been a vibrant field for over a decade, having produced proven methods for “preference-aware” computing. Recommenders use community opinion histories to help users identify interesting items from a considerably large search space (e.g., inventory from Amazon [7], movies from Netflix [9]). Personalization, recommendation, and the “human side\" of data-centric applications are even becoming important topics in the data management community [3]. A popular recommendation method used heavily in practice is collaborative filtering, consisting of two phases: (1) An offline model-building phase that uses community opinions of items (e.g., movie ratings, “Diggs” [6]) to build a model storing meaningful correlations between users and items. (2) An on-demand recommendation phase that uses the model to produce a set of recommended items when requested from a user or application. To be effective, recommender systems must evolve with their content. In current update-intensive systems (e.g., social networks, online news sites), the restriction that a model be generated offline is a significant drawback, as it hinders the system’s ability to evolve quickly. For instance, new users enter the system changing the collective opinions over items, or the system adds new items quickly (e.g., news posts, Facebook postings), which widens the recommendation pool. These updates affect the recommender model, that in turn affect the system’s recommendation quality in terms of providing accurate answers to recommender queries. In such systems, a completely real-time recommendation process is paramount. Unfortunately, most traditional state-of-the-art recommenders are “hand-built\", implemented as custom software not built for a real-time recommendation process [1]. Further, for some", "title": "" }, { "docid": "e065cabd0cc5e95493a3ede4e3d1eeee", "text": "In this paper we introduced an alternative view of text mining and we review several alternative views proposed by different authors. We propose a classification of text mining techniques into two main groups: techniques based on inductive inference, that we call text data mining (TDM, comprising most of the existing proposals in the literature), and techniques based on deductive or abductive inference, that we call text knowledge mining (TKM). To our knowledge, the TKM view of text mining is new though, as we shall show, several existing techniques could be considered in this group. We discuss about the possibilities and challenges of TKM techniques. 
We also discuss about the application of existing theories in possible future research in this field.", "title": "" }, { "docid": "10b851c1d0113549764b80434c4bac5e", "text": "In this paper, a simplified thermal model for variable speed self cooled induction motors is proposed and experimentally verified. The thermal model is based on simple equations that are compared with more complex equations well known in literature. The proposed thermal model allows to predict the over temperature in the main parts of the motor, starting from the measured or the estimated losses in the machine. In the paper the description of the thermal model set up is reported in detail. Finally, the model is used to define the correct power derating for a variable speed PWM induction motor drive.", "title": "" }, { "docid": "94c29fd22dad51451815c1033aa4f53c", "text": "ing Automatic abstracting and text summarization are now used synonymously that aim to generate abstracts or summaries of texts. This area of NLP research is becoming more common in the web and digital library environment. In simple abstracting or summarization systems, parts of text – sentences or paragraphs – are selected automatically based on some linguistic and/or statistical criteria to produce the abstract or summary. More sophisticated systems may merge two or more", "title": "" }, { "docid": "eb99e0c5e9682cf2665a2e495ca3502a", "text": "Recently introduced 3D vertical flash memory is expected to be a disruptive technology since it overcomes scaling challenges of conventional 2D planar flash memory by stacking up cells in the vertical direction. However, 3D vertical flash memory suffers from a new problem known as fast detrapping, which is a rapid charge loss problem. In this paper, we propose a scheme to compensate the effect of fast detrapping by intentional inter-cell interference (ICI). In order to properly control the intentional ICI, our scheme relies on a coding technique that incorporates the side information of fast detrapping during the encoding stage. This technique is closely connected to the well-known problem of coding in a memory with defective cells. Numerical results show that the proposed scheme can effectively address the problem of fast detrapping.", "title": "" }, { "docid": "98a69bf140c17ec1b86ebb15233666c1", "text": "In this paper we propose a novel two-step procedure to recognize textual entailment. Firstly, we build a joint Restricted Boltzmann Machines (RBM) layer to learn the joint representation of the text-hypothesis pairs. Then the reconstruction error is calculated by comparing the original representation with reconstructed representation derived from the joint layer for each pair to recognize textual entailment. The joint RBM training data is automatically generated from a large news corpus. Experiment results show the contribution of the idea to the performance on textual entailment.", "title": "" }, { "docid": "190bc8482b4bdc8662be25af68adb2c0", "text": "The goal of all vitreous surgery is to perform the desired intraoperative intervention with minimum collateral damage in the most efficient way possible. An understanding of the principles of fluidics is of importance to all vitreoretinal surgeons to achieve these aims. Advances in technology mean that surgeons are being given increasing choice in the settings they are able to select for surgery. Manufacturers are marketing systems with aspiration driven by peristaltic, Venturi and hybrid pumps. 
Increasingly fast cut rates are offered with optimised, and in some cases surgeon-controlled, duty cycles. Function-specific cutters are becoming available and narrow-gauge instrumentation is evolving to meet surgeon demands with higher achievable flow rates. In parallel with the developments in outflow technology, infusion systems are advancing with lowering flow resistance and intraocular pressure control to improve fluidic stability during surgery. This review discusses the important aspects of fluidic technology so that surgeons can select the optimum machine parameters to carry out safe and effective surgery.", "title": "" }, { "docid": "e35194cb3fdd3edee6eac35c45b2da83", "text": "The availability of high-resolution Digital Surface Models of coastal environments is of increasing interest for scientists involved in the study of the coastal system processes. Among the range of terrestrial and aerial methods available to produce such a dataset, this study tests the utility of the Structure from Motion (SfM) approach to low-altitude aerial imageries collected by Unmanned Aerial Vehicle (UAV). The SfM image-based approach was selected whilst searching for a rapid, inexpensive, and highly automated method, able to produce 3D information from unstructured aerial images. In particular, it was used to generate a dense point cloud and successively a high-resolution Digital Surface Models (DSM) of a beach dune system in Marina di Ravenna (Italy). The quality of the elevation dataset produced by the UAV-SfM was initially evaluated by comparison with point cloud generated by a Terrestrial Laser Scanning (TLS) surveys. Such a comparison served to highlight an average difference in the vertical values of 0.05 m (RMS = 0.19 m). However, although the points cloud comparison is the best approach to investigate the absolute or relative correspondence between UAV and TLS OPEN ACCESS Remote Sens. 2013, 5 6881 methods, the assessment of geomorphic features is usually based on multi-temporal surfaces analysis, where an interpolation process is required. DSMs were therefore generated from UAV and TLS points clouds and vertical absolute accuracies assessed by comparison with a Global Navigation Satellite System (GNSS) survey. The vertical comparison of UAV and TLS DSMs with respect to GNSS measurements pointed out an average distance at cm-level (RMS = 0.011 m). The successive point by point direct comparison between UAV and TLS elevations show a very small average distance, 0.015 m, with RMS = 0.220 m. Larger values are encountered in areas where sudden changes in topography are present. The UAV-based approach was demonstrated to be a straightforward one and accuracy of the vertical dataset was comparable with results obtained by TLS technology.", "title": "" }, { "docid": "76034cd981a64059f749338a2107e446", "text": "We examine how financial assurance structures and the clearly defined financial transaction at the core of monetized network hospitality reduce uncertainty for Airbnb hosts and guests. We apply the principles of social exchange and intrinsic and extrinsic motivation to a qualitative study of Airbnb hosts to 1) describe activities that are facilitated by the peer-to-peer exchange platform and 2) how the assurance of the initial financial exchange facilitates additional social exchanges between hosts and guests. 
The study illustrates that the financial benefits of hosting do not necessarily crowd out intrinsic motivations for hosting but instead strengthen them and even act as a gateway to further social exchange and interpersonal interaction. We describe the assurance structures in networked peer-to-peer exchange, and explain how such assurances can reconcile contention between extrinsic and intrinsic motivations. We conclude with implications for design and future research.", "title": "" } ]
scidocsrr
7cbfba347195d542f66bcd2bd76c4667
Global Contrast Based Salient Region Detection
[ { "docid": "37a8fe29046ec94d54e62f202a961129", "text": "Detection of salient image regions is useful for applications like image segmentation, adaptive compression, and region-based image retrieval. In this paper we present a novel method to determine salient regions in images using low-level features of luminance and color. The method is fast, easy to implement and generates high quality saliency maps of the same size and resolution as the input image. We demonstrate the use of the algorithm in the segmentation of semantically meaningful whole objects from digital images.", "title": "" }, { "docid": "33e9975e16ece06500b89aced4c903eb", "text": "We present a novel image resizing method which attempts to ensure that important local regions undergo a geometric similarity transformation, and at the same time, to preserve image edge structure. To accomplish this, we define handles to describe both local regions and image edges, and assign a weight for each handle based on an importance map for the source image. Inspired by conformal energy, which is widely used in geometry processing, we construct a novel quadratic distortion energy to measure the shape distortion for each handle. The resizing result is obtained by minimizing the weighted sum of the quadratic distortion energies of all handles. Compared to previous methods, our method allows distortion to be diffused better in all directions, and important image edges are well-preserved. The method is efficient, and offers a closed form solution.", "title": "" }, { "docid": "c0dbb410ebd6c84bd97b5f5e767186b3", "text": "A new hypothesis about the role of focused attention is proposed. The feature-integration theory of attention suggests that attention must be directed serially to each stimulus in a display whenever conjunctions of more than one separable feature are needed to characterize or distinguish the possible objects presented. A number of predictions were tested in a variety of paradigms including visual search, texture segregation, identification and localization, and using both separable dimensions (shape and color) and local elements or parts of figures (lines, curves, etc. in letters) as the features to be integrated into complex wholes. The results were in general consistent with the hypothesis. They offer a new set of criteria for distinguishing separable from integral features and a new rationale for predicting which tasks will show attention limits and which will not.", "title": "" } ]
[ { "docid": "3653e29e71d70965317eb4c450bc28da", "text": "This paper comprises an overview of different aspects for wire tension control devices and algorithms according to the state of industrial use and state of research. Based on a typical winding task of an orthocyclic winding scheme, possible new principles for an alternative piezo-electric actuator and an electromechanical tension control will be derived and presented.", "title": "" }, { "docid": "3f0286475580e4c5663023593ef12aff", "text": "ABSRACT Sliding mode control has received much attention due to its major advantages such as guaranteed stability, robustness against parameter variations, fast dynamic response and simplicity in the implementation and therefore has been widely applied to control nonlinear systems. This paper discus the sliding mode control technic for controlling hydropower system and generalized a model which can be used to simulate a hydro power plant using MATLAB/SIMULINK. This system consist hydro turbine connected to a generator coaxially, which is connected to grid. Simulation of the system can be done using various simulation tools, but SIMULINK is preferred because of simplicity and useful basic function blocks. The Simulink program is used to obtain the systematic dynamic model of the system and testing the operation with different PID controllers, SMC controller with additional integral action.", "title": "" }, { "docid": "f59215c348efef1114a5062d5d7b268d", "text": "Detection and accommodation of outliers are crucial in a number of contexts, in which collected data from a given environment is subsequently used for assessing its running conditions or for data-based decision-making. Although a significant number of studies on this subject can be found in literature, a comprehensive empirical assessment in the context of local online detection in wireless sensor networks is still missing. The present work aims at filling this gap by offering an empirical evaluation of two state-of-the-art online detection methods. The first methodology is based on a Least Squares-Support Vector Machine technique, along with a sliding window-based learning algorithm, while the second approach relies on Principal Component Analysis and on the robust orthonormal projection approximation subspace tracking with rank-1 modification. The performance and implementability of these methods are evaluated using a generated non-stationary time-series and a test-bed consisting of a benchmark three-tank system and a wireless sensor network, where deployed algorithms are implemented under a multi-agent framework.", "title": "" }, { "docid": "b0bb2be3c9f85d434125d374a85f6360", "text": "This paper presents a part of a generic approach for transposition models of any data source to the ontology and vice versa using Model Driven Architecture (MDA). We present the general process of our approach by focusing on the transposition from data source to ontology: Data source is based on a unique and single model also for ontology, the cooperation of software engineering and knowledge engineering is then reduced to the determination of transition between software model and ontology. Our approach is based on a transition managed by meta modeling: From any data source, we extract a model which will be transposed to an ontology model by a succession of treatment steps. 
Data source model is validating against a metamodel then it will undergo transformations to generate the model of ontology which is validating against another metamodel.", "title": "" }, { "docid": "f25bcd3e62cd4e97d090659934ce26a6", "text": "Dementia is increasingly being recognized in cases of Parkinson's disease (PD); such cases are termed PD dementia (PDD). The spread of fibrillar α-synuclein (α-syn) pathology from the brainstem to limbic and neocortical structures seems to be the strongest neuropathological correlate of emerging dementia in PD. In addition, up to 50% of patients with PDD also develop sufficient numbers of amyloid-β plaques and tau-containing neurofibrillary tangles for a secondary diagnosis of Alzheimer's disease, and these pathologies may act synergistically with α-syn pathology to confer a worse prognosis. An understanding of the relationships between these three distinct pathologies and their resultant clinical phenotypes is crucial for the development of effective disease-modifying treatments for PD and PDD.", "title": "" }, { "docid": "4560ace656a6bf362a657e70faca6b9d", "text": "Many approaches have been introduced to enable Latent Dirichlet Allocation (LDA) models to be updated in an online manner. This includes inferring new documents into the model, passing parameter priors to the inference algorithm or a mixture of both, leading to more complicated and computationally expensive models. We present a method to match and compare the resulting LDA topics of different models with light weight easy to use similarity measures. We address the on-line problem by keeping the model inference simple and matching topics solely by their high probability word lists.", "title": "" }, { "docid": "def650b2d565f88a6404997e9e93d34f", "text": "Quality uncertainty and high search costs for identifying relevant information from an ocean of information may prevent customers from making purchases. Recognizing potential negative impacts of this search cost for quality information and relevant information, firms began to invest in creating a virtual community that enables consumers to share their opinions and experiences to reduce quality uncertainty, and in developing recommendation systems that help customers identify goods in which they might have an interest. However, not much is known regarding the effectiveness of these efforts. In this paper, we empirically investigate the impacts of recommendations and consumer feedbacks on sales based on data gathered from Amazon.com. Our results indicate that more recommendations indeed improve sales at Amazon.com; however, consumer ratings are not found to be related to sales. On the other hand, number of consumer reviews is positively associated with sales. We also find that recommendations work better for less-popular books than for more-popular books. This is consistent with the search cost argument: a consumer’s search cost for less-popular books may be higher, and thus they may rely more on recommendations to locate a product of interest.", "title": "" }, { "docid": "af78c57378a472c8f7be4eb354feb442", "text": "Mutations in the human sonic hedgehog gene ( SHH) are the most frequent cause of autosomal dominant inherited holoprosencephaly (HPE), a complex brain malformation resulting from incomplete cleavage of the developing forebrain into two separate hemispheres and ventricles. Here we report the clinical and molecular findings in five unrelated patients with HPE and their relatives with an identified SHH mutation. 
Three new and one previously reported SHH mutations were identified, a fifth proband was found to carry a reciprocal subtelomeric rearrangement involving the SHH locus in 7q36. An extremely wide intrafamilial phenotypic variability was observed, ranging from the classical phenotype with alobar HPE accompanied by typical severe craniofacial abnormalities to very mild clinical signs of choanal stenosis or solitary median maxillary central incisor (SMMCI) only. Two families were initially ascertained because of microcephaly in combination with developmental delay and/or mental retardation and SMMCI, the latter being a frequent finding in patients with an identified SHH mutation. In other affected family members a delay in speech acquisition and learning disabilities were the leading clinical signs. Conclusion: mutational analysis of the sonic hedgehog gene should not only be considered in patients presenting with the classical holoprosencephaly phenotype but also in those with two or more clinical signs of the wide phenotypic spectrum of associated abnormalities, especially in combination with a positive family history.", "title": "" }, { "docid": "e3978d849b1449c40299841bfd70ea69", "text": "New generations of network intrusion detection systems create the need for advanced pattern-matching engines. This paper presents a novel scheme for pattern-matching, called BFPM, that exploits a hardware-based programmable statemachine technology to achieve deterministic processing rates that are independent of input and pattern characteristics on the order of 10 Gb/s for FPGA and at least 20 Gb/s for ASIC implementations. BFPM supports dynamic updates and is one of the most storage-efficient schemes in the industry, supporting two thousand patterns extracted from Snort with a total of 32 K characters in only 128 KB of memory.", "title": "" }, { "docid": "1efdb6ff65c1aa8f8ecb95b4d466335f", "text": "This paper provides a linguistic and pragmatic analysis of the phenomenon of irony in order to represent how Twitter’s users exploit irony devices within their communication strategies for generating textual contents. We aim to measure the impact of a wide-range of pragmatic phenomena in the interpretation of irony, and to investigate how these phenomena interact with contexts local to the tweet. Informed by linguistic theories, we propose for the first time a multi-layered annotation schema for irony and its application to a corpus of French, English and Italian tweets.We detail each layer, explore their interactions, and discuss our results according to a qualitative and quantitative perspective.", "title": "" }, { "docid": "e8bbbc1864090b0246735868faa0e11f", "text": "A pre-trained deep convolutional neural network (DCNN) is the feed-forward computation perspective which is widely used for the embedded vision systems. In the DCNN, the 2D convolutional operation occupies more than 90% of the computation time. Since the 2D convolutional operation performs massive multiply-accumulation (MAC) operations, conventional realizations could not implement a fully parallel DCNN. The RNS decomposes an integer into a tuple of L integers by residues of moduli set. Since no pair of modulus have a common factor with any other, the conventional RNS decomposes the MAC unit into circuits with different sizes. It means that the RNS could not utilize resources of an FPGA with uniform size. In this paper, we propose the nested RNS (NRNS), which recursively decompose the RNS. 
It can decompose the MAC unit into circuits with small sizes. In the DCNN using the NRNS, a 48-bit MAC unit is decomposed into 4-bit ones realized by look-up tables of the FPGA. In the system, we also use binary to NRNS converters and NRNS to binary converters. The binary to NRNS converter is realized by on-chip BRAMs, while the NRNS to binary one is realized by DSP blocks and BRAMs. Thus, a balanced usage of FPGA resources leads to a high clock frequency with less hardware. The ImageNet DCNN using the NRNS is implemented on a Xilinx Virtex VC707 evaluation board. As for the performance per area GOPS (Giga operations per second) per a slice, the proposed one is 5.86 times better than the existing best realization.", "title": "" }, { "docid": "1b2144bca7146dcb8f99990159be47f6", "text": "We propose an object detection system that depends on position-sensitive grid feature maps. State-of-the-art object detection networks rely on convolutional neural networks pre-trained on a large auxiliary data set (e.g., ILSVRC 2012) designed for an image-level classification task. The image-level classification task favors translation invariance, while the object detection task needs localization representations that are translation variant to an extent. To address this dilemma, we construct position-sensitive convolutional layers, called grid convolutional layers that activate the object’s specific locations in the feature maps in the form of grids. With end-to-end training, the region of interesting grid pooling layer shepherds the last set of convolutional layers to learn specialized grid feature maps. Experiments on the PASCAL VOC 2007 data set show that our method outperforms the strong baselines faster region-based convolutional neural network counterpart and region-based fully convolutional networks by a large margin. Our method applied to ResNet-50 improves the mean average precision from 74.8%/74.2% to 79.4% without any other tricks. In addition, our approach achieves similar results on different networks (ResNet-101) and data sets (PASCAL VOC 2012 and MS COCO).", "title": "" }, { "docid": "2c69eb4be7bc2bed32cfbbbe3bc41a5d", "text": "The Sapienza University Networking framework for underwater Simulation Emulation and real-life Testing (SUNSET) is a toolkit for the implementation and testing of protocols for underwater sensor networks. SUNSET enables a radical new way of performing experimental research on underwater communications. It allows protocol designers and implementors to easily realize their solutions and to evaluate their performance through simulation, in-lab emulation and trials at sea in a direct and transparent way, and independently of specific underwater hardware platforms. SUNSET provides a complete toolchain of predeployment and deployment time tools able to identify risks, malfunctioning and under-performing solutions before incurring the expense of going to sea. Novel underwater systems can therefore be rapidly and easily investigated. Heterogeneous underwater communication technologies from different vendors can be used, allowing the evaluation of the impact of different combinations of hardware and software on the overall system performance. Using SUNSET, underwater devices can be reconfigured and controlled remotely in real time, using acoustic links. This allows the performance investigation of underwater systems under different settings and configurations and significantly reduces the cost and complexity of at-sea trials. 
This paper describes the architectural concept of SUNSET and presents some exemplary results of its use in the field. The SUNSET framework has been extensively validated during more than fifteen at-sea experimental campaigns in the past four years. Several of these have been conducted jointly with the NATO STO Centre for Maritime Research and Experimentation (CMRE) under a collaboration between the University of Rome and CMRE.", "title": "" }, { "docid": "8d5d2f266181d456d4f71df26075a650", "text": "Integrated architectures in the automotive and avionic domain promise improved resource utilization and enable a better tactic coordination of application subsystems compared to federated systems. In order to support safety-critical application subsystems, an integrated architecture needs to support fault-tolerant strategies that enable the continued operation of the system in the presence of failures. The basis for the implementation and validation of fault-tolerant strategies is a fault hypothesis that identifies the fault containment regions, specifies the failure modes and provides realistic failure rate assumptions. This paper describes a fault hypothesis for integrated architectures, which takes into account the collocation of multiple software components on shared node computers. We argue in favor of a differentiation of fault containment regions for hardware and software faults. In addition, the fault hypothesis describes the assumptions concerning the respective frequencies of transient and permanent failures in consideration of recent semiconductor trends", "title": "" }, { "docid": "2f01e912a6fbafca1e791ef18fb51ceb", "text": "Visualizing the result of users' opinion mining on twitter using social network graph can play a crucial role in decision-making. Available data visualizing tools, such as NodeXL, use a specific file format as an input to construct and visualize the social network graph. One of the main components of the input file is the sentimental score of the users' opinion. This motivates us to develop a free and open source system that can take the opinion of users in raw text format and produce easy-to-interpret visualization of opinion mining and sentiment analysis result on a social network. We use a public machine learning library called LingPipe Library to classify the sentiments of users' opinion into positive, negative and neutral classes. Our proposed system can be used to analyze and visualize users' opinion on the network level to determine sub-social structures (sub-groups). Moreover, the proposed system can also identify influential people in the social network by using node level metrics such as betweenness centrality. In addition to the network level and node level analysis, our proposed method also provides an efficient filtering mechanism by either time and date, or the sentiment score. We tested our proposed system using user opinions about different Samsung products and related issues that are collected from five official twitter accounts of Samsung Company. The test results show that our proposed system will be helpful to analyze and visualize the opinion of users at both network level and node level.", "title": "" }, { "docid": "04165f38c90c84e17d87bb4ac7f43f37", "text": "Globalisation is becoming a force that is revolutionising international trade, particularly that of animals and animal products. 
There is increasing interest in animal welfare worldwide, and as part of its 2001-2005 Strategic Plan the World Organisation for Animal Health (OIE) identified the development of international standards on animal welfare as a priority. The OIE's scientific approach to standard-setting provides the foundation for the development, and acceptance by all OIE Member Countries, of these animal welfare guidelines. The paper discusses how these guidelines on animal welfare can be implemented, both within the provisions of World Trade Organization (WTO) agreements and within the framework of voluntary codes of conduct. Even if animal welfare guidelines are not covered by any WTO agreements in the future, bi- and multilateral agreements, voluntary corporate codes, and transparent labelling of products should result in a progressive acceptance of OIE guidelines. Ultimately, consumer demands and demonstrable gains in animal production will result in an incremental evolution in animal welfare consciousness and adherence to international standards.", "title": "" }, { "docid": "02dc406798fa3207c0860682f76a70c6", "text": "This paper proposes automatic machine learning approach (AutoML) of deep neural networks (DNNs) using multi-objective evolutionary algorithms (MOEAs) for the accuracy and run-time speed simultaneously. Traditional methods for optimizing DNNs have used Bayesian reasoning or reinforcement learning to improve the performance. Recently, evolutionary approaches for high accuracy has been adopted using a lot of GPUs. However, real-world applications require rapid inference to deploy on embedded devices while maintaining high accuracy. To consider both accuracy and speed at the same time, we propose Neuro-Evolution with Multiobjective Optimization (NEMO) algorithm. Experimental results show that proposed NEMO finds faster and more accurate DNNs than hand-crafted DNN architectures. The proposed technique is verified for the image classification problems such as MNIST, CIFAR-10 and human status recognition.", "title": "" }, { "docid": "f9d8954e2061b5466e655552a5e13a24", "text": "Sports tracking applications are increasingly available on the market, and research has recently picked up this topic. Tracking a user's running track and providing feedback on the performance are among the key features of such applications. However, little attention has been paid to the accuracy of the applications' localization measurements. In evaluating the nine currently most popular running applications, we found tremendous differences in the GPS measurements. Besides this finding, our study contributes to the scientific knowledge base by qualifying the findings of previous studies concerning accuracy with smartphones' GPS components.", "title": "" }, { "docid": "2a10978fdd01c7c19d957fb4224016bf", "text": "To my parents and my girlfriend. Abstract Techniques of Artificial Intelligence and Human-Computer Interaction have empowered computer music systems with the ability to perform with humans via a wide spectrum of applications. However, musical interaction between humans and machines is still far less musical than the interaction between humans since most systems lack any representation or capability of musical expression. This thesis contributes various techniques, especially machine-learning algorithms, to create artificial musicians that perform expressively and collaboratively with humans. 
The current system focuses on three aspects of expression in human-computer collaborative performance: 1) expressive timing and dynamics, 2) basic improvisation techniques, and 3) facial and body gestures. Timing and dynamics are the two most fundamental aspects of musical expression and also the main focus of this thesis. We model the expression of different musicians as co-evolving time series. Based on this representation, we develop a set of algorithms, including a sophisticated spectral learning method, to discover regularities of expressive musical interaction from rehearsals. Given a learned model, an artificial performer generates its own musical expression by interacting with a human performer given a pre-defined score. The results show that, with a small number of rehearsals, we can successfully apply machine learning to generate more expressive and human-like collaborative performance than the baseline automatic accompaniment algorithm. This is the first application of spectral learning in the field of music. Besides expressive timing and dynamics, we consider some basic improvisation techniques where musicians have the freedom to interpret pitches and rhythms. We developed a model that trains a different set of parameters for each individual measure and focus on the prediction of the number of chords and the number of notes per chord. Given the model prediction, an improvised score is decoded using nearest-neighbor search, which selects the training example whose parameters are closest to the estimation. Our result shows that our model generates more musical, interactive, and natural collaborative improvisation than a reasonable baseline based on mean estimation. Although not conventionally considered to be \" music, \" body and facial movements are also important aspects of musical expression. We study body and facial expressions using a humanoid saxophonist robot. We contribute the first algorithm to enable a robot to perform an accompaniment for a musician and react to human performance with gestural and facial expression. The current system uses rule-based performance-motion mapping and separates robot motions into three groups: finger motions, …", "title": "" }, { "docid": "58164d01b603ffac40eaf61063dd415a", "text": "T he ability to construct a musical theory from examples presents a great intellectual challenge that, if successfully met, could foster a range of new creative applications. Inspired by this challenge, we sought to apply machine-learning methods to the problem of musical style modeling. Our work so far has produced examples of musical generation and applications to a computer-aided composition system. Machine learning consists of deriving a mathematical model, such as a set of stochastic rules, from a set of musical examples. The act of musical composition involves a highly structured mental process. Although it is complex and difficult to formalize, it is clearly far from being a random activity. Our research seeks to capture some of the regularity apparent in the composition process by using statistical and information-theoretic tools to analyze musical pieces. The resulting models can be used for inference and prediction and, to a certain extent, to generate new works that imitate the style of the great masters.", "title": "" } ]
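The closing passage above describes deriving a mathematical model, such as a set of stochastic rules, from a set of musical examples. The Python sketch below is only a generic illustration of that idea using a first-order Markov chain over pitch symbols; the toy corpus and note names are invented for the example and are not taken from the cited work.

```python
import random
from collections import defaultdict, Counter

def train_markov(sequences):
    """Count first-order transitions between symbols (e.g. pitches)."""
    transitions = defaultdict(Counter)
    for seq in sequences:
        for prev, nxt in zip(seq, seq[1:]):
            transitions[prev][nxt] += 1
    # Normalize counts into transition probabilities.
    model = {}
    for prev, counter in transitions.items():
        total = sum(counter.values())
        model[prev] = {sym: c / total for sym, c in counter.items()}
    return model

def generate(model, start, length, seed=0):
    """Sample a new sequence from the learned transition probabilities."""
    rng = random.Random(seed)
    seq = [start]
    for _ in range(length - 1):
        probs = model.get(seq[-1])
        if not probs:
            break
        symbols, weights = zip(*probs.items())
        seq.append(rng.choices(symbols, weights=weights)[0])
    return seq

# Toy corpus of pitch sequences (invented for illustration).
corpus = [["C", "E", "G", "E", "C"], ["C", "D", "E", "G", "C"]]
model = train_markov(corpus)
print(generate(model, "C", 8))
```

Higher-order models, or the spectral-learning methods mentioned in the thesis abstract, would replace this simple transition table; the sketch only shows the "stochastic rules from examples" idea at its smallest scale.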
scidocsrr
c92887fcef8c52bd58aa9347d4b5e836
Scalable process discovery and conformance checking
[ { "docid": "e3a7b1302e70b003acac4c15057908a7", "text": "modeling business processes a petri net-oriented approach modeling business processes a petri net oriented approach modeling business processes: a petri net-oriented approach modeling business processes a petri net oriented approach modeling business processes a petri net oriented approach modeling business processes: a petri net-oriented approach modeling business processes a petri net oriented approach a petri net-based software process model for developing modeling business processes a petri net oriented approach petri nets and business process management dagstuhl modeling business processes a petri net oriented approach killer app for petri nets process mining a petri net approach to analysis and composition of web information gathering and process modeling in a petri net modeling business processes a petri net oriented approach an ontology-based evaluation of process modeling with business process modeling in inspire using petri nets document about nc fairlane manuals is available on print from business process modeling to the specification of modeling of adaptive cyber physical systems using aspect petri net theory and the modeling of systems tbsh towards agent-based modeling and verification of a discussion of object-oriented process modeling modeling and simulation versions of business process using workflow modeling for virtual enterprise: a petri net simulation of it service processes with petri-nets george mason university the volgenau school of engineering process-oriented business performance management with syst 620 / ece 673 discrete event systems general knowledge questions answers on india tool-based business process modeling using the som approach income/w f a petri net based approach to w orkflow segment 2 exam study guide world history jbacs specifying business processes over objects rd.springer english june exam 2013 question paper 3 nulet", "title": "" } ]
[ { "docid": "db6a91e0216440a4573aee6c78c78cbf", "text": "ObjectiveHeart rate monitoring using wrist type Photoplethysmographic (PPG) signals is getting popularity because of construction simplicity and low cost of wearable devices. The task becomes very difficult due to the presence of various motion artifacts. The objective is to develop algorithms to reduce the effect of motion artifacts and thus obtain accurate heart rate estimation. MethodsProposed heart rate estimation scheme utilizes both time and frequency domain analyses. Unlike conventional single stage adaptive filter, multi-stage cascaded adaptive filtering is introduced by using three channel accelerometer data to reduce the effect of motion artifacts. Both recursive least squares (RLS) and least mean squares (LMS) adaptive filters are tested. Moreover, singular spectrum analysis (SSA) is employed to obtain improved spectral peak tracking. The outputs from the filter block and SSA operation are logically combined and used for spectral domain heart rate estimation. Finally, a tracking algorithm is incorporated considering neighbouring estimates. ResultsThe proposed method provides an average absolute error of 1.16 beat per minute (BPM) with a standard deviation of 1.74 BPM while tested on publicly available database consisting of recordings from 12 subjects during physical activities. ConclusionIt is found that the proposed method provides consistently better heart rate estimation performance in comparison to that recently reported by TROIKA, JOSS and SPECTRAP methods. SignificanceThe proposed method offers very low estimation error and a smooth heart rate tracking with simple algorithmic approach and thus feasible for implementing in wearable devices to monitor heart rate for fitness and clinical purpose.", "title": "" }, { "docid": "abc0a63a3b1ab80e37dedfd88b0f80b0", "text": "Scholars in academia are involved in various social relationships such as advisor-advisee relationships. The analysis of such relationship can provide invaluable information for understanding the interactions among scholars as well as providing many researcher-specific applications such as advisor recommendation and academic rising star identification. However, in most cases, high quality advisor-advisee relationship dataset is unavailable. To address this problem, we propose Shifu, a deep-learning-based advisor-advisee relationship identification method which takes into account both the local properties and network characteristics. In particular, we explore how to crawl advisor-advisee pairs from PhDtree project and extract their publication information by matching them with DBLP dataset as the experimental dataset. To the best of our knowledge, no prior effort has been made to address the scientific collaboration network features for relationship identification by exploiting deep learning. Our experiments demonstrate that the proposed method outperforms other state-of-the-art machine learning methods in precision (94%). Furthermore, we apply Shifu to the entire DBLP dataset and obtain a large-scale advisor-advisee relationship dataset.", "title": "" }, { "docid": "4dca30abbc390ef2bec26861dbe244e3", "text": "In 1997, the National Institute of Standards and Technology (NIST) initiated a process to select a symmetric-key encryption algorithm to be used to protect sensitive (unclassified) Federal information in furtherance of NIST's statutory responsibilities. 
In 1998, NIST announced the acceptance of 15 candidate algorithms and requested the assistance of the cryptographic research community in analyzing the candidates. This analysis included an initial examination of the security and efficiency characteristics for each algorithm. NIST reviewed the results of this preliminary research and selected MARS, RC™, Rijndael, Serpent and Twofish as finalists. Having reviewed further public analysis of the finalists, NIST has decided to propose Rijndael as the Advanced Encryption Standard (AES). The research results and rationale for this selection are documented in this report.", "title": "" }, { "docid": "3d34dc15fa11e723a52b21dc209a939f", "text": "Valuable information can be hidden in images, however, few research discuss data mining on them. In this paper, we propose a general framework based on the decision tree for mining and processing image data. Pixel-wised image features were extracted and transformed into a database-like table which allows various data mining algorithms to make explorations on it. Each tuple of the transformed table has a feature descriptor formed by a set of features in conjunction with the target label of a particular pixel. With the label feature, we can adopt the decision tree induction to realize relationships between attributes and the target label from image pixels, and to construct a model for pixel-wised image processing according to a given training image dataset. Both experimental and theoretical analyses were performed in this study. Their results show that the proposed model can be very efficient and effective for image processing and image mining. It is anticipated that by using the proposed model, various existing data mining and image processing methods could be worked on together in different ways. Our model can also be used to create new image processing methodologies, refine existing image processing methods, or act as a powerful image filter.", "title": "" }, { "docid": "321049dbe0d9bae5545de3d8d7048e01", "text": "ShopTalk, a proof-of-concept system designed to assist individuals with visual impairments with finding shelved products in grocery stores, is built on the assumption that simple verbal route directions and layout descriptions can be used to leverage the O&M skills of independent visually impaired travelers to enable them to navigate the store and retrieve shelved products. This paper introduces ShopTalk and summarizes experiments performed in a real-world supermarket.", "title": "" }, { "docid": "0fddd08dfdf2c545381b5a7580ba717d", "text": "Deep neural networks (DNNs) trained on large-scale datasets have recently achieved impressive improvements in face recognition. But a persistent challenge remains to develop methods capable of handling large pose variations that are relatively under-represented in training data. This paper presents a method for learning a feature representation that is invariant to pose, without requiring extensive pose coverage in training data. We first propose to use a synthesis network for generating non-frontal views from a single frontal image, in order to increase the diversity of training data while preserving accurate facial details that are critical for identity discrimination. Our next contribution is a multi-source multi-task DNN that seeks a rich embedding representing identity information, as well as information such as pose and landmark locations. 
Finally, we propose a Siamese network to explicitly disentangle identity and pose, by demanding alignment between the feature reconstructions through various combinations of identity and pose features obtained from two images of the same subject. Experiments on face datasets in both controlled and wild scenarios, such as MultiPIE, LFW and 300WLP, show that our method consistently outperforms the state-of-the-art, especially on images with large head pose variations.", "title": "" }, { "docid": "d5b304f3ee80b07a85e1c75264cce9b1", "text": "Personal robotic assistants help reducing the manual efforts being put by humans in their day-to-day tasks. In this paper, we develop a voice-controlled personal assistant robot. The human voice commands are given to the robotic assistant remotely, by using a smart mobile phone. The robot can perform different movements, turns, start/stop operations and relocate an object from one place to another. The voice commands are processed in real-time, using an online cloud server. The speech signal commands converted to text form are communicated to the robot over a Bluetooth network. The personal assistant robot is developed on a micro-controller based platform and can be aware of its current location. The effectiveness of the voice control communicated over a distance is measured through several experiments. Performance evaluation is carried out with encouraging results of the initial experiments. Possible improvements are also discussed towards potential applications in home, hospitals and industries.", "title": "" }, { "docid": "c8547eb5f34fff57d6ffa107acad6ae1", "text": "Depression is a common mental disorder and one of the main causes of disease burden worldwide. Several studies in depression address the relation between non-verbal cues and different levels of depression. Manual coding of non-verbal cues is the common practice for running such studies, which is time consuming and non-objective. Recent research has looked into automatic detection of cues associated with depression. However, most of the work has focussed on facial cues such as facial expressions, gaze and head pose. Few studies have looked into multimodal features for analysis of depression, mainly focusing on facial movements, head movements and vocal prosody. Body gestures are an understudied modality in that field. We propose to investigate assessment of depression using automatic detection of nonverbal signals of body gestures. Moreover, we propose the use of multimodal fusion of features to incorporate body as well as face and head for better inference of depression level. Automatic analysis of such body cues can serve as a tool for experimental psychologists. Also, it can assist physicians in diagnosing by providing quantitative measures after or during face to face sessions or telemedicine sessions or even in systems like a virtual coach.", "title": "" }, { "docid": "51f66b4ff06999f6ce7df45a1db1d8f7", "text": "Smart homes with advanced building technologies can react to sensor triggers in a variety of preconfigured ways. These rules are usually only visible within designated configuration interfaces. For this reason inhabitants who are not actively involved in the configuration process can be taken by surprise by the effects of such rules, such as for example the unexpected automated actions of lights or shades. 
To provide these inhabitants with better means to understand their home, as well as to increase their motivation to actively engage with its configuration, we propose Casalendar, a visualization that integrates the status of smart home technologies into the familiar interface of a calendar. We present our design and initial findings about the application of a temporal metaphor in smart home interfaces.", "title": "" }, { "docid": "9e37c463a38a3efe746d9af7e8872dc6", "text": "OBJECTIVES\nTo examine the relationship of corporal punishment with children's behavior problems while accounting for neighborhood context and while using stronger statistical methods than previous literature in this area, and to examine whether different levels of corporal punishment have different effects in different neighborhood contexts.\n\n\nDESIGN\nLongitudinal cohort study.\n\n\nSETTING\nGeneral community.\n\n\nPARTICIPANTS\n1943 mother-child pairs from the National Longitudinal Survey of Youth.\n\n\nMAIN OUTCOME MEASURE\nInternalizing and externalizing behavior problem scales of the Behavior Problems Index.\n\n\nRESULTS AND CONCLUSIONS\nParental use of corporal punishment was associated with a 0.71 increase (P<.05) in children's externalizing behavior problems even when several parenting behaviors, neighborhood quality, and all time-invariant variables were accounted for. The association of corporal punishment and children's externalizing behavior problems was not dependent on neighborhood context. The research found no discernible relationship between corporal punishment and internalizing behavior problems.", "title": "" }, { "docid": "da7058526e9b76988e20dae598124c53", "text": "53BP1 is known as a mediator in DNA damage response and a regulator of DNA double-stranded breaks (DSBs) repair. 53BP1 was recently reported to be a centrosomal protein and a binding partner of mitotic polo-like kinase 1 (Plk1). The stability of 53BP1, in response to DSBs, is regulated by its phosphorylation, deubiquitination, and ubiquitination. During mitosis, 53BP1 is stabilized by phosphorylation at S380, a putative binding region with polo-box domain of Plk1, and deubiquitination by ubiquitin-specific protease 7 (USP7). In the absence of DSBs, 53BP1 is abundant in the nucleoplasm; DSB formation results in its rapid localization to the damaged chromatin. Mitotic 53BP1 is also localized at the centrosome and spindle pole. 53BP1 depletion induces mitotic defects such as disorientation of spindle poles attributed to extra centrosomes or mispositioning of centrosomes, leading to phenotypes similar to those in USP7-deficient cells. Here, we discuss how 53BP1 controls the centrosomal integrity through its interaction with USP7 and centromere protein F by regulation of its stability and its physiology in response to DNA damage.", "title": "" }, { "docid": "b29947243b1ad21b0529a6dd8ef3c529", "text": "We define a multiresolution spline technique for combining two or more images into a larger image mosaic. In this procedure, the images to be splined are first decomposed into a set of band-pass filtered component images. Next, the component images in each spatial frequency hand are assembled into a corresponding bandpass mosaic. In this step, component images are joined using a weighted average within a transition zone which is proportional in size to the wave lengths represented in the band. Finally, these band-pass mosaic images are summed to obtain the desired image mosaic. 
In this way, the spline is matched to the scale of features within the images themselves. When coarse features occur near borders, these are blended gradually over a relatively large distance without blurring or otherwise degrading finer image details in the neighborhood of th e border.", "title": "" }, { "docid": "c5118bfd338ed2879477023b69fff911", "text": "The paper describes a study and an experimental verification of remedial strategies against failures occurring in the inverter power devices of a permanent-magnet synchronous motor drive. The basic idea of this design consists in incorporating a fourth inverter pole, with the same topology and capabilities of the other conventional three poles. This minimal redundant hardware, appropriately connected and controlled, allows the drive to face a variety of power device fault conditions while maintaining a smooth torque production. The achieved results also show the industrial feasibility of the proposed fault-tolerant control, that could fit many practical applications.", "title": "" }, { "docid": "e777794833a060f99e11675952cd3342", "text": "In this paper we propose a novel method to utilize the skeletal structure not only for supporting force but for releasing heat by latent heat.", "title": "" }, { "docid": "2fed3f693a52ca9852c9238d3c9abf36", "text": "A thin artificial magnetic conductor (AMC) structure is designed and breadboarded for radar cross-section (RCS) Reduction applications. The design presented in this paper shows the advantage of geometrical simplicity while simultaneously reducing the overall thickness (for the current design ). The design is very pragmatic and is based on a combination of AMC and perfect electric conductor (PEC) cells in a chessboard like configuration. An array of Sievenpiper's mushrooms constitutes the AMC part, while the PEC part is formed by full metallic patches. Around the operational frequency of the AMC-elements, the reflection of the AMC and PEC have opposite phase, so for any normal incident plane wave the reflections cancel out, thus reducing the RCS. The same applies to specular reflections for off-normal incidence angles. A simple basic model has been implemented in order to verify the behavior of this structure, while Ansoft-HFSS software has been used to provide a more thorough analysis. Both bistatic and monostatic measurements have been performed to validate the approach.", "title": "" }, { "docid": "159d1eb55da1d457d20beb7fec14fe42", "text": "With the growing demand in e-learning, numerous research works have been done to enhance teaching quality in e-learning environments. Among these studies, researchers have indicated that adaptive learning is a critical requirement for promoting the learning performance of students. Adaptive learning provides adaptive learning materials, learning strategies and/or courses according to a student’s learning style. Hence, the first step for achieving adaptive learning environments is to identify students’ learning styles. This paper proposes a learning style classification mechanism to classify and then identify students’ learning styles. The proposed mechanism improves k-nearest neighbor (k-NN) classification and combines it with genetic algorithms (GA). To demonstrate the viability of the proposed mechanism, the proposed mechanism is implemented on an open-learning management system. The learning behavioral features of 117 elementary school students are collected and then classified by the proposed mechanism. 
The experimental results indicate that the proposed classification mechanism can effectively classify and identify students’ learning styles. 2009 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "d9c4e90d9538c99206cc80bea2c1f808", "text": "Practical aspects of a real time auto parking controller are considered. A parking algorithm which can guarantee to find a parking path with any initial positions is proposed. The algorithm is theoretically proved and successfully applied to the OSU-ACT in the DARPA Urban Challenge 2007.", "title": "" }, { "docid": "a0850b5f8b2d994b50bb912d6fca3dfb", "text": "In this paper we describe the development of an accurate, smallfootprint, large vocabulary speech recognizer for mobile devices. To achieve the best recognition accuracy, state-of-the-art deep neural networks (DNNs) are adopted as acoustic models. A variety of speedup techniques for DNN score computation are used to enable real-time operation on mobile devices. To reduce the memory and disk usage, on-the-fly language model (LM) rescoring is performed with a compressed n-gram LM. We were able to build an accurate and compact system that runs well below real-time on a Nexus 4 Android phone.", "title": "" }, { "docid": "bab429bf74fe4ce3f387a716964a867f", "text": "We propose a new regularization method based on virtual adversarial loss: a new measure of local smoothness of the conditional label distribution given input. Virtual adversarial loss is defined as the robustness of the conditional label distribution around each input data point against local perturbation. Unlike adversarial training, our method defines the adversarial direction without label information and is hence applicable to semi-supervised learning. Because the directions in which we smooth the model are only \"virtually\" adversarial, we call our method virtual adversarial training (VAT). The computational cost of VAT is relatively low. For neural networks, the approximated gradient of virtual adversarial loss can be computed with no more than two pairs of forward- and back-propagations. In our experiments, we applied VAT to supervised and semi-supervised learning tasks on multiple benchmark datasets. With a simple enhancement of the algorithm based on the entropy minimization principle, our VAT achieves state-of-the-art performance for semi-supervised learning tasks on SVHN and CIFAR-10.", "title": "" }, { "docid": "6af03ef289e32106ba737f2a23b11a4a", "text": "Based on perceptual and computational attention modeling studies, we formulate measures of saliency for an audiovisual stream. Audio saliency is captured by signal modulations and related multi-frequency band features, extracted through nonlinear operators and energy tracking. Visual saliency is measured by means of a spatiotemporal attention model driven by various feature cues (intensity, color, motion). Audio and video curves are integrated in a single attention curve, where events may be enhanced, suppressed or vanished. The presence of salient events is signified on this audiovisual curve by geometrical features such as local extrema, sharp transition points and level sets. An audiovisual saliency-based movie summarization algorithm is proposed and evaluated. The algorithm is shown to perform very well in terms of summary informativeness and enjoyability for movie clips of various genres.", "title": "" } ]
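One of the negative passages above (the multiresolution spline) describes decomposing two images into band-pass components, blending each band over a transition zone proportional to its wavelength, and summing the per-band mosaics. The sketch below illustrates that idea with a simple stack of Gaussian differences in place of the original pyramid construction; the toy images, blur scales, and number of levels are assumptions made for the example.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiresolution_blend(img_a, img_b, mask, levels=5):
    """Blend two images band by band, in the spirit of the multiresolution spline.

    img_a, img_b: float arrays of equal shape; mask: 1.0 where img_a should
    dominate, 0.0 where img_b should. Each band is blended with a mask blurred
    at the same scale, so coarse bands blend over wide zones and fine bands
    over narrow ones.
    """
    blended = np.zeros_like(img_a, dtype=float)
    prev_a, prev_b = img_a.astype(float), img_b.astype(float)
    for level in range(levels):
        sigma = 2.0 ** (level + 1)
        low_a = gaussian_filter(prev_a, sigma)
        low_b = gaussian_filter(prev_b, sigma)
        band_a, band_b = prev_a - low_a, prev_b - low_b      # band-pass components
        weight = gaussian_filter(mask.astype(float), sigma)  # transition zone ~ wavelength
        blended += weight * band_a + (1.0 - weight) * band_b
        prev_a, prev_b = low_a, low_b
    # Add the remaining low-pass residual, blended over the widest transition zone.
    weight = gaussian_filter(mask.astype(float), 2.0 ** levels)
    blended += weight * prev_a + (1.0 - weight) * prev_b
    return blended

# Toy example: splice the left half of one gradient image onto another.
a = np.tile(np.linspace(0, 1, 64), (64, 1))
b = np.tile(np.linspace(1, 0, 64), (64, 1))
m = np.zeros((64, 64)); m[:, :32] = 1.0
out = multiresolution_blend(a, b, m)
print(out.shape)
```

Because the bands plus the final residual sum back to the original image, a constant mask reproduces either input exactly; only the seam region is affected by the per-band weighting.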
scidocsrr
392d4aae5eefff54935bb4132fb2a984
Efficient Machine Learning for Big Data: A Review
[ { "docid": "0713b8668b5faf037b4553517151f9ab", "text": "Deep learning is currently an extremely active research area in machine learning and pattern recognition society. It has gained huge successes in a broad area of applications such as speech recognition, computer vision, and natural language processing. With the sheer size of data available today, big data brings big opportunities and transformative potential for various sectors; on the other hand, it also presents unprecedented challenges to harnessing data and information. As the data keeps getting bigger, deep learning is coming to play a key role in providing big data predictive analytics solutions. In this paper, we provide a brief overview of deep learning, and highlight current research efforts and the challenges to big data, as well as the future trends.", "title": "" } ]
[ { "docid": "630e8f538d566af9375c231dd5195a99", "text": "The investigation of the human microbiome is the most rapidly expanding field in biomedicine. Early studies were undertaken to better understand the role of microbiota in carbohydrate digestion and utilization. These processes include polysaccharide degradation, glycan transport, glycolysis, and short-chain fatty acid production. Recent research has demonstrated that the intricate axis between gut microbiota and the host metabolism is much more complex. Gut microbiota—depending on their composition—have disease-promoting effects but can also possess protective properties. This review focuses on disorders of metabolic syndrome, with special regard to obesity as a prequel to type 2 diabetes, type 2 diabetes itself, and type 1 diabetes. In all these conditions, differences in the composition of the gut microbiota in comparison to healthy people have been reported. Mechanisms of the interaction between microbiota and host that have been characterized thus far include an increase in energy harvest, modulation of free fatty acids—especially butyrate—of bile acids, lipopolysaccharides, gamma-aminobutyric acid (GABA), an impact on toll-like receptors, the endocannabinoid system and “metabolic endotoxinemia” as well as “metabolic infection.” This review will also address the influence of already established therapies for metabolic syndrome and diabetes on the microbiota and the present state of attempts to alter the gut microbiota as a therapeutic strategy.", "title": "" }, { "docid": "3144f076574e5e67a6c69862cc8e2063", "text": "As the number of alerts generated by collaborative applications grows, users receive more unwanted alerts. FeedMe is a general alert management system based on XML feed protocols such as RSS and ATOM. In addition to traditional rule-based alert filtering, FeedMe uses techniques from machine-learning to infer alert preferences based on user feedback. In this paper, we present and evaluate a new collaborative naïve Bayes filtering algorithm. Using FeedMe, we collected alert ratings from 33 users over 29 days. We used the data to design and verify the accuracy of the filtering algorithm and provide insights into alert prediction.", "title": "" }, { "docid": "63355bf6cec82c1f7cb570408cecc694", "text": "We explore innovation, openness, and the duration of intellectual property protection in markets characterized by platforms and their ecosystems of complementary applications. We find that competition among application developers can reduce innovation while competition among platforms can increase innovation. Developers can be better off submitting to platform control as opposed to producing for an unsponsored platform. Although a social planner would open a platform sooner and to a greater degree than would a private platform sponsor, a platform sponsor’s ability to control downstream innovation gives it reason to behave more like a social planner. However, if platforms are to perform this role, platform sponsors need longer duration rights than application developers. Results can inform antitrust and intellectual property regulation, technological innovation, competition policy, and intellectual property strategy.", "title": "" }, { "docid": "33b09a4689b3e948fc8a072c0d9672c2", "text": "This review article identifies and discusses some of main issues and potential problems – paradoxes and pathologies – around the communication of recorded information, and points to some possible solutions. 
The article considers the changing contexts of information communication, with some caveats about the identification of ‘pathologies of information’, and analyses the changes over time in the way in which issues of the quantity and quality of information available have been regarded. Two main classes of problems and issues are discussed. The first comprises issues relating to the quantity and diversity of information available: information overload, information anxiety, etc. The second comprises issues relating to the changing information environment with the advent of Web 2.0: loss of identity and authority, emphasis on micro-chunking and shallow novelty, and the impermanence of information. A final section proposes some means of solution of problems and of improvements to the situation.", "title": "" }, { "docid": "b0c8f427b4c447f31f5a7ec4681f500d", "text": "Embedding large graphs in low dimensional spaces has recently attracted significant interest due to its wide applications such as graph visualization, link prediction and node classification. Existing methods focus on computing the embedding for static graphs. However, many graphs in practical applications are dynamic and evolve constantly over time. Naively applying existing embedding algorithms to each snapshot of dynamic graphs independently usually leads to unsatisfactory performance in terms of stability, flexibility and efficiency. In this work, we present an efficient algorithm DynGEM based on recent advances in deep autoencoders for graph embeddings, to address this problem. The major advantages of DynGEM include: (1) the embedding is stable over time, (2) it can handle growing dynamic graphs, and (3) it has better running time than using static embedding methods on each snapshot of a dynamic graph. We test DynGEM on a variety of tasks including graph visualization, graph reconstruction, link prediction and anomaly detection (on both synthetic and real datasets). Experimental results demonstrate the superior stability and scalability of our approach.", "title": "" }, { "docid": "c5f05fd620e734506874c8ec9e839535", "text": "Superficial vein thrombosis is a rare pathology that was first described by Mondor, although his description of phlebitis was observed exclusively at the thoracic wall. In 1955, Braun-Falco described penile thrombosis and later superficial penile vein thrombosis was first reported by Helm and Hodge. Mondor's disease of the penis is a rare entity with a reported incidence of 1.39%. It is described most of the time as a self-limited disease; however, it causes great morbidity to the patient who suffers from it. The pathogenesis of Mondor's disease is unknown. Its diagnosis is based on clinical signs such as a cordlike induration on the dorsal face of the penis, and imaging studies; Doppler ultrasound is the instrument of choice. Treatment is primarily symptomatic but some cases may require surgical management; however, an accurate diagnosis resolves almost every case. We will describe the symptoms, diagnosis, and treatment of superficial thrombophlebitis of the dorsal vein of the penis.", "title": "" }, { "docid": "3fb6cec95fcaa0f8b6c6e4f649591b35", "text": "This paper presents the performance of DSP, image and 3D applications on recent general-purpose microprocessors using streaming SIMD ISA extensions (integer and floating point). The 9 benchmarks we use for this evaluation have been optimized for DLP and cache use with SIMD extensions and data prefetch.
The result of these cumulated optimizations is a speedup that ranges from 1.9 to 7.1. All the benchmarks were originally computation bound and 7 become memory bandwidth bound with the addition of SIMD and data prefetch. Quadrupling the memory bandwidth has no effect on original kernels but improves the performance of SIMD kernels by 15-55%.", "title": "" }, { "docid": "6dfe8b18e3d825b2ecfa8e6b353bbb99", "text": "In the last decade tremendous effort has been put in the study of the Apollonian circle packings. Given the great variety of mathematics it exhibits, this topic has attracted experts from different fields: number theory, homogeneous dynamics, expander graphs, group theory, to name a few. The principal investigator (PI) contributed to this program in his PhD studies. The scenery along the way formed the horizon of the PI in his early mathematical career. After his PhD studies, the PI has successfully applied tools and ideas from Apollonian circle packings to the studies of topics from various fields, and will continue this endeavor in his proposed research. The proposed problems are roughly divided into three categories: number theory, expander graphs, geometry, each of which will be discussed in depth in later sections. Since Apollonian circle packing provides main inspirations for this proposal, let's briefly review how it comes up and what has been done. We start with four mutually tangent circles, with one circle bounding the other three. We can repeatedly inscribe more and more circles into curvilinear triangular gaps as illustrated in Figure 1, and we call the resultant set an Apollonian circle packing, which consists of infinitely many circles.", "title": "" }, { "docid": "2d62232cfe79a122d661ae7f05a4f883", "text": "The main purpose of this paper is to examine some (potential) applications of quantum computation in AI and to review the interplay between quantum theory and AI. For the readers who are not familiar with quantum computation, a brief introduction to it is provided, and a famous but simple quantum algorithm is introduced so that they can appreciate the power of quantum computation. Also, a (quite personal) survey of quantum computation is presented in order to give the readers a (unbalanced) panorama of the field. The author hopes that this paper will be a useful map for AI researchers who are going to explore further and deeper connections between AI and quantum computation as well as quantum theory although some parts of the map are very rough and other parts are empty, and waiting for the readers to fill in.", "title": "" }, { "docid": "141e927711efe3ee66b0512322bfee9c", "text": "Reputation systems have become an indispensable component of modern E-commerce systems, as they help buyers make informed decisions in choosing trustworthy sellers. To attract buyers and increase the transaction volume, sellers need to earn reasonably high reputation scores. This process usually takes a substantial amount of time. To accelerate this process, sellers can provide price discounts to attract users, but the underlying difficulty is that sellers have no prior knowledge on buyers' preferences over price discounts. In this article, we develop an online algorithm to infer the optimal discount rate from data. We first formulate an optimization framework to select the optimal discount rate given buyers' discount preferences, which is a tradeoff between the short-term profit and the ramp-up time (for reputation).
We then derive the closed-form optimal discount rate, which gives us key insights in applying a stochastic bandits framework to infer the optimal discount rate from the transaction data with regret upper bounds. We show that the computational complexity of evaluating the performance metrics is infeasibly high, and therefore, we develop efficient randomized algorithms with guaranteed performance to approximate them. Finally, we conduct experiments on a dataset crawled from eBay. Experimental results show that our framework can trade 60% of the short-term profit for reducing the ramp-up time by 40%. This reduction in the ramp-up time can increase the long-term profit of a seller by at least 20%.", "title": "" }, { "docid": "045162dbad88cd4d341eed216779bb9b", "text": "BACKGROUND\nCrocodile oil and its products are used as ointments for burns and scalds in traditional medicines. A new ointment formulation - crocodile oil burn ointment (COBO) was developed to provide more efficient wound healing activity. The purpose of the study was to evaluate the burn healing efficacy of this new formulation by employing deep second-degree burns in a Wistar rat model. The analgesic and anti-inflammatory activities of COBO were also studied to provide some evidences for its further use.\n\n\nMATERIALS AND METHODS\nThe wound healing potential of this formulation was evaluated by employing a deep second-degree burn rat model and the efficiency was comparatively assessed against a reference ointment - (1% wt/wt) silver sulfadiazine (SSD). After 28 days, the animals were euthanized and the wounds were removed for transversal and longitudinal histological studies. Acetic acid-induced writhing in mice was used to evaluate the analgesic activity and its anti-inflammatory activity was observed in xylene -induced edema in mice.\n\n\nRESULTS\nCOBO enhanced the burn wound healing (20.5±1.3 d) as indicated by significant decrease in wound closure time compared with the burn control (25.0±2.16 d) (P<0.01). Hair follicles played an importance role in the physiological functions of the skin, and their growth in the wound could be revealed for the skin regeneration situation. Histological results showed that the hair follicles were well-distributed in the post-burn skin of COBO treatment group, and the amounts of total, active, primary and secondary hair follicles in post-burn 28-day skin of COBO treatment groups were more than those in burn control and SSD groups. On the other hand, the analgesic and anti-inflammatory activity of COBO were much better than those of control group, while they were very close to those of moist exposed burn ointment (MEBO).\n\n\nCONCLUSIONS\nCOBO accelerated wound closure, reduced inflammation, and had analgesic effects compared with SSD in deep second degree rat burn model. These findings suggest that COBO would be a potential therapy for treating human burns. Abbreviations: COBO, crocodile oil burn ointment; SSD, silver sulfadiazine; MEBO, moist exposed burn ointment; TCM, traditional Chinese medicine; CHM, Chinese herbal medicine; GC-MS, gas chromatography-mass spectrometry.", "title": "" }, { "docid": "097c9810e636b9cc3ec274ef6c30333d", "text": "Emotion Recognition has expanding significance in helping human-PC collaboration issues. It is a difficult task to understand how other people feel but it becomes even worse to perceive these emotions through a computer. 
With the advancement in technology and increase in application of artificial intelligence, it has become a necessity to automatically recognize the emotions of the user for the human-computer interactions. The need for emotion recognition keeps increasing and it has become applicable in various fields now days. This paper explores the way to recognize different human emotions from our body through wireless signals.", "title": "" }, { "docid": "2f0c2f19a8ad34d9335fff1515af2a65", "text": "In this paper, we present a system to detect symbols on roads (e.g. arrows, speed limits, bus lanes and other pictograms) with a common monoscopic or stereoscopic camera system. No manual labeling of images is necessary since the exact definitions of the symbols in the legal instructions for road paintings are used. With those vector graphics an Optical Character Recognition (OCR) System is trained. If only a monoscopic camera is used, the vanishing point is estimated and an inverse perspective transformation is applied to obtain a distortion free top-view. In case of the stereoscopic camera setup, the 3D reconstruction is projected to a ground plane. TESSERACT, a common OCR system is used to classify the symbols. If odometry or position information is available, a spatial filtering and mapping is possible. The obtained information can be used on one side to improve localization, on the other side to provide further information for planning or generation of planning maps.", "title": "" }, { "docid": "c8e679ff3a99c2e596756a69d22c54a1", "text": "Convolutional Neural Networks (CNNs) have been successfully applied to many computer vision tasks, such as image classification. By performing linear combinations and element-wise nonlinear operations, these networks can be thought of as extracting solely first-order information from an input image. In the past, however, second-order statistics computed from handcrafted features, e.g., covariances, have proven highly effective in diverse recognition tasks. In this paper, we introduce a novel class of CNNs that exploit second-order statistics. To this end, we design a series of new layers that (i) extract a covariance matrix from convolutional activations, (ii) compute a parametric, second-order transformation of a matrix, and (iii) perform a parametric vectorization of a matrix. These operations can be assembled to form a Covariance Descriptor Unit (CDU), which replaces the fully-connected layers of standard CNNs. Our experiments demonstrate the benefits of our new architecture, which outperform the first-order CNNs, while relying on up to 90% fewer parameters.", "title": "" }, { "docid": "acf514a4aa34487121cc853e55ceaed4", "text": "Stereotype threat spillover is a situational predicament in which coping with the stress of stereotype confirmation leaves one in a depleted volitional state and thus less likely to engage in effortful self-control in a variety of domains. We examined this phenomenon in 4 studies in which we had participants cope with stereotype and social identity threat and then measured their performance in domains in which stereotypes were not \"in the air.\" In Study 1 we examined whether taking a threatening math test could lead women to respond aggressively. In Study 2 we investigated whether coping with a threatening math test could lead women to indulge themselves with unhealthy food later on and examined the moderation of this effect by personal characteristics that contribute to identity-threat appraisals. 
In Study 3 we investigated whether vividly remembering an experience of social identity threat results in risky decision making. Finally, in Study 4 we asked whether coping with threat could directly influence attentional control and whether the effect was implemented by inefficient performance monitoring, as assessed by electroencephalography. Our results indicate that stereotype threat can spill over and impact self-control in a diverse array of nonstereotyped domains. These results reveal the potency of stereotype threat and that its negative consequences might extend further than was previously thought.", "title": "" }, { "docid": "33fed2809c57080110b00e5b3994d19a", "text": "Suppose we are given a set of generators for a group G of permutations of a colored set A. The color automorphism problem for G involves finding generators for the subgroup of G which stabilizes the color classes. Testing isomorphism of graphs of valence ≤ t is polynomial-time reducible to the color automorphism problem for groups with small simple sections. The algorithm for the latter problem involves several divide-and-conquer tricks. The problem is solved sequentially on the G-orbits. An orbit is broken into a minimal set of blocks permuted by G. The hypothesis on G guarantees the existence of a 'large' subgroup P which acts as a p-group on the blocks. A similar process is repeated for each coset of P on G. Some results on primitive permutation groups are used to show that the algorithm runs in polynomial time.", "title": "" }, { "docid": "ac53cbf7b760978a4a4c7fa80095fd31", "text": "Aggregation queries on data streams are evaluated over evolving and often overlapping logical views called windows. While the aggregation of periodic windows were extensively studied in the past through the use of aggregate sharing techniques such as Panes and Pairs, little to no work has been put in optimizing the aggregation of very common, non-periodic windows. Typical examples of non-periodic windows are punctuations and sessions which can implement complex business logic and are often expressed as user-defined operators on platforms such as Google Dataflow or Apache Storm. The aggregation of such non-periodic or user-defined windows either falls back to expensive, best-effort aggregate sharing methods, or is not optimized at all.\n In this paper we present a technique to perform efficient aggregate sharing for data stream windows, which are declared as user-defined functions (UDFs) and can contain arbitrary business logic. To this end, we first introduce the concept of User-Defined Windows (UDWs), a simple, UDF-based programming abstraction that allows users to programmatically define custom windows. We then define semantics for UDWs, based on which we design Cutty, a low-cost aggregate sharing technique. Cutty improves and outperforms the state of the art for aggregate sharing on single and multiple queries. Moreover, it enables aggregate sharing for a broad class of non-periodic UDWs. We implemented our techniques on Apache Flink, an open source stream processing system, and performed experiments demonstrating orders of magnitude of reduction in aggregation costs compared to the state of the art.", "title": "" }, { "docid": "af12993c21eb626a7ab8715da1f608c9", "text": "Today, both the military and commercial sectors are placing an increased emphasis on global communications. This has prompted the development of several low earth orbit satellite systems that promise worldwide connectivity and real-time voice communications. 
This article provides a tutorial overview of the IRIDIUM low earth orbit satellite system and performance results obtained via simulation. First, it presents an overview of key IRIDIUM design parameters and features. Then, it examines the issues associated with routing in a dynamic network topology, focusing on network management and routing algorithm selection. Finally, it presents the results of the simulation and demonstrates that the IRIDIUM system is a robust system capable of meeting published specifications.", "title": "" }, { "docid": "89af4054eb70309acab13bdb283bde3b", "text": "How to model distribution of sequential data, including but not limited to speech and human motions, is an important ongoing research problem. It has been demonstrated that model capacity can be significantly enhanced by introducing stochastic latent variables in the hidden states of recurrent neural networks. Simultaneously, WaveNet, equipped with dilated convolutions, achieves astonishing empirical performance in natural speech generation task. In this paper, we combine the ideas from both stochastic latent variables and dilated convolutions, and propose a new architecture to model sequential data, termed as Stochastic WaveNet, where stochastic latent variables are injected into the WaveNet structure. We argue that Stochastic WaveNet enjoys powerful distribution modeling capacity and the advantage of parallel training from dilated convolutions. In order to efficiently infer the posterior distribution of the latent variables, a novel inference network structure is designed based on the characteristics of WaveNet architecture. State-of-the-art performances on benchmark datasets are obtained by Stochastic WaveNet on natural speech modeling and high quality human handwriting samples can be generated as well.", "title": "" }, { "docid": "f0c4c1a82eee97d19012421614ee5d5f", "text": "Although the widespread use of gaming for leisure purposes has been well documented, the use of games to support cultural heritage purposes, such as historical teaching and learning, or for enhancing museum visits, has been less well considered. The state-of-the-art in serious game technology is identical to that of the state-of-the-art in entertainment games technology. As a result the field of serious heritage games concerns itself with recent advances in computer games, real-time computer graphics, virtual and augmented reality and artificial intelligence. On the other hand, the main strengths of serious gaming applications may be generalised as being in the areas of communication, visual expression of information, collaboration mechanisms, interactivity and entertainment. In this report, we will focus on the state-of-the-art with respect to the theories, methods and technologies used in serious heritage games. We provide an overview of existing literature of relevance to the domain, discuss the strengths and weaknesses of the described methods and point out unsolved problems and challenges. In addition, several case studies illustrating the application of methods and technologies used in cultural heritage are presented.", "title": "" } ]
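Another negative passage in the list above proposes second-order CNNs built around a Covariance Descriptor Unit that extracts a covariance matrix from convolutional activations, transforms it, and vectorizes it. The sketch below is a rough, non-parametric approximation of that pipeline in NumPy; the matrix-logarithm normalization and the toy activation tensor are assumptions made for illustration, not details taken from the paper.

```python
import numpy as np

def covariance_descriptor(feature_map, eps=1e-5):
    """Turn a C x H x W activation tensor into a second-order descriptor.

    Illustrative sketch only: covariance of convolutional activations,
    a simple spectral normalization, then vectorization of the upper triangle.
    The paper's learned (parametric) transformation layers are not reproduced.
    """
    c, h, w = feature_map.shape
    x = feature_map.reshape(c, h * w)                # each column is one spatial location
    x = x - x.mean(axis=1, keepdims=True)
    cov = (x @ x.T) / (h * w - 1) + eps * np.eye(c)  # C x C covariance, regularized
    # Log of the eigenvalues: a common normalization for covariance features.
    vals, vecs = np.linalg.eigh(cov)
    cov = vecs @ np.diag(np.log(np.maximum(vals, eps))) @ vecs.T
    iu = np.triu_indices(c)
    return cov[iu]                                    # vectorize the upper triangle

feat = np.random.default_rng(0).standard_normal((8, 14, 14))
print(covariance_descriptor(feat).shape)              # (8*9/2,) = (36,)
```

In the full architecture this descriptor would replace the fully-connected head of a standard CNN; here it simply shows how second-order statistics are formed from first-order activations.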
scidocsrr
436aefe1990c4ec97ed54ecc485fbbff
Using Language Models for Information Retrieval
[ { "docid": "9fc2d92c42400a45cb7bf6c998dc9236", "text": "This paper presents a new probabilistic model of information retrieval. The most important modeling assumption made is that documents and queries are defined by an ordered sequence of single terms. This assumption is not made in well-known existing models of information retrieval, but is essential in the field of statistical natural language processing. Advances already made in statistical natural language processing will be used in this paper to formulate a probabilistic justification for using tf×idf term weighting. The paper shows that the new probabilistic interpretation of tf×idf term weighting might lead to better understanding of statistical ranking mechanisms, for example by explaining how they relate to coordination level ranking. A pilot experiment on the TREC collection shows that the linguistically motivated weighting algorithm outperforms the popular BM25 weighting algorithm.", "title": "" } ]
[ { "docid": "cb07917dd2a885bb45e8ccca94156c4d", "text": "In this article, I summarise the ontological theory of informational privacy (an approach based on information ethics) and then discuss four types of interesting challenges confronting any theory of informational privacy: (1) parochial ontologies and non-Western approaches to informational privacy; (2) individualism and the anthropology of informational privacy; (3) the scope and limits of informational privacy; and (4) public, passive and active informational privacy. I argue that the ontological theory of informational privacy can cope with such challenges fairly successfully. In the conclusion, I discuss some of the work that lies ahead.", "title": "" }, { "docid": "3882687dfa4f053d6ae128cf09bb8994", "text": "In recent years, we have seen tremendous progress in the field of object detection. Most of the recent improvements have been achieved by targeting deeper feedforward networks. However, many hard object categories such as bottle, remote, etc. require representation of fine details and not just coarse, semantic representations. But most of these fine details are lost in the early convolutional layers. What we need is a way to incorporate finer details from lower layers into the detection architecture. Skip connections have been proposed to combine high-level and low-level features, but we argue that selecting the right features from low-level requires top-down contextual information. Inspired by the human visual pathway, in this paper we propose top-down modulations as a way to incorporate fine details into the detection framework. Our approach supplements the standard bottom-up, feedforward ConvNet with a top-down modulation (TDM) network, connected using lateral connections. These connections are responsible for the modulation of lower layer filters, and the top-down network handles the selection and integration of contextual information and lowlevel features. The proposed TDM architecture provides a significant boost on the COCO benchmark, achieving 28.6 AP for VGG16 and 35.2 AP for ResNet101 networks. Using InceptionResNetv2, our TDM model achieves 37.3 AP, which is the best single-model performance to-date on the COCO testdev benchmark, without any bells and whistles.", "title": "" }, { "docid": "2ddf3153ec8432d226c419748b5b4828", "text": "Visualized data often have dubious origins and quality. Different forms of uncertainty and errors are also introduced as the data are derived, transformed, interpolated, and finally rendered. This paper surveys uncertainty visualization techniques that present data so that users are made aware of the locations and degree of uncertainties in their data. The techniques include adding glyphs, adding geometry, modifying geometry, modifying attributes, animation, sonification, and psychovisual approaches. We present our results in uncertainty visualization for environmental visualization, surface interpolation, global illumination with radiosity, flow visualization, and figure animation. We also present a classification of the possibilities in uncertainty visualization and locate our contributions within this classification.", "title": "" }, { "docid": "6adfcf6aec7b33a82e3e5e606c93295d", "text": "Cyber security is a serious global concern. 
The potential of cyber terrorism has posed a threat to national security; meanwhile the increasing prevalence of malware and incidents of cyber attacks hinder the utilization of the Internet to its greatest benefit and incur significant economic losses to individuals, enterprises, and public organizations. This paper presents some recent advances in intrusion detection, feature selection, and malware detection. In intrusion detection, stealthy and low-profile attacks that include only a few carefully crafted packets over an extended period of time to delude firewalls and the intrusion detection system (IDS) have been difficult to detect. In protection against malware (trojans, worms, viruses, etc.), how to detect polymorphic and metamorphic versions of recognized malware using static scanners is a great challenge. We present in this paper an agent-based IDS architecture that is capable of detecting probe attacks at the originating host and denial of service (DoS) attacks at the boundary controllers. We investigate and compare the performance of different classifiers implemented for intrusion detection purposes. Further, we study the performance of the classifiers in real-time detection of probes and DoS attacks, with respect to intrusion data collected on a real operating network that includes a variety of simulated attacks. Feature selection is as important for IDS as it is for many other modeling problems. We present several techniques for feature selection and compare their performance in the IDS application. It is demonstrated that, with appropriately chosen features, both probes and DoS attacks can be detected in real time or near real time at the originating host or at the boundary controllers. We also briefly present some encouraging recent results in detecting polymorphic and metamorphic malware with advanced static, signature-based scanning techniques.", "title": "" }, { "docid": "df114396d546abfc9b6f1767e3bab8db", "text": "I briefly highlight the salient properties of modified-inertia formulations of MOND, contrasting them with those of modified-gravity formulations, which describe practically all theories propounded to date. Future data (e.g. the establishment of the Pioneer anomaly as a new physics phenomenon) may prefer one of these broad classes of theories over the other. I also outline some possible starting ideas for modified inertia. 1. Modified MOND inertia vs. modified MOND gravity. MOND is a modification of non-relativistic dynamics involving an acceleration constant a_0. In the formal limit a_0 → 0, standard Newtonian dynamics is restored. In the deep MOND limit, a_0 → ∞, a_0 and G appear in the combination (Ga_0). Much of the NR phenomenology follows from this simple prescription, including the asymptotic flatness of rotation curves, the mass-velocity relations (baryonic Tully-Fisher and Faber-Jackson relations), mass discrepancies in LSB galaxies, etc. There are many realizations (theories) that embody the above dictates, relativistic and non-relativistic. The possibly very significant fact that a_0 ∼ cH_0 ∼ c(Λ/3)^(1/2) may hint at the origin of MOND, and is most probably telling us that (a) MOND is an effective theory having to do with how the universe at large shapes local dynamics, and (b) in a Lorentz universe (with H_0 = 0, Λ = 0) a_0 = 0 and standard dynamics holds. 
We can broadly classify modified theories into two classes (with the boundary not so sharply defined): In modified-gravity (MG) formulations the field equation of the gravitational field (potential, metric) is modified; the equations of motion of other degrees of freedom (DoF) in the field are not. In modified-inertia (MI) theories the opposite is true. More precisely, in theories derived from an action, modifying inertia is tantamount to modifying the kinetic (free) actions of the non-gravitational degrees of freedom. Local, relativistic theories in which the kinetic", "title": "" }, { "docid": "571f07c7c8ba724d3e266788e5dac622", "text": "The memory system is a fundamental performance and energy bottleneck in almost all computing systems. Recent system design, application, and technology trends that require more capacity, bandwidth, efficiency, and predictability out of the memory system make it an even more important system bottleneck. At the same time, DRAM technology is experiencing difficult technology scaling challenges that make the maintenance and enhancement of its capacity, energy-efficiency, and reliability significantly more costly with conventional techniques. In this paper, after describing the demands and challenges faced by the memory system, we examine some promising research and design directions to overcome challenges posed by memory scaling. Specifically, we survey three key solution directions: 1) enabling new DRAM architectures, functions, interfaces, and better integration of the DRAM and the rest of the system, 2) designing a memory system that employs emerging memory technologies and takes advantage of multiple different technologies, 3) providing predictable performance and QoS to applications sharing the memory system. We also briefly describe our ongoing related work in combating scaling challenges of NAND flash memory.", "title": "" }, { "docid": "7525b24d3e0c6332cdc3eb58c7677b63", "text": "OBJECTIVE\nTo compare the efficacy of 2 intensified insulin regimens, continuous subcutaneous insulin infusion (CSII) and multiple daily injections (MDI), by using the short-acting insulin analog lispro in type 1 diabetic patients.\n\n\nRESEARCH DESIGN AND METHODS\nA total of 41 C-peptide-negative type 1 diabetic patients (age 43.5+/-10.3 years; 21 men and 20 women, BMI 24.0+/-2.4 kg/m2, diabetes duration 20.0+/-11.3 years) on intensified insulin therapy (MDI with regular insulin or lispro, n = 9, CSII with regular insulin, n = 32) were included in an open-label randomized crossover study comparing two 4-month periods of intensified insulin therapy with lispro: one period by MDI and the other by CSII. Blood glucose (BG) was monitored before and after each of the 3 meals each day.\n\n\nRESULTS\nThe basal insulin regimen had to be optimized in 75% of the patients during the MDI period (mean number of NPH injections per day = 2.65). HbA1c values were lower when lispro was used in CSII than in MDI (7.89+/-0.77 vs. 8.24+/-0.77%, P<0.001). BG levels were lower with CSII (165+/-27 vs. 175+/-33 mg/dl, P<0.05). The SD of all the BG values (73+/-15 vs. 82+/-18 mg/dl, P<0.01) was lower with CSII. The frequency of hypoglycemic events, defined as BG levels <60 mg/dl, did not differ significantly between the 2 modalities (CSII 3.9+/-4.2 per 14 days vs. MDI 4.3+/-3.9 per 14 days). Mean insulin doses were significantly lower with CSII than with MDI (38.5+/-9.8 vs. 47.3+/-14.9 U/day, 
respectively, P< 0.0001).\n\n\nCONCLUSIONS\nWhen used with external pumps versus MDI, lispro provides better glycemic control and stability with much lower doses of insulin and does not increase the frequency of hypoglycemic episodes.", "title": "" }, { "docid": "6a4815ee043e83994e4345b6f4352198", "text": "Object detection – the computer vision task dealing with detecting instances of objects of a certain class (e.g ., ’car’, ’plane’, etc.) in images – attracted a lot of attention from the community during the last 5 years. This strong interest can be explained not only by the importance this task has for many applications but also by the phenomenal advances in this area since the arrival of deep convolutional neural networks (DCNN). This article reviews the recent literature on object detection with deep CNN, in a comprehensive way, and provides an in-depth view of these recent advances. The survey covers not only the typical architectures (SSD, YOLO, Faster-RCNN) but also discusses the challenges currently met by the community and goes on to show how the problem of object detection can be extended. This survey also reviews the public datasets and associated state-of-the-art algorithms.", "title": "" }, { "docid": "3fa0911a8e65461a0c1014cc481293bb", "text": "Researchers are using emerging technologies to develop novel play environments, while established computer and console game markets continue to grow rapidly. Even so, evaluating the success of interactive play environments is still an open research challenge. Both subjective and objective techniques fall short due to limited evaluative bandwidth; there remains no corollary in play environments to task performance with productivity systems. This paper presents a method of modeling user emotional state, based on a user's physiology, for users interacting with play technologies. Modeled emotions are powerful because they capture usability and playability through metrics relevant to ludic experience; account for user emotion; are quantitative and objective; and are represented continuously over a session. Furthermore, our modeled emotions show the same trends as reported emotions for fun, boredom, and excitement; however, the modeled emotions revealed differences between three play conditions, while the differences between the subjective reports failed to reach significance.", "title": "" }, { "docid": "99100c269525cea2e4c2d29f12afc5e9", "text": "We do things in the world by exploiting our knowledge of what causes what. But in trying to reason formally about causality, there is a difficulty: to reason with certainty we need complete knowledge of all the relevant events and circumstances, whereas in everyday reasoning tasks we need a more serviceable but looser notion that does not make such demands on our knowledge. In this work the notion of “causal complex” is introduced for a complete set of events and conditions necessary for the causal consequent to occur, and the term “cause” is used for the makeshift, nonmonotonic notion we require for everyday tasks such as planning and language understanding. Like all interesting concepts, neither of these can be defined with necessary and sufficient conditions, but they can be more or less tightly constrained by necessary conditions or sufficient conditions. 
The issue of how to distinguish between what is in a causal complex from what is outside it is discussed, and within a causal complex, how to distinguish the eventualities that deserve to be called “causes” from those that do not, in particular circumstances. One particular modal, the word “would”, is examined from the standpoint of its underlying causal content, as a linguistic motivation for this enterprise.", "title": "" }, { "docid": "a95a46cdf179f9501b7409da9975767f", "text": "Gibson's ecological theory of perception has received considerable attention within psychology literature, as well as in computer vision and robotics. However, few have applied Gibson's approach to agent-based models of human movement, because the ecological theory requires that individuals have a vision-based mental model of the world, and for large numbers of agents this becomes extremely expensive computationally. Thus, within current pedestrian models, path evaluation is based on calibration from observed data or on sophisticated but deterministic route-choice mechanisms; there is little open-ended behavioural modelling of human-movement patterns. One solution which allows individuals rapid concurrent access to the visual information within an environment is an 'exosomatic visual architecture', where the connections between mutually visible locations within a configuration are prestored in a lookup table. Here we demonstrate that, with the aid of an exosomatic visual architecture, it is possible to develop behavioural models in which movement rules originating from Gibson's principle of affordance are utilised. We apply large numbers of agents programmed with these rules to a built-environment example and show that, by varying parameters such as destination selection, field of view, and steps taken between decision points, it is possible to generate aggregate movement levels very similar to those found in an actual building context. DOI:10.1068/b12850", "title": "" }, { "docid": "0aed520f94adb72aa23ae09e2267b364", "text": "John Perry has argued that language, thought and experience often contain unarticulated constituents. I argue that this idea holds the key to explaining away the intuitive appeal of the A-theory of time and the endurance theory of persistence. The A-theory has seemed intuitively appealing because the nature of temporal experience makes it natural for us to use one-place predicates like past to deal with what are really two-place relations, one of whose constituents is unarticulated. The endurance view can be treated in a similar way; the temporal boundaries of temporal parts of objects are unarticulated in experience and this makes it seem that the very same entity exists at different times.", "title": "" }, { "docid": "9aafccebe8bd126c22cf49d7ab652801", "text": "Smartphones today store large amounts of data that can be confidential, private or sensitive. To protect such data, all mobile OSs have a phone lock mechanism, a mechanism that requires user authentication before granting access to applications and data on the phone. iPhone's unlocking secret (a.k.a., passcode in Apple's terminology) is also used to derive a key for encrypting data on the device. Recently, Apple has introduced Touch ID, which allows a fingerprint-based authentication to be used for unlocking an iPhone. The intuition behind the technology was that its usability would allow users to use stronger passcodes for locking their iOS devices, without substantially sacrificing usability. 
To this date, it is unclear, however, if users take advantage of Touch ID technology and if they, indeed, employ stronger passcodes. It is the main objective and the contribution of this paper to fill this knowledge gap. In order to answer this question, we conducted three user studies (a) an in-person survey with 90 participants, (b) interviews with 21 participants, and (c) an online survey with 374 Amazon Mechanical Turks. Overall, we found that users do not take an advantage of Touch ID and use weak unlocking secrets, mainly 4-digit PINs, similarly to those users who do not use Touch ID. To our surprise, we found that more than 30% of the participants in each group did not know that they could use passwords instead of 4-digit PINs. Some other participants indicated that they adopted PINs due to better usability, in comparison to passwords. Most of the participants agreed that Touch ID, indeed, offers usability benefits, such as convenience, speed and ease of use. Finally, we found that there is a disconnect between users’ desires for security that their passcodes have to offer and the reality. In particular, only 12% of participants correctly estimated the security their passcodes provide.", "title": "" }, { "docid": "c77fec3ea0167df15cfd4105a7101a1e", "text": "This paper is about extending the reach and endurance of outdoor localisation using stereo vision. At the heart of the localisation is the fundamental task of discovering feature correspondences between recorded and live images. One aspect of this problem involves deciding where to look for correspondences in an image and the second is deciding what to look for. This latter point, which is the main focus of our paper, requires understanding how and why the appearance of visual features can change over time. In particular, such knowledge allows us to better deal with abrupt and challenging changes in lighting. We show how by instantiating a parallel image processing stream which operates on illumination-invariant images, we can substantially improve the performance of an outdoor visual navigation system. We will demonstrate, explain and analyse the effect of the RGB to illumination-invariant transformation and suggest that for little cost it becomes a viable tool for those concerned with having robots operate for long periods outdoors.", "title": "" }, { "docid": "fad164e21c7ec013450a8b96d75d9457", "text": "Pinterest is a visual discovery tool for collecting and organizing content on the Web with over 70 million users. Users “pin” images, videos, articles, products, and other objects they find on the Web, and organize them into boards by topic. Other users can repin these and also follow other users or boards. Each user organizes things differently, and this produces a vast amount of human-curated content. For example, someone looking to decorate their home might pin many images of furniture that fits their taste. These curated collections produce a large number of associations between pins, and we investigate how to leverage these associations to surface personalized content to users. Little work has been done on the Pinterest network before due to lack of availability of data. We first performed an analysis on a representative sample of the Pinterest network. After analyzing the network, we created recommendation systems, suggesting pins that users would be likely to repin or like based on their previous interactions on Pinterest. 
We created recommendation systems using four approaches: a baseline recommendation system using the power law distribution of the images; a content-based filtering algorithm; and two collaborative filtering algorithms, one based on one-mode projection of a bipartite graph, and the second using a label propagation approach.", "title": "" }, { "docid": "4f2ebb2640a36651fd8c01f3eeb0e13e", "text": "This paper addresses pixel-level segmentation of a human body from a single image. The problem is formulated as a multi-region segmentation where the human body is constrained to be a collection of geometrically linked regions and the background is split into a small number of distinct zones. We solve this problem in a Bayesian framework for jointly estimating articulated body pose and the pixel-level segmentation of each body part. Using an image likelihood function that simultaneously generates and evaluates the image segmentation corresponding to a given pose, we robustly explore the posterior body shape distribution using a data-driven, coarse-to-fine Metropolis Hastings sampling scheme that includes a strongly data-driven proposal term.", "title": "" }, { "docid": "8eace30c00d9b118635dc8a2e383f36b", "text": "Wafer Level Packaging (WLP) has the highest potential for future single chip packages because the WLP is intrinsically a chip size package. The package is completed directly on the wafer then singulated by dicing for the assembly. All packaging and testing operations of the dice are replaced by whole wafer fabrication and wafer level testing. Therefore, it becomes more cost-effective with decreasing die size or increasing wafer size. However, due to the intrinsic mismatch of the coefficient of thermal expansion (CTE) between silicon chip and plastic PCB material, solder ball reliability subject to temperature cycling becomes the weakest point of the technology. In this paper some fundamental principles in designing WLP structure to achieve the robust reliability are demonstrated through a comprehensive study of a variety of WLP technologies. The first principle is the 'structural flexibility' principle. The more flexible a WLP structure is, the less the stresses that are applied on the solder balls will be. Ball on polymer WLP, Cu post WLP, polymer core solder balls are such examples to achieve better flexibility of overall WLP structure. The second principle is the 'local enhancement' at the interface region of solder balls where fatigue failures occur. Polymer collar WLP, and increasing solder opening size are examples to reduce the local stress level. In this paper, the reliability improvements are discussed through various existing and tested WLP technologies at silicon level and ball level, respectively. The fan-out wafer level packaging is introduced, which is expected to extend the standard WLP to the next stage with unlimited potential applications in future.", "title": "" }, { "docid": "5f3dfafe3d696333f753475aaf201234", "text": "This paper presents an effective scheduling scheme called semi-persistent scheduling for VoIP service in LTE system. The main challenges of effectively supporting VoIP service in LTE system are 1) the tight delay requirement combined with the frequent arrival of small packets of VoIP traffic and 2) the scarcity of radio resources along with control channel restriction in LTE system. 
Simulation results show that semi-persistent scheduling can support high system capacity while at the same time guaranteeing the QoS requirements such as packet delay and packet loss rate of VoIP. Furthermore, semi- persistent scheduling requires less control signaling overhead which is very important for efficient resources utilization in a practical system.", "title": "" }, { "docid": "cb9ba3aaafccae2cd7ea5e32479d2099", "text": "Partial least squares-based structural equation modeling (PLS-SEM) is extensively used in the field of information systems, as well as in many other fields where multivariate statistical methods are employed. One of the most fundamental issues in PLS-SEM is that of minimum sample size estimation. The “10-times rule” has been a favorite due to its simplicity of application, even though it tends to yield imprecise estimates. We propose two related methods, based on mathematical equations, as alternatives for minimum sample size estimation in PLSSEM: the inverse square root method, and the gamma-exponential method. Based on three Monte Carlo experiments, we demonstrate that both methods are fairly accurate. The inverse square root method is particularly attractive in terms of its simplicity of application.", "title": "" }, { "docid": "d18c53be23600c9b0ae2efa215c7c4af", "text": "The problem of large-scale image search has been traditionally addressed with the bag-of-visual-words (BOV). In this article, we propose to use as an alternative the Fisher kernel framework. We first show why the Fisher representation is well-suited to the retrieval problem: it describes an image by what makes it different from other images. One drawback of the Fisher vector is that it is high-dimensional and, as opposed to the BOV, it is dense. The resulting memory and computational costs do not make Fisher vectors directly amenable to large-scale retrieval. Therefore, we compress Fisher vectors to reduce their memory footprint and speed-up the retrieval. We compare three binarization approaches: a simple approach devised for this representation and two standard compression techniques. We show on two publicly available datasets that compressed Fisher vectors perform very well using as little as a few hundreds of bits per image, and significantly better than a very recent compressed BOV approach.", "title": "" } ]
scidocsrr
183e7e9156304b9a39d92f64b2229bb6
Synchronization of Parallel Single-Phase Inverters With Virtual Oscillator Control
[ { "docid": "e3d1282b2ed8c9724cf64251df7e14df", "text": "This paper describes and evaluates the feasibility of control strategies to be adopted for the operation of a microgrid when it becomes isolated. Normally, the microgrid operates in interconnected mode with the medium voltage network; however, scheduled or forced isolation can take place. In such conditions, the microgrid must have the ability to operate stably and autonomously. An evaluation of the need of storage devices and load shedding strategies is included in this paper.", "title": "" }, { "docid": "a5911891697a1b2a407f231cf0ad6c28", "text": "In this paper, a new control method for the parallel operation of inverters operating in an island grid or connected to an infinite bus is described. Frequency and voltage control, including mitigation of voltage harmonics, are achieved without the need for any common control circuitry or communication between inverters. Each inverter supplies a current that is the result of the voltage difference between a reference ac voltage source and the grid voltage across a virtual complex impedance. The reference ac voltage source is synchronized with the grid, with a phase shift, depending on the difference between rated and actual grid frequency. A detailed analysis shows that this approach has a superior behavior compared to existing methods, regarding the mitigation of voltage harmonics, short-circuit behavior and the effectiveness of the frequency and voltage control, as it takes the R to X line impedance ratio into account. Experiments show the behavior of the method for an inverter feeding a highly nonlinear load and during the connection of two parallel inverters in operation.", "title": "" } ]
[ { "docid": "c20ec248b9d6ebabb5e4940ac6a15602", "text": "The purpose of this paper is to provide an overview of second language acquisition (SLA) research over the past several decades, and to highlight the ways in which it has retained its original applied and linguistic interests, and enhanced them by addressing questions about acquisition processes. As the paper will illustrate, SLA research has become increasingly bi-directional and multi-faceted in its applications. These many applications to and from the study of SLA reflect the robustness and vitality of the field. Disciplines Bilingual, Multilingual, and Multicultural Education Comments Postprint version. Published in Handbook of Research in Second Language Teaching and Learning, edited by Eli Hinkel (Mahway, N.J.: L. Erlbaum Associates, Inc., 2005), pages 263-280. This book chapter is available at ScholarlyCommons: http://repository.upenn.edu/gse_pubs/34 Second Language Acquisition Research and Applied Linguistics Teresa Pica Abstract The purpose of this paper is to provide an overview of second language acquisition (SLA) research over the past several decades, and to highlight the ways in which it has retained its original applied and linguistic interests, and enhanced them by addressing questions about acquisition processes. As the paper will illustrate, SLA research has become increasingly bi-directional and multifaceted in its applications. These many applications to and from the study of SLA reflect the robustness and vitality of the field.The purpose of this paper is to provide an overview of second language acquisition (SLA) research over the past several decades, and to highlight the ways in which it has retained its original applied and linguistic interests, and enhanced them by addressing questions about acquisition processes. As the paper will illustrate, SLA research has become increasingly bi-directional and multifaceted in its applications. These many applications to and from the study of SLA reflect the robustness and vitality of the field. INTRODUCTION Research on second language acquisition (SLA) has expanded enormously since its inception. Studies of SLA have increased in quantity as researchers have addressed a wider range of topics, asked new questions and worked within multiple methodologies. At the same time, the field has become increasingly bidirectional and multi-faceted in its applications. As new theories and research have emerged on language, and even more so, on learning, their application to the study of SLA has been fruitful. It has led to long needed explanations about developmental regularities and persistent difficulties, and has opened up new lines of research on the processes and sequences of second language (L2) development. The application of newer findings from the study of SLA to educational concerns has both informed and sustained long standing debates about the role of the learner's consciousness in the SLA process, and about the nature of the learner's input needs and requirements. A modest, but increasing number of SLA research findings has had direct application to instructional decisions. Most other findings have served as a resource to inform teaching practice. The many applications to and from the study of SLA. are therefore the focus of this paper.", "title": "" }, { "docid": "8697592c3b1725376473f204248b5865", "text": "Deploying drones over the Cloud is an emerging research area motivated by the emergence of Cloud Robotics and the Internet-of-Drones (IoD) paradigms. 
This paper contributes to IoD and to the deployment of drones over the cloud. It presents Dronemap Planner, an innovative service-oriented cloud-based drone management system that provides access to drones through web services (SOAP and REST), schedules missions and promotes collaboration between drones. A modular cloud proxy server was developed; it acts as a moderator between drones and users. Communication between drones, users and the Dronemap Planner cloud is provided through the MAVLink protocol, which is supported by commodity drones. To demonstrate the effectiveness of Dronemap Planner, we implemented and validated it using simulated and real MAVLink-enabled drones, and deployed it on a public cloud server. Experimental results show that Dronemap Planner is efficient in virtualizing the access to drones over the Internet, and provides developers with appropriate APIs to easily program drones' applications.", "title": "" }, { "docid": "2683c65d587e8febe45296f1c124e04d", "text": "We present a new autoencoder-type architecture that is trainable in an unsupervised mode, sustains both generation and inference, and has the quality of conditional and unconditional samples boosted by adversarial learning. Unlike previous hybrids of autoencoders and adversarial networks, the adversarial game in our approach is set up directly between the encoder and the generator, and no external mappings are trained in the process of learning. The game objective compares the divergences of each of the real and the generated data distributions with the canonical distribution in the latent space. We show that direct generator-vs-encoder game leads to a tight coupling of the two components, resulting in samples and reconstructions of a comparable quality to some recently-proposed more complex architectures.", "title": "" }, { "docid": "a560892a1cd4fdefc3271d426a3ff936", "text": "We present a variant of hierarchical marking menus where items are selected using a series of inflection-free simple marks, rather than the single \"zig-zag\" compound mark used in the traditional design. Theoretical analysis indicates that this simple mark approach has the potential to significantly increase the number of items in a marking menu that can be selected efficiently and accurately. A user experiment is presented that compares the simple and compound mark techniques. Results show that the simple mark technique allows for significantly more accurate and faster menu selections overall, but most importantly also in menus with a large number of items where performance of the compound mark technique is particularly poor. The simple mark technique also requires significantly less physical input space to perform the selections, making it particularly suitable for small footprint pen-based input devices. Visual design alternatives are also discussed.", "title": "" }, { "docid": "85e9cc77891dee5491cf750d812530a6", "text": "Total power losses in a distribution network can be minimized by installing a Distributed Generator (DG) with the correct size. In line with this objective, most of the researchers have used multiple types of optimization techniques to regulate the DG's output to compute its optimal size. In this paper, a comparative study of a newly proposed Rank Evolutionary Particle Swarm Optimization (REPSO) method with Evolutionary Particle Swarm Optimization (EPSO) and Traditional Particle Swarm Optimization (PSO) is conducted. 
Both REPSO and EPSO are using the concept of Evolutionary Programming (EP) in Particle Swarm Optimization (PSO) process. The implementation of EP in PSO allows the entire particles to move toward the optimal value faster. A test on determining optimum size of DGs in 69 bus radial distribution system reveals the superiority of REPSO over PSO and EPSO.", "title": "" }, { "docid": "f22bb0a0d3618ce05802e883da1c772f", "text": "OBJECTIVE: Obesity has increased at an alarming rate in recent years and is now a worldwide health problem. We investigated the effects of long-term feeding with tea catechins, which are naturally occurring polyphenolic compounds widely consumed in Asian countries, on the development of obesity in C57BL/6J mice.DESIGN: We measured body weight, adipose tissue mass and liver fat content in mice fed diets containing either low-fat (5% triglyceride (TG)), high-fat (30% TG), or high-fat supplemented with 0.1–0.5% (w/w) tea catechins for 11 months. The β-oxidation activities and related mRNA levels were measured after 1 month of feeding.RESULTS: Supplementation with tea catechins resulted in a significant reduction of high-fat diet-induced body weight gain, visceral and liver fat accumulation, and the development of hyperinsulinemia and hyperleptinemia. Feeding with tea catechins for 1 month significantly increased acyl-CoA oxidase and medium chain acyl-CoA dehydrogenase mRNA expression as well as β-oxidation activity in the liver.CONCLUSION: The stimulation of hepatic lipid metabolism might be a factor responsible for the anti-obesity effects of tea catechins. The present results suggest that long-term consumption of tea catechins is beneficial for the suppression of diet-induced obesity, and it may reduce the risk of associated diseases including diabetes and coronary heart disease.", "title": "" }, { "docid": "890ca9c443b69af5f16dea007435e8c5", "text": "One of the most challenging task in face recognition is to identify people with varied poses. Namely, the test faces have significantly different poses compared with the registered faces. In this paper, we propose a high-level feature learning scheme to extract pose-invariant identity feature for face recognition. First, we build a single-hidden-layer neural network with sparse constraint, to extract pose-invariant feature in a supervised fashion. Second, we further enhance the discriminative capability of the proposed feature by using multiple random faces as the target values for multiple encoders. By enforcing the target values to be unique for input faces over different poses, the learned high-level feature that is represented by the neurons in the hidden layer is pose free and only relevant to the identity information. Finally, we conduct face identification on CMU Multi-PIE, and verification on Labeled Faces in the Wild (LFW) databases, where identification rank-1 accuracy and face verification accuracy with ROC curve are reported. These experiments demonstrate that our model is superior to other state-of-the-art approaches on handling pose variations.", "title": "" }, { "docid": "f3bed3ce038e087b08164b8468397dc4", "text": "transection of the lumbosacral spinal roots innervating the bladder as well as the hypogastric nerves. 
• The residual, low amplitude evoked contraction during L2 spinal root stimulation is likely due to the low number of direct projections from the L2 ventral horn to the bladder.1 • Recording results suggest hypogastric efferent fibers mainly contribute to bladder storage function.2 • These refined electroneurogram recording methods may be suitable for monitoring sensory and motor activity in the transferred nerves after bladder reinnervation. Ekta Tiwari, Mary F. Barbe, Michel A. Lemay, Danielle M. Salvadeo, Matthew M. Wood, Michael Mazzei, Luke V. Musser, Zdenka J. Delalic, Alan S. Braverman, and Michael R. Ruggieri, Sr.", "title": "" }, { "docid": "ca7870fd17c25a8ef2931cb39c062018", "text": "This paper offers an active inference account of choice behaviour and learning. It focuses on the distinction between goal-directed and habitual behaviour and how they contextualise each other. We show that habits emerge naturally (and autodidactically) from sequential policy optimisation when agents are equipped with state-action policies. In active inference, behaviour has explorative (epistemic) and exploitative (pragmatic) aspects that are sensitive to ambiguity and risk respectively, where epistemic (ambiguity-resolving) behaviour enables pragmatic (reward-seeking) behaviour and the subsequent emergence of habits. Although goal-directed and habitual policies are usually associated with model-based and model-free schemes, we find the more important distinction is between belief-free and belief-based schemes. The underlying (variational) belief updating provides a comprehensive (if metaphorical) process theory for several phenomena, including the transfer of dopamine responses, reversal learning, habit formation and devaluation. Finally, we show that active inference reduces to a classical (Bellman) scheme, in the absence of ambiguity.", "title": "" }, { "docid": "159cd44503cb9def6276cb2b9d33c40e", "text": "In the airline industry, data analysis and data mining are a prerequisite to push customer relationship management (CRM) ahead. Knowledge about data mining methods, marketing strategies and airline business processes has to be combined to successfully implement CRM. This paper is a case study and gives an overview about distinct issues, which have to be taken into account in order to provide a first solution to run CRM processes. We do not focus on each individual task of the project; rather we give a sketch about important steps like data preparation, customer valuation and segmentation and also explain the limitation of the solutions.", "title": "" }, { "docid": "7e2bbd260e58d84a4be8b721cdf51244", "text": "Obesity is characterised by altered gut microbiota, low-grade inflammation and increased endocannabinoid (eCB) system tone; however, a clear connection between gut microbiota and eCB signalling has yet to be confirmed. Here, we report that gut microbiota modulate the intestinal eCB system tone, which in turn regulates gut permeability and plasma lipopolysaccharide (LPS) levels. The impact of the increased plasma LPS levels and eCB system tone found in obesity on adipose tissue metabolism (e.g. differentiation and lipogenesis) remains unknown. By interfering with the eCB system using CB(1) agonist and antagonist in lean and obese mouse models, we found that the eCB system controls gut permeability and adipogenesis. We also show that LPS acts as a master switch to control adipose tissue metabolism both in vivo and ex vivo by blocking cannabinoid-driven adipogenesis. 
These data indicate that gut microbiota determine adipose tissue physiology through LPS-eCB system regulatory loops and may have critical functions in adipose tissue plasticity during obesity.", "title": "" }, { "docid": "a488f95646d7c8a47a0dd9816c12c1ee", "text": "During the past 20 years there has been a dramatic resurgence or emergence of epidemic arboviral diseases affecting both humans and domestic animals. These epidemics have been caused primarily by viruses thought to be under control such as dengue, Japanese encephalitis, yellow fever, and Venezuelan equine encephalitis, or viruses that have expanded their geographic distribution such as West Nile and Rift Valley fever. Several of these viruses are presented as case studies to illustrate the changing epidemiology. The factors responsible for the dramatic resurgence of arboviral diseases in the waning years of the 20th century are discussed, as is the need for rebuilding the public health infrastructure to deal with epidemic vector-borne diseases in the 21st century.", "title": "" }, { "docid": "cf31b8eb971e89d4521c4a70cf181bc3", "text": "In this paper we address the problem of scalable, native and adaptive query processing over Linked Stream Data integrated with Linked Data. Linked Stream Data consists of data generated by stream sources, e.g., sensors, enriched with semantic descriptions, following the standards proposed for Linked Data. This enables the integration of stream data with Linked Data collections and facilitates a wide range of novel applications. Currently available systems use a “black box” approach which delegates the processing to other engines such as stream/event processing engines and SPARQL query processors by translating to their provided languages. As the experimental results described in this paper show, the need for query translation and data transformation, as well as the lack of full control over the query execution, pose major drawbacks in terms of efficiency. To remedy these drawbacks, we present CQELS (Continuous Query Evaluation over Linked Streams), a native and adaptive query processor for unified query processing over Linked Stream Data and Linked Data. In contrast to the existing systems, CQELS uses a “white box” approach and implements the required query operators natively to avoid the overhead and limitations of closed system regimes. CQELS provides a flexible query execution framework with the query processor dynamically adapting to the changes in the input data. During query execution, it continuously reorders operators according to some heuristics to achieve improved query execution in terms of delay and complexity. Moreover, external disk access on large Linked Data collections is reduced with the use of data encoding and caching of intermediate query results. To demonstrate the efficiency of our approach, we present extensive experimental performance evaluations in terms of query execution time, under varied query types, dataset sizes, and number of parallel queries. These results show that CQELS outperforms related approaches by orders of magnitude.", "title": "" }, { "docid": "244be1e978813811e3f5afc1941cd4f5", "text": "In this paper we introduce a new publicly available dataset for verification against textual sources, FEVER: Fact Extraction and VERification. It consists of 185,445 claims generated by altering sentences extracted from Wikipedia and subsequently verified without knowledge of the sentence they were derived from. 
The claims are classified as SUPPORTED, REFUTED or NOTENOUGHINFO by annotators achieving 0.6841 in Fleiss κ. For the first two classes, the annotators also recorded the sentence(s) forming the necessary evidence for their judgment. To characterize the challenge of the dataset presented, we develop a pipeline approach and compare it to suitably designed oracles. The best accuracy we achieve on labeling a claim accompanied by the correct evidence is 31.87%, while if we ignore the evidence we achieve 50.91%. Thus we believe that FEVER is a challenging testbed that will help stimulate progress on claim verification against textual sources.", "title": "" }, { "docid": "44ff2f64ddfb20009c8b8356bc63e1c0", "text": "This work proposes multiclass deep learning classification of Alzheimer's disease (AD) using novel texture and other associated features extracted from structural MRI. Two distinct learning models (Model 1 and 2) are presented where both include subcortical area specific feature extraction, feature selection and stacked auto-encoder (SAE) deep neural network (DNN). The models learn highly complex and subtle differences in spatial atrophy patterns using white matter volumes, gray matter volumes, cortical surface area, cortical thickness, and different types of Fractal Brownian Motion co-occurrence matrices for texture as features to classify AD from cognitive normal (CN) and mild cognitive impairment (MCI) in dementia patients. A five layer SAE with state-of-the-art dropout learning is trained on a publicly available ADNI dataset and the model performances are evaluated at two levels: one using in-house tenfold cross validation and another using the publicly available CADDementia competition. The in-house evaluations of our two models achieve 56.6% and 58.0% tenfold cross validation accuracies using 504 ADNI subjects. For the public domain evaluation, we are the first to report DNN to CADDementia and our methods yield competitive classification accuracies of 51.4% and 56.8%. Further, both of our proposed models offer higher True Positive Fraction (TPF) for AD class when compared to the top-overall ranked algorithm while Model 1 also ties for top diseased class sensitivity at 58.2% in the CADDementia challenge. Finally, Model 2 achieves strong disease class sensitivity with improvement in specificity and overall accuracy. Our algorithms have the potential to provide a rapid, objective, and non-invasive assessment of AD.", "title": "" }, { "docid": "f7ce06365e2c74ccbf8dcc04277cfb9d", "text": "In this paper, we propose an enhanced method for detecting light blobs (LBs) for intelligent headlight control (IHC). The main function of the IHC system is to automatically convert high-beam headlights to low beam when vehicles are found in the vicinity. Thus, to implement the IHC, it is necessary to detect preceding or oncoming vehicles. Generally, this process of detecting vehicles is done by detecting LBs in the images. Previous works regarding LB detection can largely be categorized into two approaches by the image type they use: low-exposure (LE) images or autoexposure (AE) images. While they each have their own strengths and weaknesses, the proposed method combines them by integrating the use of the partial region of the AE image confined by the lane detection information and the LE image. Consequently, the proposed method detects headlights at various distances and taillights at close distances using LE images while handling taillights at distant locations by exploiting the confined AE images. 
This approach enhances the performance of detecting the distant LBs while maintaining low false detections.", "title": "" }, { "docid": "4e122b71c30c6c0721d5065adcf0b52c", "text": "License plate recognition usually contains three steps, namely license plate detection/localization, character segmentation and character recognition. When reading characters on a license plate one by one after license plate detection step, it is crucial to accurately segment the characters. The segmentation step may be affected by many factors such as license plate boundaries (frames). The recognition accuracy will be significantly reduced if the characters are not properly segmented. This paper presents an efficient algorithm for character segmentation on a license plate. The algorithm follows the step that detects the license plates using an AdaBoost algorithm. It is based on an efficient and accurate skew and slant correction of license plates, and works together with boundary (frame) removal of license plates. The algorithm is efficient and can be applied in real-time applications. The experiments are performed to show the accuracy of segmentation.", "title": "" }, { "docid": "ef8be5104f9bc4a0f4353ed236b6afb8", "text": "State-of-the-art human pose estimation methods are based on heat map representation. In spite of the good performance, the representation has a few issues in nature, such as non-differentiable postprocessing and quantization error. This work shows that a simple integral operation relates and unifies the heat map representation and joint regression, thus avoiding the above issues. It is differentiable, efficient, and compatible with any heat map based methods. Its effectiveness is convincingly validated via comprehensive ablation experiments under various settings, specifically on 3D pose estimation, for the first time.", "title": "" }, { "docid": "244a517d3a1c456a602ecc01fb99a78f", "text": "Most literature on time series classification assumes that the beginning and ending points of the pattern of interest can be correctly identified, both during the training phase and later deployment. In this work, we argue that this assumption is unjustified, and this has in many cases led to unwarranted optimism about the performance of the proposed algorithms. As we shall show, the task of correctly extracting individual gait cycles, heartbeats, gestures, behaviors, etc., is generally much more difficult than the task of actually classifying those patterns. We propose to mitigate this problem by introducing an alignment-free time series classification framework. The framework requires only very weakly annotated data, such as “in this ten minutes of data, we see mostly normal heartbeats...,” and by generalizing the classic machine learning idea of data editing to streaming/continuous data, allows us to build robust, fast and accurate classifiers. We demonstrate on several diverse real-world problems that beyond removing unwarranted assumptions and requiring essentially no human intervention, our framework is both significantly faster and significantly more accurate than current state-of-the-art approaches.", "title": "" }, { "docid": "41c99f4746fc299ae886b6274f899c4b", "text": "The disruptive power of blockchain technologies represents a great opportunity to re-imagine standard practices of providing radio access services by addressing critical areas such as deployment models that can benefit from brand new approaches. 
As a starting point for this debate, we look at the current limits of infrastructure sharing, and specifically at the Small-Cell-as-a-Service trend, asking ourselves how we could push it to its natural extreme: a scenario in which any individual home or business user can become a service provider for mobile network operators (MNOs), freed from all the scalability and legal constraints that are inherent to the current modus operandi. We propose the adoption of smart contracts to implement simple but effective Service Level Agreements (SLAs) between small cell providers and MNOs, and present an example contract template based on the Ethereum blockchain.", "title": "" } ]
scidocsrr
95c633764ae1f9fac53726052e41e32e
Excess Volatility of Corporate Bonds
[ { "docid": "7bd5c80bef79c689047cf6b177e1ed22", "text": "We examine the default probabilities predicted by “structural” models of risky corporate debt. Two types of models are examined: those with “exogenous” default boundaries, typified by Longstaff and Schwartz (1995); and those with “endogenous” default boundaries, typified by Leland and Toft (1996). We focus on default probabilities rather than credit spreads because (i) they are not affected by additional factors such as liquidity, tax differences, and recovery rates; and (ii) prediction of the relative likelihood of default is often stated as the objective of bond ratings. We examine the ability of these models to capture the actual average default frequencies across bonds with different ratings reported in Moody’s (2001) corporate bond default data, 1970-2000.", "title": "" } ]
[ { "docid": "2f2cab35a8cf44c4564c0e26e0490f29", "text": "In this paper, we propose a synthetic generationmethod for time-series data based on generative adversarial networks (GANs) and apply it to data augmentation for biosinal classification. GANs are a recently proposed framework for learning a generative model, where two neural networks, one generating synthetic data and the other discriminating synthetic and real data, are trained while competing with each other. In the proposed method, each neural network in GANs is developed based on a recurrent neural network using long short-term memories, thereby allowing the adaptation of the GANs framework to time-series data generation. In the experiments, we confirmed the capability of the proposed method for generating synthetic biosignals using the electrocardiogram and electroencephalogram datasets. We also showed the effectiveness of the proposed method for data augmentation in the biosignal classification problem.", "title": "" }, { "docid": "6d728174d576ac785ff093f4cdc16e1b", "text": "The stress-inducible protein heme oxygenase-1 provides protection against oxidative stress. The anti-inflammatory properties of heme oxygenase-1 may serve as a basis for this cytoprotection. We demonstrate here that carbon monoxide, a by-product of heme catabolism by heme oxygenase, mediates potent anti-inflammatory effects. Both in vivo and in vitro, carbon monoxide at low concentrations differentially and selectively inhibited the expression of lipopolysaccharide-induced pro-inflammatory cytokines tumor necrosis factor-α, interleukin-1β, and macrophage inflammatory protein-1β and increased the lipopolysaccharide-induced expression of the anti-inflammatory cytokine interleukin-10. Carbon monoxide mediated these anti-inflammatory effects not through a guanylyl cyclase–cGMP or nitric oxide pathway, but instead through a pathway involving the mitogen-activated protein kinases. These data indicate the possibility that carbon monoxide may have an important protective function in inflammatory disease states and thus has potential therapeutic uses.", "title": "" }, { "docid": "30e9afa44756fa1b050945e9f3e1863e", "text": "A 8-year-old Chinese boy with generalized pustular psoriasis (GPP) refractory to cyclosporine and methylprednisolone was treated successfully with two infusions of infliximab 3.3 mg/kg. He remained in remission for 21 months. Direct sequencing of IL36RN gene showed a homozygous mutation, c.115 + 6T>C. Juvenile GPP is a rare severe form of psoriasis occasionally associated with life-threatening complications. Like acitretin, cyclosporine and methotrexate, infliximab has been reported to be effective for juvenile GPP in case reports. However, there is a lack of data in the optimal treatment course of infliximab for juvenile GPP. Prolonged administration of these medications may cause toxic or fatal complications. We suggest that short-term infliximab regimen should be recommended as a choice for acute juvenile GPP refractory to traditional systemic therapies. WBC count and CRP are sensitive parameters to reflect the disease activity and evaluate the effectiveness of treatment. Monitoring CD4 T lymphocyte count, preventing and correcting CD4 lymphocytopenia are important in the treatment course of juvenile GPP.", "title": "" }, { "docid": "f01e41cda3fc8dc0385a1d376cd887ce", "text": "This paper reports a planar induction motor that can output 70 N translational thrust and 9 Nm torque with a response time of 10 ms. 
The motor consists of three linear induction armatures with vector control drivers and three optical mouse sensors. First, an idea to combine multiple linear induction elements is proposed. The power distribution to each element is derived from the position and orientation of that element. A discussion of the developed system and its measured characteristics follows. The experimental results highlight the potential of its direct drive features.", "title": "" }, { "docid": "8f799fc7625b593694c8b3d85216d27b", "text": "With the integration of deep learning into the traditional field of reinforcement learning in recent decades, the spectrum of applications that artificial intelligence caters to is currently very broad. As using AI to play games is a traditional application of reinforcement learning, the project’s objective is to implement a deep reinforcement learning agent that can defeat a video game. Since it is often difficult to determine which algorithms are appropriate given the wide selection of state-of-the-art techniques in the discipline, proper comparisons and investigations of the algorithms are a prerequisite to implementing such an agent. As a result, this paper serves as a platform for exploring whether conventional state-of-the-art methods, such as Deep Q Networks and variants such as Double Deep Q Networks, are appropriate for game playing. With Deep Q Networks successful in playing a randomized map, further work in this project is needed in order to form a comprehensive view of the discipline. Such work in the near future includes the investigation of the use of deep reinforcement learning on games unreported in the literature, or potential improvements to existing deep reinforcement learning techniques. In spite of the technical difficulties encountered and minor amendments to the project schedule, the project is still currently on schedule, i.e. approximately 50% complete.", "title": "" }, { "docid": "c04cf54a40cd84961657bf50153ff68b", "text": "Neural IR models, such as DRMM and PACRR, have achieved strong results by successfully capturing relevance matching signals. We argue that the context of these matching signals is also important. Intuitively, when extracting, modeling, and combining matching signals, one would like to consider the surrounding text (local context) as well as other signals from the same document that can contribute to the overall relevance score. In this work, we highlight three potential shortcomings caused by not considering context information and propose three neural ingredients to address them: a disambiguation component, cascade k-max pooling, and a shuffling combination layer. Incorporating these components into the PACRR model yields Co-PACRR, a novel context-aware neural IR model. Extensive comparisons with established models on TREC Web Track data confirm that the proposed model can achieve superior search results. In addition, an ablation analysis is conducted to gain insights into the impact of and interactions between different components. We release our code to enable future comparisons.", "title": "" }, { "docid": "6e5e6b361d113fa68b2ca152fbf5b194", "text": "Spectral learning algorithms have recently become popular in data-rich domains, driven in part by recent advances in large scale randomized SVD, and in spectral estimation of Hidden Markov Models.
Extensions of these methods lead to statistical estimation algorithms which are not only fast, scalable, and useful on real data sets, but are also provably correct. Following this line of research, we propose four fast and scalable spectral algorithms for learning word embeddings – low dimensional real vectors (called Eigenwords) that capture the “meaning” of words from their context. All the proposed algorithms harness the multi-view nature of text data, i.e. the left and right context of each word, are fast to train and have strong theoretical properties. Some of the variants also have lower sample complexity and hence higher statistical power for rare words. We provide theory which establishes relationships between these algorithms and optimality criteria for the estimates they provide. We also perform a thorough qualitative and quantitative evaluation of Eigenwords, showing that simple linear approaches give performance comparable or superior to the state-of-the-art non-linear deep learning based methods.", "title": "" }, { "docid": "6a993cdfbb701b43bb1cf287380e5b2e", "text": "There is a growing need for real-time human pose estimation from monocular RGB images in applications such as human computer interaction, assisted living, video surveillance, people tracking, activity recognition and motion capture. For the task, depth sensors and multi-camera systems are usually more expensive and difficult to set up than conventional RGB video cameras. Recent advances in convolutional neural network research have made it possible to replace traditional methods with more efficient convolutional neural network based methods in many computer vision tasks. This thesis presents a method for real-time multi-person human pose estimation from video by utilizing convolutional neural networks. The method is aimed at use case specific applications, where good accuracy is essential and variation of the background and poses is limited. This makes it possible to use a generic network architecture, which is both accurate and fast. The problem is divided into two phases: (1) pretraining and (2) fine-tuning. In pretraining, the network is trained with highly diverse input data from publicly available datasets, while in fine-tuning it is trained with application specific data recorded with Kinect. The method considers the whole system, including the person detector, the pose estimator and an automatic way to record application specific training material for fine-tuning. The method can also be thought of as a replacement for Kinect, and it can be used for higher level tasks such as gesture control, games, person tracking and action recognition.", "title": "" }, { "docid": "c49ed75ce48fb92db6e80e4fe8af7127", "text": "The One Class Classification (OCC) problem is different from the conventional binary/multi-class classification problem in the sense that in OCC, the negative class is either not present or not properly sampled. The problem of classifying positive (or target) cases in the absence of appropriately-characterized negative cases (or outliers) has gained increasing attention in recent years. Researchers have addressed the task of OCC by using different methodologies in a variety of application domains. In this paper we formulate a taxonomy with three main categories based on the way OCC has been envisaged, implemented and applied by various researchers in different application domains.
We also present a survey of current state-of-the-art OCC algorithms, their importance, applications and limitations.", "title": "" }, { "docid": "043b4305f9f3c239b0f2061b8afa0648", "text": "Proliferation of information is a major challenge faced by the e-commerce industry. To help customers cope with this information proliferation, Recommender Systems (RS) were introduced. To improve the computational time of an RS for large scale data, the process of recommendation can be implemented on a scalable, fault-tolerant and distributed processing framework. This paper proposes a Content-Based RS implemented on the scalable, fault-tolerant and distributed framework of Hadoop Map Reduce. To generate recommendations with improved computational time, the proposed technique of Map Reduce Content-Based Recommendation (MRCBR) is implemented using Hadoop Map Reduce and follows the traditional process of content-based recommendation. The MRCBR technique comprises user profiling and document feature extraction, which use the vector space model, followed by computing similarity to generate recommendations for the target user. Recommendations generated for the target user are a set of Top N documents. The proposed technique of recommendation is executed on a Hadoop cluster and is tested on a news dataset. News items are collected using RSS feeds and are stored in MongoDB. The computational time of MRCBR is evaluated with a speedup factor, and performance is evaluated with the standard evaluation metrics of Precision, Recall and F-Measure.", "title": "" }, { "docid": "fdf1004b2acefd083b4e41b27a65bc02", "text": "Assuring the safety of autonomous vehicles operating in an open environment requires reliable situation awareness, action planning and prediction of the actions of other vehicles and objects. Factors that also have to be considered are the certainty and completeness of available information and trust in information sources and other entities. The paper discusses the problem of autonomous vehicle safety assurance and proposes dynamic situation assessment to cope with the problem of environment dynamics and incomplete and uncertain situation knowledge. The approach is presented for a simple example of a simulated autonomous vehicle. The situation awareness model and autonomous vehicle control system architecture are presented. The problems of justifying system safety are discussed.", "title": "" }, { "docid": "eaefba9984e024ba62f99b875f3194ad", "text": "Image restoration algorithms are typically evaluated by some distortion measure (e.g. PSNR, SSIM, IFC, VIF) or by human opinion scores that quantify perceived perceptual quality. In this paper, we prove mathematically that distortion and perceptual quality are at odds with each other. Specifically, we study the optimal probability for correctly discriminating the outputs of an image restoration algorithm from real images. We show that as the mean distortion decreases, this probability must increase (indicating worse perceptual quality). As opposed to the common belief, this result holds true for any distortion measure, and is not only a problem of the PSNR or SSIM criteria. However, as we show experimentally, for some measures it is less severe (e.g. distance between VGG features). We also show that generative-adversarial-nets (GANs) provide a principled way to approach the perception-distortion bound. This constitutes theoretical support for their observed success in low-level vision tasks.
Based on our analysis, we propose a new methodology for evaluating image restoration methods, and use it to perform an extensive comparison between recent super-resolution algorithms.", "title": "" }, { "docid": "eead063c20e32f53ec8a5e81dbac951c", "text": "We are currently experiencing the fourth Industrial Revolution in terms of cyber physical systems. These systems are industrial automation systems that enable many innovative functionalities through their networking and their access to the cyber world, thus changing our everyday lives significantly. In this context, new business models, work processes and development methods that are currently unimaginable will arise. These changes will also strongly influence the society and people. Family life, globalization, markets, etc. will have to be redefined. However, the Industry 4.0 simultaneously shows characteristics that represent the challenges regarding the development of cyber-physical systems, reliability, security and data protection. Following a brief introduction to Industry 4.0, this paper presents a prototypical application that demonstrates the essential aspects.", "title": "" }, { "docid": "29d6c5dc42ec12320a75c3fe2f48d305", "text": "Many large software systems originate from untyped scripting language code. While good for initial development, the lack of static type annotations can impact code-quality and performance in the long run. We present an approach for integrating untyped code and typed code in the same system to allow an initial prototype to smoothly evolve into an efficient and robust program. We introduce like types , a novel intermediate point between dynamic and static typing. Occurrences of like types variables are checked statically within their scope but, as they may be bound to dynamic values, their usage is checked dynamically. Thus like types provide some of the benefits of static typing without decreasing the expressiveness of the language. We provide a formal account of like types in a core object calculus and evaluate their applicability in the context of a new scripting language.", "title": "" }, { "docid": "499d11cefeb1b086f4749310de71385f", "text": "Non-volatile RAM (NVRAM) will fundamentally change in-memory databases as data structures do not have to be explicitly backed up to hard drives or SSDs, but can be inherently persistent in main memory. To guarantee consistency even in the case of power failures, programmers need to ensure that data is flushed from volatile CPU caches where it would be susceptible to power outages to NVRAM.\n In this paper, we present the NVC-Hashmap, a lock-free hashmap that is used for unordered dictionaries and delta indices in in-memory databases. The NVC-Hashmap is then evaluated in both stand-alone and integrated database benchmarks and compared to a B+-Tree based persistent data structure.", "title": "" }, { "docid": "abba5d320a4b6bf2a90ba2b836019660", "text": "We aim at segmenting small organs (e.g., the pancreas) from abdominal CT scans. As the target often occupies a relatively small region in the input image, deep neural networks can be easily confused by the complex and variable background. To alleviate this, researchers proposed a coarse-to-fine approach [46], which used prediction from the first (coarse) stage to indicate a smaller input region for the second (fine) stage. Despite its effectiveness, this algorithm dealt with two stages individually, which lacked optimizing a global energy function, and limited its ability to incorporate multi-stage visual cues. 
Missing contextual information led to unsatisfying convergence in iterations, and that the fine stage sometimes produced even lower segmentation accuracy than the coarse stage. This paper presents a Recurrent Saliency Transformation Network. The key innovation is a saliency transformation module, which repeatedly converts the segmentation probability map from the previous iteration as spatial weights and applies these weights to the current iteration. This brings us two-fold benefits. In training, it allows joint optimization over the deep networks dealing with different input scales. In testing, it propagates multi-stage visual information throughout iterations to improve segmentation accuracy. Experiments in the NIH pancreas segmentation dataset demonstrate the state-of-the-art accuracy, which outperforms the previous best by an average of over 2%. Much higher accuracies are also reported on several small organs in a larger dataset collected by ourselves. In addition, our approach enjoys better convergence properties, making it more efficient and reliable in practice.", "title": "" }, { "docid": "b7957cc83988e0be2da64f6d9837419c", "text": "Description: A revision of the #1 text in the Human Computer Interaction field, Interaction Design, the third edition is an ideal resource for learning the interdisciplinary skills needed for interaction design, human-computer interaction, information design, web design and ubiquitous computing. The authors are acknowledged leaders and educators in their field, with a strong global reputation. They bring depth of scope to the subject in this new edition, encompassing the latest technologies and devices including social networking, Web 2.0 and mobile devices. The third edition also adds, develops and updates cases, examples and questions to bring the book in line with the latest in Human Computer Interaction. Interaction Design offers a cross-disciplinary, practical and process-oriented approach to Human Computer Interaction, showing not just what principles ought to apply to Interaction Design, but crucially how they can be applied. The book focuses on how to design interactive products that enhance and extend the way people communicate, interact and work. Motivating examples are included to illustrate both technical, but also social and ethical issues, making the book approachable and adaptable for both Computer Science and non-Computer Science users. Interviews with key HCI luminaries are included and provide an insight into current and future trends.", "title": "" }, { "docid": "3860b804b7a84ee5eb5bc06c40647b16", "text": "The publicly available Pima Indian diabetic database (PIDD) at the UCIrvine Machine Learning Lab has become a standard for testing data mining algorithms to see their accuracy in predicting diabetic status from the 8 variables given. Looking at the 392 complete cases, guessing all are non-diabetic gives an accuracy of 65.1%. Since 1988, many dozens of publications using various algorithms have resulted in accuracy rates of 66% to 81%. Rough sets as a data mining predictive tool has been used in medical areas since the late 1980s, but not applied to the PIDD to our knowledge. When we apply rough sets to PIDD using ROSETTA software, there are many different options within the software to choose from. The predictive accuracy was 73.8% with a 95% CI of (71.3%, 76.3%) with one of the methods we used. 
Rough sets are a useful addition to the analysis of diabetic databases.", "title": "" }, { "docid": "30f12cbec518ef3b58f8d19d94780169", "text": "AMNESIA is a tool that detects and prevents SQL injection attacks by combining static analysis and runtime monitoring. Empirical evaluation has shown that AMNESIA is both effective and efficient against SQL injection.", "title": "" }, { "docid": "dc867c305130e728aaaa00fef5b8b688", "text": "Large scale surveillance video analysis is one of the most important components in the future artificial intelligent city. It is a very challenging but practical system, consisting of multiple functionalities such as object detection, tracking, identification and behavior analysis. In this paper, we try to address three tasks hosted in the NVIDIA AI City Challenge contest. First, a system that transforms image coordinates to world coordinates has been proposed, which is useful for estimating vehicle speed on the road. Second, anomalies such as car crash events and stalled vehicles can be found by the proposed anomaly detector framework. Third, the multiple camera vehicle re-identification problem has been investigated and a matching algorithm is explained. All these tasks are based on our proposed online single camera multiple object tracking (MOT) system, which has been evaluated on the widely used MOT16 challenge benchmark. We show that it achieves the best performance compared to the state-of-the-art methods. Besides MOT, we evaluate the proposed vehicle re-identification model on the VeRi-776 dataset and it outperforms all other methods by a large margin.", "title": "" } ]
scidocsrr
7cfdb5292c5bfde54e68a749c3296f98
AC-Powered Pulse Generator
[ { "docid": "eabb9e04ff7609bf6754431b9ce6718f", "text": "Electric phenomena play an important role in biophysics. Bioelectric processes control the ion transport processes across membranes, and are the basis for information transfer along neurons. These electrical effects are generally triggered by chemical processes. However, it is also possible to control such cell functions and transport processes by applying pulsed electric fields. This area of bioengineering, bioelectrics, offers new applications for pulsed power technology. One such application is prevention of biofouling, an effect that is based on reversible electroporation of cell membranes. Pulsed electric fields of several kV/cm amplitude and submicrosecond duration have been found effective in preventing the growth of aquatic nuisance species on surfaces. Reversible electroporation is also used for medical applications, e.g. for delivery of chemotherapeutic drugs into tumor cells, for gene therapy, and for transdermal drug delivery. Higher electric fields cause irreversible membrane damage. Pulses in the microsecond range with electric field intensities in the tens of kV/cm are being used for bacterial decontamination of water and liquid food. A new type of field-cell interaction, \"Intracellular Electromanipulation\", by means of nanosecond pulses at electric fields exceeding 50 kV/cm has been recently added to known bioelectric effects. It is based on capacitive coupling to cell substructures, has therefore the potential to affect transport processes across subcellular membranes, and may be used for gene transfer into cell nuclei. There are also indications that it triggers intracellular processes, such as programmed cell death, an effect, which can be used for cancer treatment. In order to generate the required electric fields for these processes, high voltage, high current sources are required. The pulse duration needs to be short to prevent thermal effects. Pulse power technology is the enabling technology for bioelectrics. The field of bioelectrics, therefore opens up a new research area for pulse power engineers, with fascinating applications in biology and medicine.", "title": "" } ]
[ { "docid": "39e550b269a66f31d467269c6389cde0", "text": "The artificial intelligence community has seen a recent resurgence in the area of neural network study. Inspired by the workings of the brain and nervous system, neural networks have solved some persistent problems in vision and speech processing. However, the new systems may offer an alternative approach to decision-making via high level pattern recognition. This paper will describe the distinguishing features of neurally inspired systems, and present popular systems in a discrete-time, algorithmic framework. Examples of applications to decision problems will appear, and guidelines for their use in operations research will be established.", "title": "" }, { "docid": "1cb2d77cbe4c164e0a9a9481cd268d01", "text": "Visual analytics (VA) system development started in academic research institutions where novel visualization techniques and open source toolkits were developed. Simultaneously, small software companies, sometimes spin-offs from academic research institutions, built solutions for specific application domains. In recent years we observed the following trend: some small VA companies grew exponentially; at the same time some big software vendors such as IBM and SAP started to acquire successful VA companies and integrated the acquired VA components into their existing frameworks. Generally the application domains of VA systems have broadened substantially. This phenomenon is driven by the generation of more and more data of high volume and complexity, which leads to an increasing demand for VA solutions from many application domains. In this paper we survey a selection of state-of-the-art commercial VA frameworks, complementary to an existing survey on open source VA tools. From the survey results we identify several improvement opportunities as future research directions.", "title": "" }, { "docid": "e2ba4f88f4b1a8afcf51882bc7cfa634", "text": "The embodied and situated approach to artificial intelligence (AI) has matured and become a viable alternative to traditional computationalist approaches with respect to the practical goal of building artificial agents, which can behave in a robust and flexible manner under changing real-world conditions. Nevertheless, some concerns have recently been raised with regard to the sufficiency of current embodied AI for advancing our scientific understanding of intentional agency. While from an engineering or computer science perspective this limitation might not be relevant, it is of course highly relevant for AI researchers striving to build accurate models of natural cognition. We argue that the biological foundations of enactive cognitive science can provide the conceptual tools that are needed to diagnose more clearly the shortcomings of current embodied AI. In particular, taking an enactive perspective points to the need for AI to take seriously the organismic roots of autonomous agency and sense-making. We identify two necessary systemic requirements, namely constitutive autonomy and adaptivity, which lead us to introduce two design principles of enactive AI. It is argued that the development of such enactive AI poses a significant challenge to current methodologies. However, it also provides a promising way of eventually overcoming the current limitations of embodied AI, especially in terms of providing fuller models of natural embodied cognition. 
Finally, some practical implications and examples of the two design principles of enactive AI are also discussed.", "title": "" }, { "docid": "c1e1d4bf69a9a3de470aa8d7574b5fb5", "text": "An agent that can see everyday scenes and fluently communicate with people is one of the ambitious goals of artificial intelligence. To achieve that, it is crucial to exploit visually-grounded information and capture subtle nuances from human conversation. To this end, Visual Dialog (VisDial) task has been introduced. In this paper, we propose a new model for visual dialog. Our model employs Bilinear Attention Network (BAN) and Embeddings from Language Models (ELMo) to exploit visually-grounded information and context of dialogs, respectively. Our proposed model outperforms previous state-of-the-art on VisDial v1.0 dataset by a significant margin (5.33% on recall @10)", "title": "" }, { "docid": "97ed18e26a80a2ae078f78c70becfe8c", "text": "A fully-integrated 18.5 kHz RC time-constant-based oscillator is designed in 65 nm CMOS for sleep-mode timers in wireless sensors. A comparator offset cancellation scheme achieves 4× to 25× temperature stability improvement, leading to an accuracy of ±0.18% to ±0.55% over -40 to 90 °C. Sub-threshold operation and low-swing oscillations result in ultra-low power consumption of 130 nW. The architecture also provides timing noise suppression, leading to 10× reduction in long-term Allan deviation. It is measured to have a stability of 20 ppm or better for measurement intervals over 0.5 s. The oscillator also has a fast startup-time, with the period settling in 4 cycles.", "title": "" }, { "docid": "ca7afb87dae38ee0cf079f91dbd91d43", "text": "Diet is associated with the development of CHD. The incidence of CHD is lower in southern European countries than in northern European countries and it has been proposed that this difference may be a result of diet. The traditional Mediterranean diet emphasises a high intake of fruits, vegetables, bread, other forms of cereals, potatoes, beans, nuts and seeds. It includes olive oil as a major fat source and dairy products, fish and poultry are consumed in low to moderate amounts. Many observational studies have shown that the Mediterranean diet is associated with reduced risk of CHD, and this result has been confirmed by meta-analysis, while a single randomised controlled trial, the Lyon Diet Heart study, has shown a reduction in CHD risk in subjects following the Mediterranean diet in the secondary prevention setting. However, it is uncertain whether the benefits of the Mediterranean diet are transferable to other non-Mediterranean populations and whether the effects of the Mediterranean diet will still be feasible in light of the changes in pharmacological therapy seen in patients with CHD since the Lyon Diet Heart study was conducted. Further randomised controlled trials are required and if the risk-reducing effect is confirmed then the best methods to effectively deliver this public health message worldwide need to be considered.", "title": "" }, { "docid": "a1bd6742011302d35527cdbad73a82a3", "text": "The Semantic Web contains an enormous amount of information in the form of knowledge bases (KB). To make this information available, many question answering (QA) systems over KBs were created in the last years. Building a QA system over KBs is difficult because there are many different challenges to be solved. 
In order to address these challenges, QA systems generally combine techniques from natural language processing, information retrieval, machine learning and Semantic Web. The aim of this survey is to give an overview of the techniques used in current QA systems over KBs. We present the techniques used by the QA systems which were evaluated on a popular series of benchmarks: Question Answering over Linked Data. Techniques that solve the same task are first grouped together and then described. The advantages and disadvantages are discussed for each technique. This allows a direct comparison of similar techniques. Additionally, we point to techniques that are used over WebQuestions and SimpleQuestions, which are two other popular benchmarks for QA systems.", "title": "" }, { "docid": "ec81d912a8509bb9e7317d2fba4dff57", "text": "Dedicated Short Range Communication is attracting a lot of interest these days due to its utility in vehicular safety applications, intelligent transportation system and infotainment applications. Such vehicular networks are characterized by the highly dynamic changes in topology, no significant power constraints and ephemeral links. Considering an interaction between the client and server nodes that last for a random duration of time, an important question is to maximize the amount of useful content downloaded by the client, either in a single request phase, or iteratively in multiple phases. The aim of this work is to propose and investigate a multiphase request model using Markov Decision Process and compare its efficiency against a single phase version. We show that a multiphase request protocol performs better than single phase protocol.", "title": "" }, { "docid": "4d1be9aebf7534cce625b95bde4696c6", "text": "BlockChain (BC) has attracted tremendous attention due to its immutable nature and the associated security and privacy benefits. BC has the potential to overcome security and privacy challenges of Internet of Things (IoT). However, BC is computationally expensive, has limited scalability and incurs significant bandwidth overheads and delays which are not suited to the IoT context. We propose a tiered Lightweight Scalable BC (LSB) that is optimized for IoT requirements. We explore LSB in a smart home setting as a representative example for broader IoT applications. Low resource devices in a smart home benefit from a centralized manager that establishes shared keys for communication and processes all incoming and outgoing requests. LSB achieves decentralization by forming an overlay network where high resource devices jointly manage a public BC that ensures end-to-end privacy and security. The overlay is organized as distinct clusters to reduce overheads and the cluster heads are responsible for managing the public BC. LSB incorporates several optimizations which include algorithms for lightweight consensus, distributed trust and throughput management. Qualitative arguments demonstrate that LSB is resilient to several security attacks. Extensive simulations show that LSB decreases packet overhead and delay and increases BC scalability compared to relevant baselines.", "title": "" }, { "docid": "b3a2be2d02946449ef32546d097220f1", "text": "A half-rate bang-bang phase and frequency detector (BBPFD) is presented for continuous-rate clock and data recovery (CDR) circuits. The proposed half-rate BBPFD not only preserves the advantages of conventional BBPDs, but also has the infinite unilateral frequency detection range. 
To verify the proposed circuit, a continuous-rate CDR circuit with the proposed BBPFD has been fabricated in a 0.18um CMOS process. It can recover the NRZ data with the bit rate ranging from 622 Mbps to 3.125 Gbps. The measured bit-error rate is less than 10⁻¹². The core area is 0.33 x 0.27 mm² and the power consumption is 80 mW from a 1.8 V supply.", "title": "" }, { "docid": "ba92025b0930fa0182053f3d51fe131b", "text": "In this paper we present two path planning algorithms based on Bézier curves for autonomous vehicles with waypoints and corridor constraints. Bézier curves have useful properties for the path generation problem. The paper describes how the algorithms apply these properties to generate the reference trajectory for vehicles to satisfy the path constraints. Both algorithms join cubic Bézier curve segments smoothly to generate the path. Additionally, we discuss the constrained optimization problem that optimizes the resulting path for a user-defined cost function. The simulation shows the generation of successful routes for autonomous vehicles using these algorithms as well as control results for a simple kinematic vehicle. Extensions of these algorithms towards navigating through an unstructured environment with limited sensor range are discussed.", "title": "" }, { "docid": "6465cc1429ecbf208d0478392cad1b16", "text": "The ultimate objective in planning is to construct plans for execution. However, when a plan is executed in a real environment it can encounter differences between the expected and actual context of execution. These differences can manifest as divergences between the expected and observed states of the world, or as a change in the goals to be achieved by the plan. In both cases, the old plan must be replaced with a new one. In replacing the plan an important consideration is plan stability. We compare two alternative strategies for achieving the stable repair of a plan: one is simply to replan from scratch and the other is to adapt the existing plan to the new context. We present arguments to support the claim that plan stability is a valuable property. We then propose an implementation, based on LPG, of a plan repair strategy that adapts a plan to its new context. We demonstrate empirically that our plan repair strategy achieves more stability than replanning and can produce repaired plans more efficiently than replanning.", "title": "" }, { "docid": "66155c59a4dd4cad52f642625af91a4a", "text": "In most e-commerce platforms, product title classification is a crucial task. It can assist sellers in listing an item in an appropriate category. At first glance, product title classification is merely an instance of text classification problems, which are well-studied in the literature. However, product titles possess some properties very different from general documents. A title is usually a very short description, and an incomplete sentence. A product title classifier may need to be designed differently from a text classifier, although this issue has not been thoroughly studied. In this work, using a large-scale real-world data set, we examine conventional text-classification procedures on product title data. These procedures include word stemming, stop-word removal, feature representation and multi-class classification. Our major findings include that stemming and stop-word removal are harmful, and bigrams or degree-2 polynomial mappings are very effective.
Further, if linear classifiers such as SVMs are applied, instance normalization does not downgrade the performance and binary/TF-IDF representations perform similarly. These results lead to a concrete guideline for practitioners on product title classification.", "title": "" }, { "docid": "52e492ff5e057a8268fd67eb515514fe", "text": "We present a long-range passive (battery-free) radio frequency identification (RFID) and distributed sensing system using a single wire transmission line (SWTL) as the communication channel. A SWTL exploits guided surface wave propagation along a single conductor, which can be formed from existing infrastructure, such as power lines, pipes, or steel cables. Guided propagation along a SWTL has far lower losses than a comparable over-the-air (OTA) communication link; so much longer read distances can be achieved compared with the conventional OTA RFID system. In a laboratory-scale experiment with an ISO18000–6C (EPC Gen 2) passive tag, we demonstrate an RFID system using an 8 mm diameter, 5.2 m long SWTL. This SWTL has 30 dB lower propagation loss than a standard OTA RFID system at the same read range. We further demonstrate that the SWTL can tolerate extreme temperatures far beyond the capabilities of coaxial cable, by heating an operating SWTL conductor with a propane torch having a temperature of nearly 2000 °C. Extrapolation from the measured results suggests that a SWTL-based RFID system is capable of read ranges of over 70 m assuming a reader output power of +32.5 dBm and a tag power-up threshold of −7 dBm.", "title": "" }, { "docid": "935a044fc91d98df9bb390dff5b38520", "text": "Recent work classifying citations in scientific literature has shown that it is possible to improve classification results with extensive feature engineering. While this result confirms that citation classification is feasible, there are two drawbacks to this approach: (i) it requires a large annotated corpus for supervised classification, which in the case of scientific literature is quite expensive; and (ii) feature engineering that is too specific to one area of scientific literature may not be portable to other domains, even within scientific literature. In this paper we address these two drawbacks. First, we frame citation classification as a domain adaptation task and leverage the abundant labeled data available in other domains. Then, to avoid over-engineering specific citation features for a particular scientific domain, we explore a deep learning neural network approach that has shown to generalize well across domains using unigram and bigram features. We achieve better citation classification results with this cross-domain approach than using in-domain classification.", "title": "" }, { "docid": "19b3b2e271a40e5597fd281b1e87c686", "text": "Key frame extraction has been recognized as one of the important research issues in video information retrieval. Although progress has been made in key frame extraction, the existing approaches are either computationally expensive or ineffective in capturing salient visual content. In this paper, we first discuss the importance of key frame selection; and then briefly review and evaluate the existing approaches. To overcome the shortcomings of the existing approaches, we introduce a new algorithm for key frame extraction based on unsupervised clustering. The proposed algorithm is both computationally simple and able to adapt to the visual content.
The efficiency and effectiveness are validated by a large amount of real-world videos.", "title": "" }, { "docid": "b17d89e7db1ca18fa5bcf2446f553a1b", "text": "Following the definition of developable surface in differential geometry, the flattenable mesh surface, a special type of piecewise-linear surface, inherits the good property of developable surfaces of having an isometric map from its 3D shape to a corresponding planar region. Different from the developable surfaces, a flattenable mesh surface is more flexible for modeling objects with complex shapes (e.g., cramped paper or warped leather with wrinkles). Modelling a flattenable mesh from a given input mesh surface can be completed under a constrained nonlinear optimization framework. In this paper, we reformulate the problem in terms of estimation error. Therefore, the shape of a flattenable mesh can be computed by the least-norm solutions faster. Moreover, the method for adding shape constraints to the modelling of flattenable mesh surfaces has been exploited. We show that the proposed method can compute flattenable mesh surfaces from input piecewise linear surfaces successfully and efficiently.", "title": "" }, { "docid": "f7a29c71523ee159a582ac7603226d78", "text": "Although project-based learning is a well-known and widely used instructional strategy, it remains a challenging issue to effectively apply this approach to practical settings for improving the learning performance of students. In this study, a project-based digital storytelling approach is proposed to cope with this problem. With a quasi-experiment, the proposed approach has been applied to a learning activity of a science course in an elementary school. A total of 117 Grade 5 students in an elementary school in southern Taiwan were assigned to an experimental group (N = 60) and a control group (N = 57) to compare the performance of the approach with that of conventional project-based learning. A web-based information-searching system, Meta-Analyzer, was used to enable the students to collect data on the Internet based on the questions raised by the teachers, and Microsoft’s Photo Story was used to help the experimental group develop movies for storytelling based on the collected data. Moreover, several measuring tools, including the science learning motivation scale, the problem-solving competence scale and the science achievement test, were used to collect feedback as well as evaluate the learning performance of the students. The experimental results show that the project-based learning with digital storytelling could effectively enhance the students’ science learning motivation, problem-solving competence, and learning achievement.", "title": "" }, { "docid": "2cae596882b22980080527c19b6fd361", "text": "Peace is commonly considered as 'absence of war'. Nevertheless, peace – the so called 'positive peace' (Galtung, 1996) – implies a lot more than this. It implies the creation of a society based on social justice through equal opportunity, a fair distribution of power and resources, equal protection and impartial enforcement of law, and above all, mutual cultural understanding and respect. Thus, far from the pessimistic view of classical realists, who assume that conflict is an intrinsic part of human nature, we claim that peace is an architecture requiring firm, specific foundations such as a widespread education to peace and the promotion of intercultural dialogue. In this context, social scientists speculate on the causal relation between tourism and peace.
In the present article we deepen this topic to propose more concrete arguments about the existence of a relationship between tourism and the construction of a positive peace. We thus offer a pioneering approach by proposing an association between international tourism and the practices of cultural diplomacy. We also analyse one of the most important conditions for this alliance to be created, that is, the implementation of cultural heritage management policies based on public participation and, at the same time, the promotion of intercultural dialogue (paideia approach to cultural heritage management). In this sense, we finally propose a definition for 'cultural heritage quality management'.", "title": "" }, { "docid": "bd94b129fdb45adf5d31f2b59cf66867", "text": "Systems based on Brain Computer Interface (BCI) have been developed over the past three decades for assisting locked-in state patients. Researchers across the globe are developing new techniques to increase the BCI accuracy. In 1924 Dr. Hans Berger recorded the first EEG signal. A number of experimental measurements of brain activity have been done using human control commands. The main function of BCI is to convert and transmit human intentions into appropriate motion commands for wheelchairs, robots, devices, and so forth. BCI allows improving the quality of life of disabled patients and letting them interact with their environment. Since the BCI signals are non-stationary, the main challenges in the non-invasive BCI system are to accurately detect and classify the signals. This paper reviews the state of the art of BCI and the techniques used for feature extraction and classification using the electroencephalogram (EEG), and highlights the need for the adaptation concept.", "title": "" } ]
scidocsrr
8bd229cba2fb9fd2abd062d170a332a1
Scientific Information Extraction with Semi-supervised Neural Tagging
[ { "docid": "7d2baafa1e2abb311fe9c68f4f9fe46a", "text": "In this paper, we present a conversational model that incorporates both context and participant role for two-party conversations. Different architectures are explored for integrating participant role and context information into a Long Short-term Memory (LSTM) language model. The conversational model can function as a language model or a language generation model. Experiments on the Ubuntu Dialog Corpus show that our model can capture multiple turn interaction between participants. The proposed method outperforms a traditional LSTM model as measured by language model perplexity and response ranking. Generated responses show characteristic differences between the two participant roles.", "title": "" } ]
[ { "docid": "16f93322871e61392b286a7ddba1034f", "text": "1 Objective: Although it is well-established that the ability to manage stress is a prerequisite of 2 sporting excellence, the construct of psychological resilience has yet to be systematically 3 examined in athletic performers. The study reported here sought to explore and explain the 4 relationship between psychological resilience and optimal sport performance. 5 Design and Method: Twelve Olympic champions (8 men and 4 women) from a range of sports 6 were interviewed regarding their experiences of withstanding pressure during their sporting 7 careers. A grounded theory approach was employed throughout the data collection and 8 analysis, and interview transcripts were analyzed using open, axial and selective coding. 9 Methodological rigor was established by incorporating various verification strategies into the 10 research process, and the resultant grounded theory was also judged using the quality criteria of 11 fit, work, relevance, and modifiability. 12 Results and Conclusions: Results indicate that numerous psychological factors (relating to a 13 positive personality, motivation, confidence, focus, and perceived social support) protect the 14 world’s best athletes from the potential negative effect of stressors by influencing their 15 challenge appraisal and meta-cognitions. These processes promote facilitative responses that 16 precede optimal sport performance. The emergent theory provides sport psychologists, coaches 17 and national sport organizations with an understanding of the role of resilience in athletes’ lives 18 and the attainment of optimal sport performance. 19", "title": "" }, { "docid": "135d451e66cdc8d47add47379c1c35f9", "text": "We present an approach to low-level vision that combines two main ideas: the use of convolutional networks as an image processing architecture and an unsupervised learning procedure that synthesizes training samples from specific noise models. We demonstrate this approach on the challenging problem of natural image denoising. Using a test set with a hundred natural images, we find that convolutional networks provide comparable and in some cases superior performance to state of the art wavelet and Markov random field (MRF) methods. Moreover, we find that a convolutional network offers similar performance in the blind denoising setting as compared to other techniques in the non-blind setting. We also show how convolutional networks are mathematically related to MRF approaches by presenting a mean field theory for an MRF specially designed for image denoising. Although these approaches are related, convolutional networks avoid computational difficulties in MRF approaches that arise from probabilistic learning and inference. This makes it possible to learn image processing architectures that have a high degree of representational power (we train models with over 15,000 parameters), but whose computational expense is significantly less than that associated with inference in MRF approaches with even hundreds of parameters.", "title": "" }, { "docid": "ac4e4d77acdb1f823812eecbd801d44b", "text": "Authorship attribution typically uses all information representing both content and style whereas attribution based only on stylistic aspects may be robust in cross-domain settings. This paper analyzes different linguistic aspects that may help represent style. Specifically, we study the role of syntax and lexical words (nouns, verbs, adjectives and adverbs) in representing style. 
We use a purely syntactic language model to study the significance of sentence structures in both single-domain and cross-domain attribution, i.e. cross-topic and cross-genre attribution. We show that syntax may be helpful for cross-genre attribution while cross-topic and single-domain attribution may benefit from additional lexical information. Further, pure syntactic models may not be effective by themselves and need to be used in combination with other robust models. To study the role of word choice, we perform attribution by masking all words or specific topic words corresponding to nouns, verbs, adjectives and adverbs. Using a single-domain dataset, IMDB1M reviews, we demonstrate the heavy influence of common nouns and proper nouns in attribution, thereby highlighting topic interference. Using the cross-domain Guardian10 dataset, we show that some common nouns, verbs, adjectives and adverbs may help with stylometric attribution as demonstrated by masking topic words corresponding to these parts-of-speech. As expected, it was observed that proper nouns are heavily influenced by content and cross-domain attribution will benefit from completely masking them.", "title": "" }, { "docid": "3084181a8f29e281ed3d68f8c9a67aee", "text": "Object detection with deep neural networks is often performed by passing a few thousand candidate bounding boxes through a deep neural network for each image. These bounding boxes are highly correlated since they originate from the same image. In this paper we investigate how to exploit feature occurrence at the image scale to prune the neural network which is subsequently applied to all bounding boxes. We show that removing units which have near-zero activation in the image allows us to significantly reduce the number of parameters in the network. Results on the PASCAL 2007 Object Detection Challenge demonstrate that up to 40% of units in some fully-connected layers can be entirely eliminated with little change in the detection result.", "title": "" }, { "docid": "8d8723d0c1b6e23109ec59e6cc6ffeff", "text": " Employees often have ideas, information, and opinions for constructive ways to improve work and work organizations. Sometimes these employees exercise voice and express their ideas, information, and opinions; and other times they engage in silence and withhold their ideas, information, and opinions. On the surface, expressing and withholding behaviours might appear to be polar opposites because silence implies not speaking while voice implies speaking up on important issues and problems in organizations. Challenging this simplistic notion, this paper presents a conceptual framework suggesting that employee silence and voice are best conceptualized as separate, multidimensional constructs. Based on employee motives, we differentiate three types of silence (Acquiescent Silence, Defensive Silence, and ProSocial Silence) and three parallel types of voice (Acquiescent Voice, Defensive Voice, and ProSocial Voice) where withholding important information is not simply the absence of voice. Building on this conceptual framework, we further propose that silence and voice have differential consequences for employees in work organizations.
Based on fundamental differences in the overt behavioural cues provided by silence and voice, we present a series of propositions predicting that silence is more ambiguous than voice, observers are more likely to misattribute employee motives for silence than for voice, and misattributions for motives behind silence will lead to more incongruent consequences (both positive and negative) for employees (than for voice). We conclude by discussing implications for future research and for managers. Journal of Management Studies 40:6 September 2003 0022-2380", "title": "" }, { "docid": "ffb87dc7922fd1a3d2a132c923eff57d", "text": "It has been suggested that pulmonary artery pressure at the end of ejection is close to mean pulmonary artery pressure, thus contributing to the optimization of external power from the right ventricle. We tested the hypothesis that dicrotic notch and mean pulmonary artery pressures could be of similar magnitude in 15 men (50 +/- 12 yr) referred to our laboratory for diagnostic right and left heart catheterization. Beat-to-beat relationships between dicrotic notch and mean pulmonary artery pressures were studied 1) at rest over 10 consecutive beats and 2) in 5 patients during the Valsalva maneuver (178 beats studied). At rest, there was no difference between dicrotic notch and mean pulmonary artery pressures (21.8 +/- 12.0 vs. 21.9 +/- 11.1 mmHg). There was a strong linear relationship between dicrotic notch and mean pressures 1) over the 10 consecutive beats studied in each patient (mean r = 0.93), 2) over the 150 resting beats (r = 0.99), and 3) during the Valsalva maneuver in each patient (r = 0.98-0.99) and in the overall beats (r = 0.99). The difference between dicrotic notch and mean pressures was -0.1 +/- 1.7 mmHg at rest and -1.5 +/- 2.3 mmHg during the Valsalva maneuver. Substitution of the mean pulmonary artery pressure by the dicrotic notch pressure in the standard formula of the pulmonary vascular resistance (PVR) resulted in an equation relating linearly end-systolic pressure and stroke volume. The slope of this relation had the dimension of a volume elastance (in mmHg/ml), a simple estimate of volume elastance being obtained as 1.06(PVR/T), where T is duration of the cardiac cycle. In conclusion, dicrotic notch pressure was of similar magnitude as mean pulmonary artery pressure. These results confirmed our primary hypothesis and indicated that human pulmonary artery can be treated as if it is an elastic chamber with a volume elastance of 1.06(PVR/T).", "title": "" }, { "docid": "3e8adf9643ff91ae1ed846d9fc6be72e", "text": "Durable responses and encouraging survival have been demonstrated with immune checkpoint inhibitors in small-cell lung cancer (SCLC), but predictive markers are unknown. We used whole exome sequencing to evaluate the impact of tumor mutational burden on efficacy of nivolumab monotherapy or combined with ipilimumab in patients with SCLC from the nonrandomized or randomized cohorts of CheckMate 032. Patients received nivolumab (3 mg/kg every 2 weeks) or nivolumab plus ipilimumab (1 mg/kg plus 3 mg/kg every 3 weeks for four cycles, followed by nivolumab 3 mg/kg every 2 weeks). Efficacy of nivolumab ± ipilimumab was enhanced in patients with high tumor mutational burden. 
Nivolumab plus ipilimumab appeared to provide a greater clinical benefit than nivolumab monotherapy in the high tumor mutational burden tertile.", "title": "" }, { "docid": "2fe2f83fa9a0dca9f01fd9e5e80ca515", "text": "For the first time in history, it is possible to study human behavior on a great scale and in fine detail simultaneously. Online services and ubiquitous computational devices, such as smartphones and modern cars, record our everyday activity. The resulting Big Data offers unprecedented opportunities for tracking and analyzing behavior. This paper hypothesizes the applicability and impact of Big Data technologies in the context of psychometrics both for research and clinical applications. It first outlines the state of the art, including the severe shortcomings with respect to quality and quantity of the resulting data. It then presents a technological vision, comprised of (i) numerous data sources such as mobile devices and sensors, (ii) a central data store, and (iii) an analytical platform, employing techniques from data mining and machine learning. To further illustrate the dramatic benefits of the proposed methodologies, the paper then outlines two current projects, logging and analyzing smartphone usage. One such study attempts to thereby quantify the severity of major depression dynamically; the other investigates (mobile) Internet Addiction. Finally, the paper addresses some of the ethical issues inherent to Big Data technologies. In summary, the proposed approach is about to induce the single biggest methodological shift since the beginning of psychology or psychiatry. The resulting range of applications will dramatically shape the daily routines of researchers and medical practitioners alike. Indeed, transferring techniques from computer science to psychiatry and psychology is about to establish Psycho-Informatics, an entire research direction of its own.", "title": "" }, { "docid": "ac885eedad9c777e2980460d987c7cfb", "text": "BACKGROUND\nOne of the greatest problems for India is undernutrition among children. The country is still struggling with this problem. Malnutrition, the condition resulting from faulty nutrition, weakens the immune system and causes significant growth and cognitive delay. Growth assessment is the measurement that best defines the health and nutritional status of children, while also providing an indirect measurement of well-being for the entire population.\n\n\nMETHODS\nA cross-sectional study, in which we explored nutritional status in school-age slum children and analyzed factors associated with malnutrition with the help of a pre-designed and pre-tested questionnaire, anthropometric measurements and clinical examination from December 2010 to April 2011 in urban slums of Bareilly, Uttar-Pradesh (UP), India.\n\n\nRESULT\nThe mean height and weight of boys and girls in the study group were lower than the CDC 2000 (Centers for Disease Control and Prevention) standards in all age groups. Regarding nutritional status, prevalence of stunting and underweight was highest in age group 11 yrs to 13 yrs whereas prevalence of wasting was highest in age group 5 yrs to 7 yrs. Except for refractive errors, all illnesses are more common among girls, but this gender difference is statistically significant only for anemia and rickets.
The risk of malnutrition was significantly higher among children living in joint families, children whose mother's education was [less than or equal to] 6th standard and children with working mothers.\n\n\nCONCLUSIONS\nMost of the school-age slum children in our study had a poor nutritional status. Interventions such as skills-based nutrition education, fortification of food items, effective infection control, training of public healthcare workers and delivery of integrated programs are recommended.", "title": "" }, { "docid": "df0e7ed374f70893afae92aaaf0980d4", "text": "The interactive behavior between the attacker and the defender in a network environment is similar to information warfare where both attacker and defender may have several available strategies to achieve maximum gratification. The process of positioning security within a network environment is synonymous to a decision-making process. Security decisionmaking involves the allocation of scarce network security resources to counter or mitigate security attacks. To ensure effective security, security decision-makers must ensure that the resources are allocated and deployed in the most optimum manner. Game theory provides a quantitative framework for the analysis and modeling of such network security cases. Gametheoretic models view network security scenarios as an optimization game comprising of multiple players notably the attackers (malicious users) and the defenders (system administrators) and has become a major source of attraction in security research. These types of games are referred to as security games. Security games and their solutions are potential tools for security decision making and algorithm development as well as for predicting attacker behavior. In this paper, we first explore the fundamentals of game-theory with respect to security, and then presents a two-player zero-sum game model of the interaction between malicious users and network administrators. A description of the major components of such game is presented and a solution technique for solving such game scenario is proposed. We then describe how expected results can be analyzed to show the optimality of resulting strategies and how they may be employed by system administrators to better protect the network. Index Terms security games, strategies, attackers, defenders, stochastic games, deterministic games, game theory.", "title": "" }, { "docid": "0836e5d45582b0a0eec78234776aa419", "text": "‘Description’: ‘Microsoft will accelerate your journey to cloud computing with an! agile and responsive datacenter built from your existing technology investments.’,! ‘DisplayUrl’: ‘www.microsoft.com/en-us/server-cloud/ datacenter/virtualization.aspx’,! ‘ID’: ‘a42b0908-174e-4f25-b59c-70bdf394a9da’,! ‘Title’: ‘Microsoft | Server & Cloud | Datacenter | Virtualization ...’,! ‘Url’: ‘http://www.microsoft.com/en-us/server-cloud/datacenter/ virtualization.aspx’,! ...! Data! #Topics: 228! #Candidate Labels: ~6,000! Domains: BLOGS, BOOKS, NEWS, PUBMED! Candidate labels rated by humans (0-3) ! Published by Lau et al. (2011). 4. Scoring Candidate Labels! Candidate Label: L = {w1, w2, ..., wm}! Scoring Function: Task: The aim of the task is to associate labels with automatically generated topics.", "title": "" }, { "docid": "7c677829656403c28ea6f9bb385457dc", "text": "Making decisions regarding risk is an integral part of clinical mental health work (Flewett 2010). 
A plethora of tools and approaches (Quinsey 1998; Otto 2000; Douglas 2010a) now exists to assist with clinical judgement when undertaking this task in the context of risk of violence. The tremendous empirical advances in the field over the past two decades (Douglas 2010a) have seen a welcome increase in emphasis on systematic, structured approaches. In tandem with these developments, however, the role of ‘clinical intuition’ has been marginalised or even denigrated (Quinsey 1998). This article will assert that the intui tive mode of thought has considerable value for clinicians charged with the task of violence risk assessment, provided it is applied in a thoughtful and systematic way. It will outline practical guide­ lines for such application, derived from the work of cognitive psychologist Robin Hogarth.", "title": "" }, { "docid": "97907d29bac2dc5214dd794b740b70e9", "text": "Anomaly detection is a typical task in many fields, as well as spectrum monitoring in wireless communication. Anomaly detection task of spectrum in wireless communication is quite different from other anomaly detection tasks, mainly reflected in two aspects: (a) the variety of anomaly types makes it impossible to get the label of abnormal data. (b) the complexity and the quantity of the electromagnetic environment data increase the difficulty of manual feature extraction. Therefore, a novelty learning model is expected to deal with the task of anomaly detection of spectrum in wireless communication. In this paper, we apply the deep-structure auto-encoder neural networks to detect the anomalies of spectrum, and the time–frequency diagram is acted as the feature of the learning model. Meanwhile, a threshold is used to distinguish the anomalies from the normal data. Finally, we evaluate the performance of our models with different number of hidden layers by our experiments. The results of numerical experiments demonstrate that a model with a deeper architecture achieves relatively better performance in our spectrum anomaly detection task.", "title": "" }, { "docid": "5ff7a82ec704c8fb5c1aa975aec0507c", "text": "With the increase of an ageing population and chronic diseases, society becomes more health conscious and patients become “health consumers” looking for better health management. People’s perception is shifting towards patient-centered, rather than the classical, hospital–centered health services which has been propelling the evolution of telemedicine research from the classic e-Health to m-Health and now is to ubiquitous healthcare (u-Health). It is expected that mobile & ubiquitous Telemedicine, integrated with Wireless Body Area Network (WBAN), have a great potential in fostering the provision of next-generation u-Health. Despite the recent efforts and achievements, current u-Health proposed solutions still suffer from shortcomings hampering their adoption today. This paper presents a comprehensive review of up-to-date requirements in hardware, communication, and computing for next-generation u-Health systems. It compares new technological and technical trends and discusses how they address expected u-Health requirements. A thorough survey on various worldwide recent system implementations is presented in an attempt to identify shortcomings in state-of-the art solutions. In particular, challenges in WBAN and ubiquitous computing were emphasized. 
The purpose of this survey is not only to help beginners with a holistic approach toward understanding u-Health systems but also present to researchers new technological trends and design challenges they have to cope with, while designing such systems.", "title": "" }, { "docid": "be7ad6ff14910b8198b1e94003418989", "text": "An important ability of a robot that interacts with the environment and manipulates objects is to deal with the uncertainty in sensory data. Sensory information is necessary to, for example, perform online assessment of grasp stability. We present methods to assess grasp stability based on haptic data and machine-learning methods, including AdaBoost, support vector machines (SVMs), and hidden Markov models (HMMs). In particular, we study the effect of different sensory streams to grasp stability. This includes object information such as shape; grasp information such as approach vector; tactile measurements from fingertips; and joint configuration of the hand. Sensory knowledge affects the success of the grasping process both in the planning stage (before a grasp is executed) and during the execution of the grasp (closed-loop online control). In this paper, we study both of these aspects. We propose a probabilistic learning framework to assess grasp stability and demonstrate that knowledge about grasp stability can be inferred using information from tactile sensors. Experiments on both simulated and real data are shown. The results indicate that the idea to exploit the learning approach is applicable in realistic scenarios, which opens a number of interesting venues for the future research.", "title": "" }, { "docid": "79623049d961677960ed769d1469fb03", "text": "Understanding how people communicate during disasters is important for creating systems to support this communication. Twitter is commonly used to broadcast information and to organize support during times of need. During the 2010 Gulf Oil Spill, Twitter was utilized for spreading information, sharing firsthand observations, and to voice concern about the situation. Through building a series of classifiers to detect emotion and sentiment, the distribution of emotion during the Gulf Oil Spill can be analyzed and its propagation compared against released information and corresponding events. We contribute a series of emotion classifiers and a prototype collaborative visualization of the results and discuss their implications.", "title": "" }, { "docid": "2c8061cf1c9b6e157bdebf9126b2f15c", "text": "Recently, the concept of olfaction-enhanced multimedia applications has gained traction as a step toward further enhancing user quality of experience. The next generation of rich media services will be immersive and multisensory, with olfaction playing a key role. This survey reviews current olfactory-related research from a number of perspectives. It introduces and explains relevant olfactory psychophysical terminology, knowledge of which is necessary for working with olfaction as a media component. In addition, it reviews and highlights the use of, and potential for, olfaction across a number of application domains, namely health, tourism, education, and training. A taxonomy of research and development of olfactory displays is provided in terms of display type, scent generation mechanism, application area, and strengths/weaknesses. 
State of the art research works involving olfaction are discussed and associated research challenges are proposed.", "title": "" }, { "docid": "63d19f75bc0baee93404488a1d307a32", "text": "Mitochondria can unfold importing precursor proteins by unraveling them from their N-termini. However, how this unraveling is induced is not known. Two candidates for the unfolding activity are the electrical potential across the inner mitochondrial membrane and mitochondrial Hsp70 in the matrix. Here, we propose that many precursors are unfolded by the electrical potential acting directly on positively charged amino acid side chains in the targeting sequences. Only precursor proteins with targeting sequences that are long enough to reach the matrix at the initial interaction with the import machinery are unfolded by mitochondrial Hsp70, and this unfolding occurs even in the absence of a membrane potential.", "title": "" }, { "docid": "4b3c69e446dcf1d237db63eb4f106dd7", "text": "Creating linguistic annotations requires more than just a reliable annotation scheme. Annotation can be a complex endeavour potentially involving many people, stages, and tools. This chapter outlines the process of creating end-toend linguistic annotations, identifying specific tasks that researchers often perform. Because tool support is so central to achieving high quality, reusable annotations with low cost, the focus is on identifying capabilities that are necessary or useful for annotation tools, as well as common problems these tools present that reduce their utility. Although examples of specific tools are provided in many cases, this chapter concentrates more on abstract capabilities and problems because new tools appear continuously, while old tools disappear into disuse or disrepair. The two core capabilities tools must have are support for the chosen annotation scheme and the ability to work on the language under study. Additional capabilities are organized into three categories: those that are widely provided; those that often useful but found in only a few tools; and those that have as yet little or no available tool support. 1 Annotation: More than just a scheme Creating manually annotated linguistic corpora requires more than just a reliable annotation scheme. A reliable scheme, of course, is a central ingredient to successful annotation; but even the most carefully designed scheme will not answer a number of practical questions about how to actually create the annotations, progressing from raw linguistic data to annotated linguistic artifacts that can be used to answer interesting questions or do interesting things. Annotation, especially high-quality annotation of large language datasets, can be a complex process potentially involving many people, stages, and tools, and the scheme only specifies the conceptual content of the annotation. By way of example, the following questions are relevant to a text annotation project and are not answered by a scheme:  How should linguistic artifacts be prepared? Will the originals be annotated directly, or will their textual content be extracted into separate files for annotation? In the latter case, what layout or formatting will be kept (lines, paragraphs page breaks, section headings, highlighted text)? What file format will be used? How will typographical errors be handled? Will typos be ignored, changed in the original, changed in extracted content, or encoded as an additional annotation? 
Who will be allowed to make corrections: the annotators themselves, adjudicators, or perhaps only the project manager?  How will annotators be provided artifacts to annotate? How will the order of annotation be specified (if at all), and how will this order be enforced? How will the project manager ensure that each document is annotated the appropriate number of times (e.g., by two different people for double annotation).  What inter-annotator agreement measures (IAAs) will be measured, and when? Will IAAs be measured continuously, on batches, or on other subsets of the corpus? How will their measurement at the right time be enforced? Will IAAs be used to track annotator training? If so, what level of IAA will be considered to indicate that training has succeeded? These questions are only a small selection of those that arise during the practical process of conducting annotation. The first goal of this chapter is to give an overview of the process of annotation from start to finish, pointing out these sorts of questions and subtasks for each stage. We will start with a known conceptual framework for the annotation process, the MATTER framework (Pustejovsky & Stubbs, 2013) and expand upon it. Our expanded framework is not guaranteed to be complete, but it will give a reader a very strong flavor of the kind of issues that arise so that they can start to anticipate them in the design of their own annotation project. The second goal is to explore the capabilities required by annotation tools. Tool support is central to effecting high quality, reusable annotations with low cost. The focus will be on identifying capabilities that are necessary or useful for annotation tools. Again, this list will not be exhaustive but it will be fairly representative, as the majority of it was generated by surveying a number of annotation experts about their opinions of available tools. Also listed are common problems that reduce tool utility (gathered during the same survey). Although specific examples of tools will be provided in many cases, the focus will be on more abstract capabilities and problems because new tools appear all the time while old tools disappear into disuse or disrepair. Before beginning, it is well to first introduce a few terms. By linguistic artifact, or just artifact, we mean the object to which annotations are being applied. These could be newspaper articles, web pages, novels, poems, TV 2 Mark A. Finlayson and Tomaž Erjavec shows, radio broadcasts, images, movies, or something else that involves language being captured in a semipermanent form. When we use the term document we will generally mean textual linguistic artifacts such as books, articles, transcripts, and the like. By annotation scheme, or just scheme, we follow the terminology as given in the early chapters of this volume, where a scheme comprises a linguistic theory, a derived model of a phenomenon of interest, a specification that defines the actual physical format of the annotation, and the guidelines that explain to an annotator how to apply the specification to linguistic artifacts. (citation to Chapter III by Ide et al.) 
By computing platform, or just platform, we mean any computational system on which an annotation tool can be run; classically this has meant personal computers, either desktops or laptops, but recently the range of potential computing platforms has expanded dramatically, to include on the one hand things like web browsers and mobile devices, and, on the other, internet-connected annotation servers and service oriented architectures. Choice of computing platform is driven by many things, including the identity of the annotators and their level of sophistication. We will speak of the annotation process or just process within an annotation project. By process, we mean any procedure or activity, at any level of granularity, involved in the production of annotation. This potentially encompasses everything from generating the initial idea, applying the annotation to the artifacts, to archiving the annotated documents for distribution. Although traditionally not considered part of annotation per se, we might also include here writing academic papers about the results of the annotation, as these activities also sometimes require annotation-focused tool support. We will also speak of annotation tools. By tool we mean any piece of computer software that runs on a computing platform that can be used to implement or carry out a process in the annotation project. Classically conceived annotation tools include software such as the Alembic workbench, Callisto, or brat (Day et al., 1997; Day, McHenry, Kozierok, & Riek, 2004; Stenetorp et al., 2012), but tools can also include software like Microsoft Word or Excel, Apache Tomcat (to run web servers), Subversion or Git (for document revision control), or mobile applications (apps). Tools usually have user interfaces (UIs), but they are not always graphical, fully functional, or even all that helpful. There is a useful distinction between a tool and a component (also called an NLP component, or an NLP algorithm; in UIMA (Apache, 2014) called an annotator), which are pieces of software that are intended to be integrated as libraries into software and can often be strung together in annotation pipelines for applying automatic annotations to linguistic artifacts. Software like tokenizers, part of speech taggers, parsers (Manning et al., 2014), multiword expression detectors (Kulkarni & Finlayson, 2011) or coreference resolvers (Pradhan et al., 2011) are all components. Sometimes the distinction between a tool and a component is not especially clear cut, but it is a useful one nonetheless. The main reason a chapter like this one is needed is that there is no one tool that does everything. There are multiple stages and tasks within every annotation project, typically requiring some degree of customization, and no tool does it all. That is why one needs multiple tools in annotation, and why a detailed consideration of the tool capabilities and problems is needed. 2 Overview of the Annotation Process The first step in an annotation project is, naturally, defining the scheme, but many other tasks must be executed to go from an annotation scheme to an actual set of cleanly annotated files useful for other tasks. 2.1 MATTER & MAMA A good starting place for organizing our conception of the various stages of the process of annotation is the MATTER cycle, proposed by Pustejovsky & Stubbs (2013). 
This framework outlines six major stages to annotation, corresponding to each letter in the word, defined as follows: M = Model: In this stage, the first of the process, the project leaders set up the conceptual framework for the project. Subtasks may include:  Search background work to understand existing theories of the phenomena  Create or adopt an abstract model of the phenomenon  Define an annotation scheme based on the model Overview of Annotation Creation: Processes & Tools 3  Search libraries, the web, and online repositories for potential linguistic artifacts  Create corpus artifacts if appropriate artifacts do not exist  Measure overall characteristics of artifacts to ground estimates of representativeness and balance  Collect the artifacts on which the annotation will be performed  Track artifact licenses  Measure various statistics of the collected corpus  Choose an annotation specification language  Build an annotation specification that disti", "title": "" } ]
scidocsrr
c7b714ec3bae25a93ecd5e876af8e682
Learning Visual Reasoning Without Strong Priors
[ { "docid": "f32e8f005d277652fe691216e96e7fd8", "text": "PixelCNN achieves state-of-the-art results in density estimation for natural images. Although training is fast, inference is costly, requiring one network evaluation per pixel; O(N) for N pixels. This can be sped up by caching activations, but still involves generating each pixel sequentially. In this work, we propose a parallelized PixelCNN that allows more efficient inference by modeling certain pixel groups as conditionally independent. Our new PixelCNN model achieves competitive density estimation and orders of magnitude speedup O(log N) sampling instead of O(N) enabling the practical generation of 512× 512 images. We evaluate the model on class-conditional image generation, text-toimage synthesis, and action-conditional video generation, showing that our model achieves the best results among non-pixel-autoregressive density models that allow efficient sampling.", "title": "" } ]
[ { "docid": "0353dbfd30bbfe3f47d471d6ead52010", "text": "In traditional 3D model reconstruction, the texture information is captured in a certain dynamic range, which is usually insufficient for rendering under new environmental light. This paper proposes a novel approach for multi-view stereo (MVS) reconstruction of models with high dynamic range (HDR) texture. In the proposed approach, multi-view images are firstly taken with different exposure times simultaneously. Corresponding pixels in adjacent viewpoints are then extracted using a multi-projection method, to robustly recover the response function of the camera. With the response function, pixel values in the differently exposed images can be converted to the desired relative radiance values. Subsequently, geometry reconstruction and HDR texture recovering can be achieved using these values. Experimental results demonstrate that our method can recover the HDR texture for the 3D model efficiently while keep high geometry precision. With our reconstructed HDR texture model, high-quality scene re-lighting is exemplarily exhibited.", "title": "" }, { "docid": "2d87e26389b9d4ebf896bd9cbd281e69", "text": "Finger-vein biometrics has been extensively investigated for personal authentication. One of the open issues in finger-vein verification is the lack of robustness against image-quality degradation. Spurious and missing features in poor-quality images may degrade the system’s performance. Despite recent advances in finger-vein quality assessment, current solutions depend on domain knowledge. In this paper, we propose a deep neural network (DNN) for representation learning to predict image quality using very limited knowledge. Driven by the primary target of biometric quality assessment, i.e., verification error minimization, we assume that low-quality images are falsely rejected in a verification system. Based on this assumption, the low- and high-quality images are labeled automatically. We then train a DNN on the resulting data set to predict the image quality. To further improve the DNN’s robustness, the finger-vein image is divided into various patches, on which a patch-based DNN is trained. The deepest layers associated with the patches form together a complementary and an over-complete representation. Subsequently, the quality of each patch from a testing image is estimated and the quality scores from the image patches are conjointly input to probabilistic support vector machines (P-SVM) to boost quality-assessment performance. To the best of our knowledge, this is the first proposed work of deep learning-based quality assessment, not only for finger-vein biometrics, but also for other biometrics in general. The experimental results on two public finger-vein databases show that the proposed scheme accurately identifies high- and low-quality images and significantly outperforms existing approaches in terms of the impact on equal error-rate decrease.", "title": "" }, { "docid": "14857144b52dbfb661d6ef4cd2c59b64", "text": "The candidate confirms that the work submitted is his/her own and that appropriate credit has been given where reference has been made to the work of others. i ACKNOWLEDGMENT I am truly indebted and thankful to my scholarship sponsor ―National Information Technology Development Agency (NITDA), Nigeria‖ for giving me the rare privilege to study at the University of Leeds. I am sincerely and heartily grateful to my supervisor Dr. 
Des McLernon for his valuable support, patience and guidance throughout the course of this dissertation. I am sure it would not have been possible without his help. I would like to express my deep gratitude to Romero-Zurita Nabil for his enthusiastic encouragement, useful critique, recommendation and providing me with great information resources. I also acknowledge my colleague Frempong Kwadwo for his invaluable suggestions and discussion. Finally, I would like to appreciate my parents for their support and encouragement throughout my study at Leeds. Above all, special thanks to God Almighty for the gift of life. ii DEDICATION This thesis is dedicated to family especially; to my parents for inculcating the importance of hardwork and higher education to Omobolanle for being a caring and loving sister. to Abimbola for believing in me.", "title": "" }, { "docid": "6224f4f3541e9cd340498e92a380ad3f", "text": "A personal story: From philosophy to software.", "title": "" }, { "docid": "209b304009db4a04400da178d19fe63e", "text": "Mecanum wheels give vehicles and robots autonomous omni-directional capabilities, while regular wheels don’t. The omni-directionality that such wheels provide makes the vehicle extremely maneuverable, which could be very helpful in different indoor and outdoor applications. However, current Mecanum wheel designs can only operate on flat hard surfaces, and perform very poorly on rough terrains. This paper presents two modified Mecanum wheel designs targeted for complex rough terrains and discusses their advantages and disadvantages in comparison to regular Mecanum wheels. The wheels proposed here are particularly advantageous for overcoming obstacles up to 75% of the overall wheel diameter in lateral motion which significantly facilitates the lateral motion of vehicles on hard rough surfaces and soft soils such as sand which cannot be achieved using other types of wheels. The paper also presents control aspects that need to be considered when controlling autonomous vehicles/robots using the proposed wheels.", "title": "" }, { "docid": "d43dc521d3f0f17ccd4840d6081dcbfe", "text": "In Vehicular Ad hoc NETworks (VANETs), authentication is a crucial security service for both inter-vehicle and vehicle-roadside communications. On the other hand, vehicles have to be protected from the misuse of their private data and the attacks on their privacy, as well as to be capable of being investigated for accidents or liabilities from non-repudiation. In this paper, we investigate the authentication issues with privacy preservation and non-repudiation in VANETs. We propose a novel framework with preservation and repudiation (ACPN) for VANETs. In ACPN, we introduce the public-key cryptography (PKC) to the pseudonym generation, which ensures legitimate third parties to achieve the non-repudiation of vehicles by obtaining vehicles' real IDs. The self-generated PKCbased pseudonyms are also used as identifiers instead of vehicle IDs for the privacy-preserving authentication, while the update of the pseudonyms depends on vehicular demands. The existing ID-based signature (IBS) scheme and the ID-based online/offline signature (IBOOS) scheme are used, for the authentication between the road side units (RSUs) and vehicles, and the authentication among vehicles, respectively. Authentication, privacy preservation, non-repudiation and other objectives of ACPN have been analyzed for VANETs. Typical performance evaluation has been conducted using efficient IBS and IBOOS schemes. 
We show that the proposed ACPN is feasible and adequate to be used efficiently in the VANET environment.", "title": "" }, { "docid": "8f9e5d1288ca365e7b5350b10e86a54b", "text": "While developing a program to render Voronoi diagrams, I accidentally produced a strange and surprising image. The unexpected behaviour turned out to be caused by a combination of reasons from signal processing and computer architecture. I describe the process that led to the pattern, explain its structure, and display many of the wonderful designs that can be produced from this and related techniques.", "title": "" }, { "docid": "e9f963fd5215c4d9531ee626f58d2812", "text": "To a generation of philosophers, modal logic was a field with ties to some of the most active areas of research and Hughes and Cresswell was the authoritative text and reference to that field. 'Hughes and Cresswell,' of course, referred to the authors' An Introduction to Modal Logic, published by Methuen in 1968 and reprinted as a University Paperback in 1972. Supposing no prior logic, An Introduction presented, in remarkably readable prose, axiomatic and semantic treatments of the propositional modal systems T, S4 and S5 (including completeness and decidability results), similar treatments of a few basic predicate systems, and an extensive survey of other results. Although this work continues to be widely used and cited, it has become somewhat dated. There are now more streamlined methods of proving completeness and decidability than the normal forms and semantic diagrams it employed. There has been a realization that the modal 'frame' is a more fundamental semantic notion than 'model' and a concomitant investigation of the phenomenon of modal 'incompleteness' (i.e., of systems not characterised by classes of frames). And there has been an explosion of further investigations and applications beyond those contained in the original survey. Several newer texts have appeared, but none combines the lucidity and the encyclopedic perspective of Hughes and Cresswell. Brian Chellas's excellent Modal Logic: An Introduction, for example, treats only the propositional systems. Hughes and Cresswell themselves tried to fill the void with A Companion to Modal Logic (Methuen & Co., 1984). This was a self-contained work in the same style, but at a more advanced level, than the previous. To use both an 'introduction' and 'companion' as texts is somewhat awkward, however, and the Companion itself has become dated. Thus, the New Introduction fills a great need. The relevant parts of An Introduction and A Companion have been rewritten and integrated with a wealth of new material into a worthy successor to the original Hughes and Cresswell.", "title": "" }, { "docid": "f9ffe3af3a2f604efb6bde83f519f55c", "text": "BIA is easy, non-invasive, relatively inexpensive and can be performed in almost any subject because it is portable. Part II of these ESPEN guidelines reports results for fat-free mass (FFM), body fat (BF), body cell mass (BCM), total body water (TBW), extracellular water (ECW) and intracellular water (ICW) from various studies in healthy and ill subjects. The data suggests that BIA works well in healthy subjects and in patients with stable water and electrolytes balance with a validated BIA equation that is appropriate with regard to age, sex and race. 
Clinical use of BIA in subjects at extremes of BMI ranges or with abnormal hydration cannot be recommended for routine assessment of patients until further validation has proven for BIA algorithm to be accurate in such conditions. Multi-frequency- and segmental-BIA may have advantages over single-frequency BIA in these conditions, but further validation is necessary. Longitudinal follow-up of body composition by BIA is possible in subjects with BMI 16-34 kg/m(2) without abnormal hydration, but must be interpreted with caution. Further validation of BIA is necessary to understand the mechanisms for the changes observed in acute illness, altered fat/lean mass ratios, extreme heights and body shape abnormalities.", "title": "" }, { "docid": "b7177265a8e82e4357fdb8eeb3cbab12", "text": "Various hand-crafted features and metric learning methods prevail in the field of person re-identification. Compared to these methods, this paper proposes a more general way that can learn a similarity metric from image pixels directly. By using a \"siamese\" deep neural network, the proposed method can jointly learn the color feature, texture feature and metric in a unified framework. The network has a symmetry structure with two sub-networks which are connected by a cosine layer. Each sub network includes two convolutional layers and a full connected layer. To deal with the big variations of person images, binomial deviance is used to evaluate the cost between similarities and labels, which is proved to be robust to outliers. Experiments on VIPeR illustrate the superior performance of our method and a cross database experiment also shows its good generalization.", "title": "" }, { "docid": "2cd2a85598c0c10176a34c0bd768e533", "text": "BACKGROUND\nApart from skills, and knowledge, self-efficacy is an important factor in the students' preparation for clinical work. The Physiotherapist Self-Efficacy (PSE) questionnaire was developed to measure physical therapy (TP) students' self-efficacy in the cardiorespiratory, musculoskeletal, and neurological clinical areas. The aim of this study was to establish the measurement properties of the Dutch PSE questionnaire, and to explore whether self-efficacy beliefs in students are clinical area specific.\n\n\nMETHODS\nMethodological quality of the PSE was studied using COSMIN guidelines. Item analysis, structural validity, and internal consistency of the PSE were determined in 207 students. Test-retest reliability was established in another sample of 60 students completing the PSE twice. Responsiveness of the scales was determined in 80 students completing the PSE at the start and the end of the second year. Hypothesis testing was used to determine construct validity of the PSE.\n\n\nRESULTS\nExploratory factor analysis resulted in three meaningful components explaining similar proportions of variance (25%, 21%, and 20%), reflecting the three clinical areas. Internal consistency of each of the three subscales was excellent (Cronbach's alpha > .90). Intra Class Correlation Coefficient was good (.80). Hypothesis testing confirmed construct validity of the PSE.\n\n\nCONCLUSION\nThe PSE shows excellent measurement properties. The component structure of the PSE suggests that self-efficacy about physiotherapy in PT students is not generic, but specific for a clinical area. As self-efficacy is considered a predictor of performance in clinical settings, enhancing self-efficacy is an explicit goal of educational interventions. 
Further research is needed to determine if the scale is specific enough to assess the effect of educational interventions on student self-efficacy.", "title": "" }, { "docid": "c2d3db65ce89b7df228880b72f620a4c", "text": "This paper presents a supply system design methodology for high-speed interface systems used in the design of a 1600 Mbps DDR3 interface in wirebond package. The high data rate and challenging system environment requires a system-level approach of supply noise mitigation that demonstrates the full spectrum of Power Integrity considerations typical in the design of high-speed interfaces. We will first discuss supply noise considerations during the architectural design phase used to define a supply mitigation strategy for the interface design. Next, we will discuss the physical implementation of the supply network component on the chip, the package, and the PCB using a co-design approach. Finally, we will present measurement data demonstrating the achieved supply quality and correlations to simulation results based on supply systems models developed during the design phase of the interface.", "title": "" }, { "docid": "1ae2b50f5b4faaf6c343d02b90f93250", "text": "A binaural beat can be produced by presenting two tones of a differing frequency, one to each ear. Such auditory stimulation has been suggested to influence behaviour and cognition via the process of cortical entrainment. However, research so far has only shown the frequency following responses in the traditional EEG frequency ranges of delta, theta and gamma. Hence a primary aim of this research was to ascertain whether it would be possible to produce clear changes in the EEG in either the alpha or beta frequency ranges. Such changes, if possible, would have a number of important implications as well as potential applications. A secondary goal was to track any observable changes in the EEG throughout the entrainment epoch to gain some insight into the nature of the entrainment effects on any changes in an effort to identify more effective entrainment regimes. Twenty two healthy participants were recruited and randomly allocated to one of two groups, each of which was exposed to a distinct binaural beat frequency for ten 1-minute epochs. The first group listened to an alpha binaural beat of 10 Hz and the second to a beta binaural beat of 20 Hz. EEG was recorded from the left and right temporal regions during pre-exposure baselines, stimulus exposure epochs and post-exposure baselines. Analysis of changes in broad-band and narrow-band amplitudes, and frequency showed no effect of binaural beat frequency eliciting a frequency following effect in the EEG. Possible mediating factors are discussed and a number of recommendations are made regarding future studies, exploring entrainment effects from a binaural beat presentation.", "title": "" }, { "docid": "6a2fa5998bf51eb40c1fd2d8f3dd8277", "text": "In this paper, we propose a new descriptor for texture classification that is robust to image blurring. The descriptor utilizes phase information computed locally in a window for every image position. The phases of the four low-frequency coefficients are decorrelated and uniformly quantized in an eight-dimensional space. A histogram of the resulting code words is created and used as a feature in texture classification. Ideally, the low-frequency phase components are shown to be invariant to centrally symmetric blur. 
Although this ideal invariance is not completely achieved due to the finite window size, the method is still highly insensitive to blur. Because only phase information is used, the method is also invariant to uniform illumination changes. According to our experiments, the classification accuracy of blurred texture images is much higher with the new method than with the well-known LBP or Gabor filter bank methods. Interestingly, it is also slightly better for textures that are not blurred.", "title": "" }, { "docid": "0817c0162f8b1b4f67b7e1ec2ea3e6a4", "text": "Depression is one of the most common psychiatric disorders worldwide, with over 350 million people affected. Current methods to screen for and assess depression depend almost entirely on clinical interviews and self-report scales. While useful, such measures lack objective, systematic, and efficient ways of incorporating behavioral observations that are strong indicators of depression presence and severity. Using dynamics of facial and head movement and vocalization, we trained classifiers to detect three levels of depression severity. Participants were a community sample diagnosed with major depressive disorder. They were recorded in clinical interviews (Hamilton Rating Scale for Depression, HRSD) at seven-week intervals over a period of 21 weeks. At each interview, they were scored by the HRSD as moderately to severely depressed, mildly depressed, or remitted. Logistic regression classifiers using leave-one-participant-out validation were compared for facial movement, head movement, and vocal prosody individually and in combination. Accuracy of depression severity measurement from facial movement dynamics was higher than that for head movement dynamics, and each was substantially higher than that for vocal prosody. Accuracy using all three modalities combined only marginally exceeded that of face and head combined. These findings suggest that automatic detection of depression severity from behavioral indicators in patients is feasible and that multimodal measures afford the most powerful detection.", "title": "" }, { "docid": "42d79800699b372489ad6c95ac91b21c", "text": "Being able to reason in an environment with a large number of discrete actions is essential to bringing reinforcement learning to a larger class of problems. Recommender systems, industrial plants and language models are only some of the many real-world tasks involving large numbers of discrete actions for which current methods can be difficult or even impossible to apply. An ability to generalize over the set of actions as well as sub-linear complexity relative to the size of the set are both necessary to handle such tasks. Current approaches are not able to provide both of these, which motivates the work in this paper. Our proposed approach leverages prior information about the actions to embed them in a continuous space upon which it can generalize. Additionally, approximate nearest-neighbor methods allow for logarithmic-time lookup complexity relative to the number of actions, which is necessary for time-wise tractable training. This combined approach allows reinforcement learning methods to be applied to large-scale learning problems previously intractable with current methods. 
We demonstrate our algorithm’s abilities on a series of tasks having up to one million actions.", "title": "" }, { "docid": "52c400a9f8d6dbad24bbdd13ad3fb8fd", "text": "TRAIL has been shown to induce apoptosis in cancer cells, but in some cases, certain cancer cells are resistant to this ligand. In this study, we explored the ability of representative HSP90 (heat shock protein 90) inhibitor NVP-AUY922 to overcome TRAIL resistance by increasing apoptosis in colorectal cancer (CRC) cells. The combination of TRAIL and NVP-AUY922 induced synergistic cytotoxicity and apoptosis, which was mediated through an increase in caspase activation. The treatment of NVP-AUY922 dephosphorylated JAK2 and STAT3 and decreased Mcl-1, which resulted in facilitating cytochrome c release. NVP-AUY922-mediated inhibition of JAK2/STAT3 signaling and down-regulation of their target gene, Mcl-1, occurred in a dose and time-dependent manner. Knock down of Mcl-1, STAT3 inhibitor or JAK2 inhibitor synergistically enhanced TRAIL-induced apoptosis. Taken together, our results suggest the involvement of the JAK2-STAT3-Mcl-1 signal transduction pathway in response to NVP-AUY922 treatment, which may play a key role in NVP-AUY922-mediated sensitization to TRAIL. By contrast, the effect of the combination treatments in non-transformed colon cells was minimal. We provide a clinical rationale that combining HSP90 inhibitor with TRAIL enhances therapeutic efficacy without increasing normal tissue toxicity in CRC patients.", "title": "" }, { "docid": "7e7c1a6f66fa5ac22da08467a479ad48", "text": "This paper presents a 60 GHz communication link system and measurements using a 64-element phased array transmitter. The transmit array includes high-efficiency on-wafer antennas, 3-bits amplitude and 5-bits phase control on each element, a measured saturated EIRP of 38 dBm at 60 GHz and scans to +/- 55° in the E- and H-planes with near-ideal patterns and low sidelobes. The phased-array transmitter is used in a 60 GHz communication link with an external up-conversion mixers and a Keysight 802.11ad waveform generator. A standard gain horn with a gain of 20 dB is used as the receiver, coupled to a Keysight high-speed digital demodulation scope. The communication link achieves a 16-QAM modulation with 3.85 Gbps at 4 m (full 802.11ad channel) and a QPSK modulation with 1.54 GBps over 100 m while scanning to +/-45° in both planes.", "title": "" }, { "docid": "788e91cda7872b065cac8439b89db00f", "text": "Among the pioneers of psychology, Lev Vygotsky (1896-1934) may be the best known of those who are least understood. This is not just a problem of historical scholarship: The misunderstanding of Vygotsky started with his own students and collaborators—during his lifetime—and continued after his death. It is, in other words, integrated into the literature. And that literature, as a result, appears fractured and inconsistent. Indeed, the largest and the best intellectual biography of Vygotsky is titled Understanding Vygotsky: A Quest for Synthesis (van der Veer & Valsiner, 1991). Yet even this excellent book is far from providing a full and complete story. The discovery of the real Vygotsky is still to come.", "title": "" }, { "docid": "53d48fc9cbc1c1371a7c2c22852fb880", "text": "Advances in medicine have changed how patients experience the end of life. With longer life spans, there has also been an increase in years lived with disability. 
The clustering of illnesses in the last years of life is particularly pronounced in patients with cardiovascular disease. At the end of life, patients with cardiovascular disease are more symptomatic, less likely to die at home, and less likely to receive high-quality palliative care. Social determinants have created widening disparities in end-of-life care. The increasing complexity and duration of care have resulted in an epidemic of caregiver burden. Modern medical care has also resulted in new ethical challenges, for example, those related to deactivation of cardiac devices, such as pacemakers, defibrillators, and mechanical circulatory support. Recommendations to improve end-of-life care for patients with cardiovascular disease include optimizing metrics to assess quality, ameliorating disparities, enhancing education and research in palliative care, overcoming disparities, and innovating palliative care delivery and reimbursement.", "title": "" } ]
scidocsrr
16cdaf61ddcce90118e63eb1f5a95ee4
An Algorithm for Learning Shape and Appearance Models without Annotations
[ { "docid": "6e4bb5d16c72c8dc706f934fa3558adb", "text": "This paper examine the Euler-Lagrange equations for the solution of the large deformation diffeomorphic metric mapping problem studied in Dupuis et al. (1998) and Trouvé (1995) in which two images I 0, I 1 are given and connected via the diffeomorphic change of coordinates I 0○ϕ−1=I 1 where ϕ=Φ1 is the end point at t= 1 of curve Φ t , t∈[0, 1] satisfying .Φ t =v t (Φ t ), t∈ [0,1] with Φ0=id. The variational problem takes the form $$\\mathop {\\arg {\\text{m}}in}\\limits_{\\upsilon :\\dot \\phi _t = \\upsilon _t \\left( {\\dot \\phi } \\right)} \\left( {\\int_0^1 {\\left\\| {\\upsilon _t } \\right\\|} ^2 {\\text{d}}t + \\left\\| {I_0 \\circ \\phi _1^{ - 1} - I_1 } \\right\\|_{L^2 }^2 } \\right),$$ where ‖v t‖ V is an appropriate Sobolev norm on the velocity field v t(·), and the second term enforces matching of the images with ‖·‖L 2 representing the squared-error norm. In this paper we derive the Euler-Lagrange equations characterizing the minimizing vector fields v t, t∈[0, 1] assuming sufficient smoothness of the norm to guarantee existence of solutions in the space of diffeomorphisms. We describe the implementation of the Euler equations using semi-lagrangian method of computing particle flows and show the solutions for various examples. As well, we compute the metric distance on several anatomical configurations as measured by ∫0 1‖v t‖ V dt on the geodesic shortest paths.", "title": "" } ]
[ { "docid": "f29b2fed177eeb826920cc9b5a8dc653", "text": "Analysis of blockchain data is useful for both scienti c research and commercial applications. We present BlockSci, an open-source software platform for blockchain analysis. BlockSci is versatile in its support for di erent blockchains and analysis tasks. It incorporates an in-memory, analytical (rather than transactional) database, making it several hundred times faster than existing tools. We describe BlockSci’s design and present four analyses that illustrate its capabilities. This is a working paper that accompanies the rst public release of BlockSci, available at github.com/citp/BlockSci. We seek input from the community to further develop the software and explore other potential applications.", "title": "" }, { "docid": "16f93322871e61392b286a7ddba1034f", "text": "1 Objective: Although it is well-established that the ability to manage stress is a prerequisite of 2 sporting excellence, the construct of psychological resilience has yet to be systematically 3 examined in athletic performers. The study reported here sought to explore and explain the 4 relationship between psychological resilience and optimal sport performance. 5 Design and Method: Twelve Olympic champions (8 men and 4 women) from a range of sports 6 were interviewed regarding their experiences of withstanding pressure during their sporting 7 careers. A grounded theory approach was employed throughout the data collection and 8 analysis, and interview transcripts were analyzed using open, axial and selective coding. 9 Methodological rigor was established by incorporating various verification strategies into the 10 research process, and the resultant grounded theory was also judged using the quality criteria of 11 fit, work, relevance, and modifiability. 12 Results and Conclusions: Results indicate that numerous psychological factors (relating to a 13 positive personality, motivation, confidence, focus, and perceived social support) protect the 14 world’s best athletes from the potential negative effect of stressors by influencing their 15 challenge appraisal and meta-cognitions. These processes promote facilitative responses that 16 precede optimal sport performance. The emergent theory provides sport psychologists, coaches 17 and national sport organizations with an understanding of the role of resilience in athletes’ lives 18 and the attainment of optimal sport performance. 19", "title": "" }, { "docid": "848c8ffaa9d58430fbdebd0e9694d531", "text": "This paper presents an application for studying the death records of WW2 casualties from a prosopograhical perspective, provided by the various local military cemeteries where the dead were buried. The idea is to provide the end user with a global visual map view on the places in which the casualties were buried as well as with a local historical perspective on what happened to the casualties that lay within a particular cemetery of a village or town. Plenty of data exists about the Second World War (WW2), but the data is typically archived in unconnected, isolated silos in different organizations. This makes it difficult to track down, visualize, and study information that is contained within multiple distinct datasets. In our work, this problem is solved using aggregated Linked Open Data provided by the WarSampo Data Service and SPARQL endpoint.", "title": "" }, { "docid": "0d774f86bb45f2e3e04814dd84cb4490", "text": "Crop yield estimation is an important task in apple orchard management. 
The current manual sampling-based yield estimation is time-consuming, labor-intensive and inaccurate. To deal with this challenge, we develop and deploy a computer vision system for automated, rapid and accurate yield estimation. The system uses a two-camera stereo rig for image acquisition. It works at nighttime with controlled artificial lighting to reduce the variance of natural illumination. An autonomous orchard vehicle is used as the support platform for automated data collection. The system scans the both sides of each tree row in orchards. A computer vision algorithm is developed to detect and register apples from acquired sequential images, and then generate apple counts as crop yield estimation. We deployed the yield estimation system in Washington state in September, 2011. The results show that the developed system works well with both red and green apples in the tall-spindle planting system. The errors of crop yield estimation are -3.2% for a red apple block with about 480 trees, and 1.2% for a green apple block with about 670 trees.", "title": "" }, { "docid": "4a87e61106125ffdd49c42517ce78b87", "text": "Due to network effects and switching costs, platform providers often become entrenched. To dislodge them, entrants generally must offer revolutionary products. We explore a second path to platform leadership change that does not rely on Schumpeterian creative destruction: platform envelopment. By leveraging common components and shared user relationships, one platform provider can move into another’s market, combining its own functionality with the target’s in a multi-platform bundle. Dominant firms otherwise sheltered from entry by standalone rivals may be vulnerable to an adjacent platform provider’s envelopment attack. We analyze conditions under which envelopment strategies are likely to succeed.", "title": "" }, { "docid": "d1796cd063e0d1ea03462d2002c4dae5", "text": "This paper describes the experimental characterization of MOS bipolar pseudo-resistors for a general purpose technology. Very-high resistance values can be obtained in small footprint layouts, allowing the development of high-pass filters with RC constants over 1 second. The pseudo-resistor presents two different behavior regions, and as described in this work, in bio-amplifiers applications, important functions are assigned to each of these regions. 0.13 μm 8HP technology from GlobalFoundries was chosen as the target technology for the prototypes, because of its versatility. Due to the very-low current of pseudo-resistors, a circuit for indirect resistance measurement was proposed and applied. The fabricated devices presented resistances over 1 teraohm and preserved both the linear and the exponential operation regions, proving that they are well suited for bio-amplifier applications.", "title": "" }, { "docid": "0034edb604e5196b18c550353ffe9ea9", "text": "As the body of research on abusive language detection and analysis grows, there is a need for critical consideration of the relationships between different subtasks that have been grouped under this label. Based on work on hate speech, cyberbullying, and online abuse we propose a typology that captures central similarities and differences between subtasks and we discuss its implications for data annotation and feature construction. 
We emphasize the practical actions that can be taken by researchers to best approach their abusive language detection subtask of interest.", "title": "" }, { "docid": "3cc97542631d734d8014abfbef652c79", "text": "Internet exchange points (IXPs) are an important ingredient of the Internet AS-level ecosystem - a logical fabric of the Internet made up of about 30,000 ASes and their mutual business relationships whose primary purpose is to control and manage the flow of traffic. Despite the IXPs' critical role in this fabric, little is known about them in terms of their peering matrices (i.e., who peers with whom at which IXP) and corresponding traffic matrices (i.e., how much traffic do the different ASes that peer at an IXP exchange with one another). In this paper, we report on an Internet-wide traceroute study that was specifically designed to shed light on the unknown IXP-specific peering matrices and involves targeted traceroutes from publicly available and geographically dispersed vantage points. Based on our method, we were able to discover and validate the existence of about 44K IXP-specific peering links - nearly 18K more links than were previously known. In the process, we also classified all known IXPs depending on the type of information required to detect them. Moreover, in view of the currently used inferred AS-level maps of the Internet that are known to miss a significant portion of the actual AS relationships of the peer-to-peer type, our study provides a new method for augmenting these maps with IXP-related peering links in a systematic and informed manner.", "title": "" }, { "docid": "eb29f0094237da86af1df56735e310ab", "text": "INTRODUCTION\nTemporary skeletal anchorage devices now offer the possibility of closing anterior open bites and decreasing anterior face height by intruding maxillary posterior teeth, but data for treatment outcomes are lacking. This article presents outcomes and posttreatment changes for consecutive patients treated with a standardized technique.\n\n\nMETHODS\nThe sample included 33 consecutive patients who had intrusion of maxillary posterior teeth with a maxillary occlusal splint and nickel-titanium coil springs to temporary anchorage devices in the zygomatic buttress area, buccal and apical to the maxillary molars. Of this group, 30 had adequate cephalograms available for the period of treatment, 27 had cephalograms including 1-year posttreatment, and 25 had cephalograms from 2 years or longer.\n\n\nRESULTS\nDuring splint therapy, the mean molar intrusion was 2.3 mm. The mean decrease in anterior face height was 1.6 mm, less than expected because of a 0.6-mm mean eruption of the mandibular molars. During the postintrusion orthodontics, the mean change in maxillary molar position was a 0.2-mm extrusion, and there was a mean 0.5-mm increase in face height. Positive overbite was maintained in all patients, with a slight elongation (<2 mm) of the incisors contributing to this. During the 1 year of posttreatment retention, the mean changes were a further eruption of 0.5 mm of the maxillary molars, whereas the mandibular molars intruded by 0.6 mm, and there was a small decrease in anterior face height. Changes beyond 1 year posttreatment were small and attributable to growth rather than relapse in tooth positions.\n\n\nCONCLUSIONS\nIntrusion of the maxillary posterior teeth can give satisfactory correction of moderately severe anterior open bites, but 0.5 to 1.5 mm of reeruption of these teeth is likely to occur. 
Controlling the vertical position of the mandibular molars so that they do not erupt as the maxillary teeth are intruded is important in obtaining a decrease in face height.", "title": "" }, { "docid": "a5911891697a1b2a407f231cf0ad6c28", "text": "In this paper, a new control method for the parallel operation of inverters operating in an island grid or connected to an infinite bus is described. Frequency and voltage control, including mitigation of voltage harmonics, are achieved without the need for any common control circuitry or communication between inverters. Each inverter supplies a current that is the result of the voltage difference between a reference ac voltage source and the grid voltage across a virtual complex impedance. The reference ac voltage source is synchronized with the grid, with a phase shift, depending on the difference between rated and actual grid frequency. A detailed analysis shows that this approach has a superior behavior compared to existing methods, regarding the mitigation of voltage harmonics, short-circuit behavior and the effectiveness of the frequency and voltage control, as it takes the R to X line impedance ratio into account. Experiments show the behavior of the method for an inverter feeding a highly nonlinear load and during the connection of two parallel inverters in operation.", "title": "" }, { "docid": "91811c07f246e979401937aca9b66f7e", "text": "Extraction of complex head and hand movements along with their constantly changing shapes for recognition of sign language is considered a difficult problem in computer vision. This paper proposes the recognition of Indian sign language gestures using a powerful artificial intelligence tool, convolutional neural networks (CNN). Selfie mode continuous sign language video is the capture method used in this work, where a hearing-impaired person can operate the SLR mobile application independently. Due to non-availability of datasets on mobile selfie sign language, we initiated to create the dataset with five different subjects performing 200 signs in 5 different viewing angles under various background environments. Each sign occupied for 60 frames or images in a video. CNN training is performed with 3 different sample sizes, each consisting of multiple sets of subjects and viewing angles. The remaining 2 samples are used for testing the trained CNN. Different CNN architectures were designed and tested with our selfie sign language data to obtain better accuracy in recognition. We achieved 92.88% recognition rate compared to other classifier models reported on the same dataset.", "title": "" }, { "docid": "1d8667d40c6e6cd5881cf4fa0b788f10", "text": "While recent neural encoder-decoder models have shown great promise in modeling open-domain conversations, they often generate dull and generic responses. Unlike past work that has focused on diversifying the output of the decoder at word-level to alleviate this problem, we present a novel framework based on conditional variational autoencoders that captures the discourse-level diversity in the encoder. Our model uses latent variables to learn a distribution over potential conversational intents and generates diverse responses using only greedy decoders. We have further developed a novel variant that is integrated with linguistic prior knowledge for better performance. Finally, the training procedure is improved by introducing a bag-of-word loss. 
Our proposed models have been validated to generate significantly more diverse responses than baseline approaches and exhibit competence in discourse-level decision-making.1", "title": "" }, { "docid": "360389616377a67ee206cd4ede5c77d6", "text": "We present a wrist worn medical monitoring computer designed to free high risk patient from the constraints of stationary monitoring equipment. The system combines complex medical monitoring, data analysis and communication capabilities in a truly wearable watchlike form. The paper summarizes the functionality, architecture and implementation of the system.", "title": "" }, { "docid": "68278896a61e13705e5ffb113487cceb", "text": "Universal Language Model for Fine-tuning [6] (ULMFiT) is one of the first NLP methods for efficient inductive transfer learning. Unsupervised pretraining results in improvements on many NLP tasks for English. In this paper, we describe a new method that uses subword tokenization to adapt ULMFiT to languages with high inflection. Our approach results in a new state-of-the-art for the Polish language, taking first place in Task 3 of PolEval’18. After further training, our final model outperformed the second best model by 35%. We have open-sourced our pretrained models and code.", "title": "" }, { "docid": "e299966eded9f65f6446b3cd7ab41f49", "text": "BACKGROUND Asthma is the most common chronic pulmonary disease during pregnancy. Several previous reports have documented reversible electrocardiographic changes during severe acute asthma attacks, including tachycardia, P pulmonale, right bundle branch block, right axis deviation, and ST segment and T wave abnormalities. CASE REPORT We present the case of a pregnant patient with asthma exacerbation in which acute bronchospasm caused S1Q3T3 abnormality on an electrocardiogram (ECG). The complete workup of ECG findings of S1Q3T3 was negative and correlated with bronchospasm. The S1Q3T3 electrocardiographic abnormality can be seen in acute bronchospasm in pregnant women. The other causes like pulmonary embolism, pneumothorax, acute lung disease, cor pulmonale, and left posterior fascicular block were excluded. CONCLUSIONS Asthma exacerbations are of considerable concern during pregnancy due to their adverse effect on the fetus, and optimization of asthma treatment during pregnancy is vital for achieving good outcomes. Prompt recognition of electrocardiographic abnormality and early treatment can prevent adverse perinatal outcomes.", "title": "" }, { "docid": "1d0f73a465399421b86bc6cf470d70dc", "text": "INTRODUCTION\nThis study was initiated to determine the psychometric properties of the Smart Phone Addiction Scale (SAS) by translating and validating this scale into the Malay language (SAS-M), which is the main language spoken in Malaysia. This study can distinguish smart phone and internet addiction among multi-ethnic Malaysian medical students. In addition, the reliability and validity of the SAS was also demonstrated.\n\n\nMATERIALS AND METHODS\nA total of 228 participants were selected between August 2014 and September 2014 to complete a set of questionnaires, including the SAS and the modified Kimberly Young Internet addiction test (IAT) in the Malay language.\n\n\nRESULTS\nThere were 99 males and 129 females with ages ranging from 19 to 22 years old (21.7±1.1) included in this study. Descriptive and factor analyses, intra-class coefficients, t-tests and correlation analyses were conducted to verify the reliability and validity of the SAS. 
Bartlett's test of sphericity was significant (p <0.01), and the Kaiser-Mayer-Olkin measure of sampling adequacy for the SAS-M was 0.92, indicating meritoriously that the factor analysis was appropriate. The internal consistency and concurrent validity of the SAS-M were verified (Cronbach's alpha = 0.94). All of the subscales of the SAS-M, except for positive anticipation, were significantly related to the Malay version of the IAT.\n\n\nCONCLUSIONS\nThis study developed the first smart phone addiction scale among medical students. This scale was shown to be reliable and valid in the Malay language.", "title": "" }, { "docid": "1e8caa9f0a189bafebd65df092f918bc", "text": "For several decades, the role of hormone-replacement therapy (HRT) has been debated. Early observational data on HRT showed many benefits, including a reduction in coronary heart disease (CHD) and mortality. More recently, randomized trials, including the Women's Health Initiative (WHI), studying mostly women many years after the the onset of menopause, showed no such benefit and, indeed, an increased risk of CHD and breast cancer, which led to an abrupt decrease in the use of HRT. Subsequent reanalyzes of data from the WHI with age stratification, newer randomized and observational data and several meta-analyses now consistently show reductions in CHD and mortality when HRT is initiated soon after menopause. HRT also significantly decreases the incidence of various symptoms of menopause and the risk of osteoporotic fractures, and improves quality of life. In younger healthy women (aged 50–60 years), the risk–benefit balance is positive for using HRT, with risks considered rare. As no validated primary prevention strategies are available for younger women (<60 years of age), other than lifestyle management, some consideration might be given to HRT as a prevention strategy as treatment can reduce CHD and all-cause mortality. Although HRT should be primarily oestrogen-based, no particular HRT regimen can be advocated.", "title": "" }, { "docid": "59565e9113e5a34ec7097c803dfb8cac", "text": "Web apps are cheaper to develop and deploy than native apps, but can they match the native user experience?", "title": "" }, { "docid": "db047842157a05f77f2967ba983c5641", "text": "We present a learning algorithm for neural networks, called Alopex. Instead of error gradient, Alopex uses local correlations between changes in individual weights and changes in the global error measure. The algorithm does not make any assumptions about transfer functions of individual neurons, and does not explicitly depend on the functional form of the error measure. Hence, it can be used in networks with arbitrary transfer functions and for minimizing a large class of error measures. The learning algorithm is the same for feedforward and recurrent networks. All the weights in a network are updated simultaneously, using only local computations. This allows complete parallelization of the algorithm. The algorithm is stochastic and it uses a temperature parameter in a manner similar to that in simulated annealing. A heuristic annealing schedule is presented that is effective in finding global minima of error surfaces. In this paper, we report extensive simulation studies illustrating these advantages and show that learning times are comparable to those for standard gradient descent methods. Feedforward networks trained with Alopex are used to solve the MONK's problems and symmetry problems. 
Recurrent networks trained with the same algorithm are used for solving temporal XOR problems. Scaling properties of the algorithm are demonstrated using encoder problems of different sizes and advantages of appropriate error measures are illustrated using a variety of problems.", "title": "" }, { "docid": "ae74b0befa2da2aeb2d831aac0bef456", "text": "The central purpose of this survey is to provide readers an insight into the recent advances and challenges in on-line active learning. Active learning has attracted the data mining and machine learning community since around 20 years. This is because it served for important purposes to increase practical applicability of machine learning techniques, such as (i) to reduce annotation and measurement costs for operators and measurement equipments, (ii) to reduce manual labelling effort for experts and (iii) to reduce computation time for model training. Almost all of the current techniques focus on the classical pool-based approach, which is off-line by nature as iterating over a pool of (unlabelled) reference samples a multiple times to choose the most promising ones for improving the performance of the classifiers. This is achieved by (time-intensive) re-training cycles on all labelled samples available so far. For the on-line, stream mining case, the challenge is that the sample selection strategy has to operate in a fast, ideally single-pass manner. Some first approaches have been proposed during the last decade (starting from around 2005) with the usage of machine learning (ML) oriented incremental classifiers, which are able to update their parameters based on selected samples, but not their structures. Since 2012, on-line active learning concepts have been proposed in connection with the paradigm of evolving models, which are able to expand their knowledge into feature space regions so far unexplored. This opened the possibility to address a particular type of uncertainty, namely that one which stems from a significant novelty content in streams, as, e.g., caused by drifts, new operation modes, changing system behaviors or non-stationary environments. We will provide an overview about the concepts and techniques for sample selection and active learning within these two principal major research lines (incremental ML models versus evolving systems), a comparison of their essential characteristics and properties (raising some advantages and disadvantages), and a study on possible evaluation techniques for them. We conclude with an overview of real-world application examples where various online AL approaches have been already successfully applied in order to significantly reduce user’s interaction efforts and costs for model updates. Preprint submitted to Information Sciences 27 June 2017", "title": "" } ]
scidocsrr
3481331484ea0810f920e6fb8064b944
Behind Phishing: An Examination of Phisher Modi Operandi
[ { "docid": "b60a3560be02f8bab648291131428b22", "text": "There are currently dozens of freely available tools to combat phishing and other web-based scams, many of which are web browser extensions that warn users when they are browsing a suspected phishing site. We developed an automated test bed for testing antiphishing tools. We used 200 verified phishing URLs from two sources and 516 legitimate URLs to test the effectiveness of 10 popular anti-phishing tools. Only one tool was able to consistently identify more than 90% of phishing URLs correctly; however, it also incorrectly identified 42% of legitimate URLs as phish. The performance of the other tools varied considerably depending on the source of the phishing URLs. Of these remaining tools, only one correctly identified over 60% of phishing URLs from both sources. Performance also changed significantly depending on the freshness of the phishing URLs tested. Thus we demonstrate that the source of phishing URLs and the freshness of the URLs tested can significantly impact the results of anti-phishing tool testing. We also demonstrate that many of the tools we tested were vulnerable to simple exploits. In this paper we describe our anti-phishing tool test bed, summarize our findings, and offer observations about the effectiveness of these tools as well as ways they might be improved.", "title": "" }, { "docid": "00410fcb0faa85d5423ccf0a7cc2f727", "text": "Phishing is form of identity theft that combines social engineering techniques and sophisticated attack vectors to harvest financial information from unsuspecting consumers. Often a phisher tries to lure her victim into clicking a URL pointing to a rogue page. In this paper, we focus on studying the structure of URLs employed in various phishing attacks. We find that it is often possible to tell whether or not a URL belongs to a phishing attack without requiring any knowledge of the corresponding page data. We describe several features that can be used to distinguish a phishing URL from a benign one. These features are used to model a logistic regression filter that is efficient and has a high accuracy. We use this filter to perform thorough measurements on several million URLs and quantify the prevalence of phishing on the Internet today", "title": "" }, { "docid": "c89b740ec1d752415eaea873a1bbe55d", "text": "Spam filters often use the reputation of an IP address (or IP address range) to classify email senders. This approach worked well when most spam originated from senders with fixed IP addresses, but spam today is also sent from IP addresses for which blacklist maintainers have outdated or inaccurate information (or no information at all). Spam campaigns also involve many senders, reducing the amount of spam any particular IP address sends to a single domain; this method allows spammers to stay \"under the radar\". The dynamism of any particular IP address begs for blacklisting techniques that automatically adapt as the senders of spam change.\n This paper presents SpamTracker, a spam filtering system that uses a new technique called behavioral blacklisting to classify email senders based on their sending behavior rather than their identity. Spammers cannot evade SpamTracker merely by using \"fresh\" IP addresses because blacklisting decisions are based on sending patterns, which tend to remain more invariant. SpamTracker uses fast clustering algorithms that react quickly to changes in sending patterns. 
We evaluate SpamTracker's ability to classify spammers using email logs for over 115 email domains; we find that SpamTracker can correctly classify many spammers missed by current filtering techniques. Although our current datasets prevent us from confirming SpamTracker's ability to completely distinguish spammers from legitimate senders, our evaluation shows that SpamTracker can identify a significant fraction of spammers that current IP-based blacklists miss. SpamTracker's ability to identify spammers before existing blacklists suggests that it can be used in conjunction with existing techniques (e.g., as an input to greylisting). SpamTracker is inherently distributed and can be easily replicated; incorporating it into existing email filtering infrastructures requires only small modifications to mail server configurations.", "title": "" } ]
[ { "docid": "fbcaba091a407d2bd831d3520577cf27", "text": "Studying a software project by mining data from a single repository has been a very active research field in software engineering during the last years. However, few efforts have been devoted to perform studies by integrating data from various repositories, with different kinds of information, which would, for instance, track the different activities of developers. One of the main problems of these multi-repository studies is the different identities that developers use when they interact with different tools in different contexts. This makes them appear as different entities when data is mined from different repositories (and in some cases, even from a single one). In this paper we propose an approach, based on the application of heuristics, to identify the many identities of developers in such cases, and a data structure for allowing both the anonymized distribution of information, and the tracking of identities for verification purposes. The methodology will be presented in general, and applied to the GNOME project as a case example. Privacy issues and partial merging with new data sources will also be considered and discussed.", "title": "" }, { "docid": "9b7b5d50a6578342f230b291bfeea443", "text": "Ecological systems are generally considered among the most complex because they are characterized by a large number of diverse components, nonlinear interactions, scale multiplicity, and spatial heterogeneity. Hierarchy theory, as well as empirical evidence, suggests that complexity often takes the form of modularity in structure and functionality. Therefore, a hierarchical perspective can be essential to understanding complex ecological systems. But, how can such hierarchical approach help us with modeling spatially heterogeneous, nonlinear dynamic systems like landscapes, be they natural or human-dominated? In this paper, we present a spatially explicit hierarchical modeling approach to studying the patterns and processes of heterogeneous landscapes. We first discuss the theoretical basis for the modeling approach—the hierarchical patch dynamics (HPD) paradigm and the scaling ladder strategy, and then describe the general structure of a hierarchical urban landscape model (HPDM-PHX) which is developed using this modeling approach. In addition, we introduce a hierarchical patch dynamics modeling platform (HPD-MP), a software package that is designed to facilitate the development of spatial hierarchical models. We then illustrate the utility of HPD-MP through two examples: a hierarchical cellular automata model of land use change and a spatial multi-species population dynamics model. © 2002 Elsevier Science B.V. All rights reserved.", "title": "" }, { "docid": "cc6267d02ecbb1d2679ac30ee5b56d82", "text": "We established polymerase chain reaction-restriction fragment length polymorphism (PCR-RFLP) and diagnostic PCR based on cytochrome C oxidase subunit I (COI) barcodes of Bungarus multicinctus, genuine Jinqian Baihua She (JBS), and adulterant snake species. The PCR-RFLP system utilizes the specific restriction sites of SpeI and BstEII in the COI sequence of B. multicinctus to allow its cleavage into 3 fragments (120 bp, 230 bp, and 340 bp); the COI sequences of the adulterants do not contain these restriction sites and therefore remained intact after digestion with SpeI and BstEII (except for that of Zaocys dhumnades, which could be cleaved into a 120 bp and a 570 bp fragment). 
For diagnostic PCR, a pair of species-specific primers (COI37 and COI337) was designed to amplify a specific 300 bp amplicon from the genomic DNA of B. multicinctus; no such amplicons were found in other allied species. We tested the two methods using 11 commercial JBS samples, and the results demonstrated that barcode-based PCR-RFLP and diagnostic PCR both allowed effective and accurate authentication of JBS.", "title": "" }, { "docid": "ce972f404cbf00e1170ad8fa3c37718a", "text": "Pouring a specific amount of liquid is a challenging task. In this paper we develop methods for robots to use visual feedback to perform closed-loop control for pouring liquids. We propose both a model-based and a model-free method utilizing deep learning for estimating the volume of liquid in a container. Our results show that the model-free method is better able to estimate the volume. We combine this with a simple PID controller to pour specific amounts of liquid, and show that the robot is able to achieve an average 38ml deviation from the target amount. To our knowledge, this is the first use of raw visual feedback to pour liquids in robotics.", "title": "" }, { "docid": "bb02c3a2c02cce6325fe542f006dde9c", "text": "In this paper, we argue for a theoretical separation of the free-energy principle from Helmholtzian accounts of the predictive brain. The free-energy principle is a theoretical framework capturing the imperative for biological self-organization in information-theoretic terms. The free-energy principle has typically been connected with a Bayesian theory of predictive coding, and the latter is often taken to support a Helmholtzian theory of perception as unconscious inference. If our interpretation is right, however, a Helmholtzian view of perception is incompatible with Bayesian predictive coding under the free-energy principle. We argue that the free energy principle and the ecological and enactive approach to mind and life make for a much happier marriage of ideas. We make our argument based on three points. First we argue that the free energy principle applies to the whole animal–environment system, and not only to the brain. Second, we show that active inference, as understood by the free-energy principle, is incompatible with unconscious inference understood as analagous to scientific hypothesis-testing, the main tenet of a Helmholtzian view of perception. Third, we argue that the notion of inference at work in Bayesian predictive coding under the free-energy principle is too weak to support a Helmholtzian theory of perception. Taken together these points imply that the free energy principle is best understood in ecological and enactive terms set out in this paper.", "title": "" }, { "docid": "b6f0a0e2fd96f2be89e28c500b295b59", "text": "1: function BaB(net, domain, ) 2: global ub← inf 3: global lb← − inf 4: doms← [(global lb, domain)] 5: while global ub− global lb > do 6: ( , dom)← pick out(doms) 7: [subdom 1, . . . , subdom s]← split(dom) 8: for i = 1 . . . s do 9: dom ub← compute UB(net, subdom i) 10: dom lb← compute LB(net, subdom i) 11: if dom ub < global ub then 12: global ub← dom ub 13: prune domains(doms, global ub) 14: end if 15: if dom lb < global ub then 16: domains.append((dom lb, subdom i)) 17: end if 18: end for 19: global lb← min{lb | (lb, dom) ∈ doms} 20: end while 21: return global ub 22: end function", "title": "" }, { "docid": "922ce107f9d88b02483fd6b65109d466", "text": "With the growing popularity of electronic documents, replication can occur for many reasons. 
People may copy text segments from various sources and make modifications. In this paper, we study the problem of local similarity search to find partially replicated text. Unlike existing studies on similarity search which find entirely duplicated documents, our target is to identify documents that approximately share a pair of sliding windows which differ by no more than τ tokens. Our problem is technically challenging because for sliding windows the tokens to be indexed are less selective than entire documents, rendering set similarity join-based algorithms less efficient. Our proposed method is based on enumerating token combinations to obtain signatures with high selectivity. In order to strike a balance between signature and candidate generation, we partition the token universe and for different partitions we generate combinations composed of different numbers of tokens. A cost-aware algorithm is devised to find a good partitioning of the token universe. We also propose to leverage the overlap between adjacent windows to share computation and thus speed up query processing. In addition, we develop the techniques to support the large thresholds. Experiments on real datasets demonstrate the efficiency of our method against alternative solutions.", "title": "" }, { "docid": "dd51cc2138760f1dcdce6e150cabda19", "text": "Breast cancer is the most common cancer in women worldwide. The most common screening technology is mammography. To reduce the cost and workload of radiologists, we propose a computer aided detection approach for classifying and localizing calcifications and masses in mammogram images. To improve on conventional approaches, we apply deep convolutional neural networks (CNN) for automatic feature learning and classifier building. In computer-aided mammography, deep CNN classifiers cannot be trained directly on full mammogram images because of the loss of image details from resizing at input layers. Instead, our classifiers are trained on labelled image patches and then adapted to work on full mammogram images for localizing the abnormalities. State-of-the-art deep convolutional neural networks are compared on their performance of classifying the abnormalities. Experimental results indicate that VGGNet receives the best overall accuracy at 92.53% in classifications. For localizing abnormalities, ResNet is selected for computing class activation maps because it is ready to be deployed without structural change or further training. Our approach demonstrates that deep convolutional neural network classifiers have remarkable localization capabilities despite no supervision on the location of abnormalities is provided.", "title": "" }, { "docid": "c222ac75347638f3b6182dc03c337b66", "text": "Feature selection is an important task for mining useful information from datasets in high dimensions, a typical characteristic of biology domains such as microarray datasets. In this paper, we present an altogether new perspective on feature selection. We pose feature selection as a one class SVM problem of modeling the space in which features can be represented. We show that finding the support vectors in our one class formulation is tantamount to performing feature selection. Further, we show that our formulation reduces to the standard QPFS formulation in the dual problem space. 
Not only our formulation gives new insights into the task of feature selection, solving it directly in the primal space can give significant computational gains when the number of the samples is much smaller than the number of features. We validate our thesis by experimenting on three different microarray datasets.", "title": "" }, { "docid": "9647b3278ee0ad7f8cb1c40c2dbe1331", "text": "I want to describe an idea which is related to other things that were suggested in the colloquium, though my approach will be quite different. The basic theme of these suggestions have been to try to get rid of the continuum and build up physical theory from discreteness. The most obvious place in which the continuum comes into physics is the structure of space-time. But, apparently independently of this, there is also another place in which the continuum is built into present physical theory. This is in quantum theory, where there is the superposition law: if you have two states, you’re supposed to be able to form any linear combination of these two states. These are complex linear combinations, so again you have a continuum coming in—namely the two-dimensional complex continuum— in a fundamental way. My basic idea is to try and build up both space-time and quantum mechanics simultaneously—from combinatorial principles—but not (at least in the first instance) to try and change physical theory. In the first place it is a reformulation, though ultimately, perhaps, there will be some changes. Different things will suggest themselves in a reformulated theory, than in the original formulation. One scarcely wants to take every concept in existing theory and try to make it combinatorial: there are too many things which look continuous in existing theory. And to try to eliminate the continuum by approximating it by some discrete structure would be to change the theory. The idea, instead, is to concentrate only on things which, in fact, are discrete in existing theory and try and use them as primary concepts—then to build up other things using these discrete primary concepts as the basic building blocks. Continuous concepts could emerge in a limit, when we take more and more complicated systems. The most obvious physical concept that one has to start with, where quantum mechanics says something is discrete, and which is connected with the structure of space-time in a very intimate way, is in angular momentum. The idea here, then, is to start with the concept of angular momentum— here one has a discrete spectrum—and use the rules for combining angular", "title": "" }, { "docid": "995076c141cac21b9be4dcda872afefc", "text": "Argumentation mining and stance classification were recently introduced as interesting tasks in text mining. In this paper, a novel framework for argument tagging based on topic modeling is proposed. Unlike other machine learning approaches for argument tagging which often require large set of labeled data, the proposed model is minimally supervised and merely a one-to-one mapping between the pre-defined argument set and the extracted topics is required. These extracted arguments are subsequently exploited for stance classification. Additionally, a manuallyannotated corpus for stance classification and argument tagging of online news comments is introduced and made available. Experiments on our collected corpus demonstrate the benefits of using topic-modeling for argument tagging. 
We show that using Non-Negative Matrix Factorization instead of Latent Dirichlet Allocation achieves better results for argument classification, close to the results of a supervised classifier. Furthermore, the statistical model that leverages automatically-extracted arguments as features for stance classification shows promising results.", "title": "" }, { "docid": "762a9c96acdadb1d133102cd014c3c95", "text": "In the domain of multiprocessor real-time systems, there has been a wealth of recent work on scheduling, but relatively little work on the equally-important topic of synchronization. When synchronizing accesses to shared resources, four basic options exist: lock-free execution, wait-free execution, spin- based locking, and suspension-based locking. To our knowledge, no empirical multiprocessor-based evaluation of these basic techniques that focuses on real-time systems has ever been conducted before. In this paper, we present such an evaluation and report on our efforts to incorporate synchronization support in the testbed used in this effort.", "title": "" }, { "docid": "96f42b3a653964cffa15d9b3bebf0086", "text": "The brain processes information through many layers of neurons. This deep architecture is representationally powerful1,2,3,4, but it complicates learning by making it hard to identify the responsible neurons when a mistake is made1,5. In machine learning, the backpropagation algorithm1 assigns blame to a neuron by computing exactly how it contributed to an error. To do this, it multiplies error signals by matrices consisting of all the synaptic weights on the neuron’s axon and farther downstream. This operation requires a precisely choreographed transport of synaptic weight information, which is thought to be impossible in the brain1,6,7,8,9,10,11,12,13,14. Here we present a surprisingly simple algorithm for deep learning, which assigns blame by multiplying error signals by random synaptic weights. We show that a network can learn to extract useful information from signals sent through these random feedback connections. In essence, the network learns to learn. We demonstrate that this new mechanism performs as quickly and accurately as backpropagation on a variety of problems and describe the principles which underlie its function. Our demonstration provides a plausible basis for how a neuron can be adapted using error signals generated at distal locations in the brain, and thus dispels long-held assumptions about the algorithmic constraints on learning in neural circuits. 1 ar X iv :1 41 1. 02 47 v1 [ qbi o. N C ] 2 N ov 2 01 4 Networks in the brain compute via many layers of interconnected neurons15,16. To work properly neurons must adjust their synapses so that the network’s outputs are appropriate for its tasks. A longstanding mystery is how upstream synapses (e.g. the synapse between α and β in Fig. 1a) are adjusted on the basis of downstream errors (e.g. e in Fig. 1a). In artificial intelligence this problem is solved by an algorithm called backpropagation of error1. Backprop works well in real-world applications17,18,19, and networks trained with it can account for cell response properties in some areas of cortex20,21. But it is biologically implausible because it requires that neurons send each other precise information about large numbers of synaptic weights — i.e. it needs weight transport1,6,7,8,12,14,22 (Fig. 1a, b). Specifically, backprop multiplies error signals e by the matrix W T , the transpose of the forward synaptic connections, W (Fig. 1b). 
This implies that feedback is computed using knowledge of all the synaptic weights W in the forward path. For this reason, current theories of biological learning have turned to simpler schemes such as reinforcement learning23, and “shallow” mechanisms which use errors to adjust only the final layer of a network4,11. But reinforcement learning, which delivers the same reward signal to each neuron, is slow and scales poorly with network size5,13,24. And shallow mechanisms waste the representational power of deep networks3,4,25. Here we describe a new deep-learning algorithm that is as fast and accurate as backprop, but much simpler, avoiding all transport of synaptic weight information. This makes it a mechanism the brain could easily exploit. It is based on three insights: (i) The feedback weights need not be exactly W T . In fact, any matrix B will suffice, so long as on average,", "title": "" }, { "docid": "21130eded44790720e79a750ecdf3847", "text": "Enabled by Web 2.0 technologies social media provide an unparalleled platform for consumers to share their product experiences and opinions---through word-of-mouth (WOM) or consumer reviews. It has become increasingly important to understand how WOM content and metrics thereof are related to consumer purchases and product sales. By integrating network analysis with text sentiment mining techniques, we propose product comparison networks as a novel construct, computed from consumer product reviews. To test the validity of these product ranking measures, we conduct an empirical study based on a digital camera dataset from Amazon.com. The results demonstrate significant linkage between network-based measures and product sales, which is not fully captured by existing review measures such as numerical ratings. The findings provide important insights into the business impact of social media and user-generated content, an emerging problem in business intelligence research. From a managerial perspective, our results suggest that WOM in social media also constitutes a competitive landscape for firms to understand and manipulate.", "title": "" }, { "docid": "d01b73a497ef23b7627199fb6339ef9f", "text": "Unit tests are valuable as a source of up-to-date documentation as developers continuously changes them to reflect changes in the production code to keep an effective regression suite. Maintaining traceability links between unit tests and classes under test can help developers to comprehend parts of a system. In particular, unit tests show how parts of a system are executed and as such how they are supposed to be used. Moreover, the dependencies between unit tests and classes can be exploited to maintain the consistency during refactoring. Generally, such dependences are not explicitly maintained and they have to be recovered during software development. Some guidelines and naming conventions have been defined to describe the testing environment in order to easily identify related tests for a programming task. However, very often these guidelines are not followed making the identification of links between unit tests and classes a time-consuming task. Thus, automatic approaches to recover such links are needed. In this paper a traceability recovery approach based on Data Flow Analysis (DFA) is presented. In particular, the approach retrieves as tested classes all the classes that affect the result of the last assert statement in each method of the unit test class. 
The accuracy of the proposed method has been empirically evaluated on two systems, an open source system and an industrial system. As a benchmark, we compare the accuracy of the DFA-based approach with the accuracy of the previously used traceability recovery approaches, namely Naming Convention (NC) and Last Call Before Assert (LCBA) that seem to provide the most accurate results. The results show that the proposed approach is the most accurate method demonstrating the effectiveness of DFA. However, the case study also highlights the limitations of the experimented traceability recovery approaches, showing that detecting the class under test cannot be fully automated and some issues are still under study.", "title": "" }, { "docid": "356684bac2e5fecd903eb428dc5455f4", "text": "Social media expose millions of users every day to information campaigns - some emerging organically from grassroots activity, others sustained by advertising or other coordinated efforts. These campaigns contribute to the shaping of collective opinions. While most information campaigns are benign, some may be deployed for nefarious purposes, including terrorist propaganda, political astroturf, and financial market manipulation. It is therefore important to be able to detect whether a meme is being artificially promoted at the very moment it becomes wildly popular. This problem has important social implications and poses numerous technical challenges. As a first step, here we focus on discriminating between trending memes that are either organic or promoted by means of advertisement. The classification is not trivial: ads cause bursts of attention that can be easily mistaken for those of organic trends. We designed a machine learning framework to classify memes that have been labeled as trending on Twitter. After trending, we can rely on a large volume of activity data. Early detection, occurring immediately at trending time, is a more challenging problem due to the minimal volume of activity data that is available prior to trending. Our supervised learning framework exploits hundreds of time-varying features to capture changing network and diffusion patterns, content and sentiment information, timing signals, and user meta-data. We explore different methods for encoding feature time series. Using millions of tweets containing trending hashtags, we achieve 75% AUC score for early detection, increasing to above 95% after trending. We evaluate the robustness of the algorithms by introducing random temporal shifts on the trend time series. Feature selection analysis reveals that content cues provide consistently useful signals; user features are more informative for early detection, while network and timing features are more helpful once more data is available.", "title": "" }, { "docid": "80b0106e0efd946258034d7c9d866ebe", "text": "The marketing profession is being challenged to assess and communicate the value created by its actions on shareholder value. These demands create a need to translate marketing resource allocations and their performance consequences into financial and firm value effects. The objective of this paper is to integrate the existing knowledge on the impact of marketing on firm value. The authors first frame the important research questions on marketing and firm value and review the important investor response metrics and relevant analytical models, as they relate to marketing. 
The authors next summarize the empirical findings to date on how marketing creates shareholder value, including the impact of brand equity, customer equity, customer satisfaction, R&D, product quality and specific marketing-mix actions. In addition the authors review emerging findings on biases in investor response to marketing actions. The paper concludes by formulating an agenda for future research challenges in this emerging area.", "title": "" }, { "docid": "a3cb5c10747f21667ec525df93cc3f01", "text": "With the success of deep learning based approaches in tackling challenging problems in computer vision, a wide range of deep architectures have recently been proposed for the task of visual odometry (VO) estimation. Most of these proposed solutions rely on supervision, which requires the acquisition of precise ground-truth camera pose information, collected using expensive motion capture systems or high-precision IMU/GPS sensor rigs. In this work, we propose an unsupervised paradigm for deep visual odometry learning. We show that using a noisy teacher, which could be a standard VO pipeline, and by designing a loss term that enforces geometric consistency of the trajectory, we can train accurate deep models for VO that do not require ground-truth labels. We leverage geometry as a self-supervisory signal and propose \"Composite Transformation Constraints (CTCs)\", that automatically generate supervisory signals for training and enforce geometric consistency in the VO estimate. We also present a method of characterizing the uncertainty in VO estimates thus obtained. To evaluate our VO pipeline, we present exhaustive ablation studies that demonstrate the efficacy of end-to-end, self-supervised methodologies to train deep models for monocular VO. We show that leveraging concepts from geometry and incorporating them into the training of a recurrent neural network results in performance competitive to supervised deep VO methods.", "title": "" }, { "docid": "f00b0b00dffcae8f5f0bce8c17abc8b6", "text": "From a marketing communication point of view, new digital marketing channels, such as Internet and mobile phones, are considered to be powerful opportunities to reach consumers by allowing interactivity and personalisation of the content and context of the message. The increased number of media has, however, led to a harder competition for consumers’ attention. Given the potential of digital media it is interesting to understand how consumers are going to relate to mobile marketing efforts. The purpose of the paper was to explore consumers’ responsiveness to mobile marketing communication. With mobile marketing we refer to the use of SMS and MMS as marketing media in push campaigns. It is argued in the paper that consumer responsiveness is a function of personally perceived relevance of the marketing message as well as on the disturbance/acceptance of the context of receiving the message. A relevance/disturbance framework can thus measure the effectiveness of mobile marketing communication. An empirical study was conducted in Finland, where responsiveness to mobile marketing was benchmarked against e-mail communication. Findings from this study indicated that responsiveness to mobile marketing communication varies among consumers. Compared to traditional direct mail and commercial email communication, the responsiveness to mobile marketing was considerably lower. 
However, even if the majority of consumers showed low responsiveness to mobile marketing there were also consumers who welcome such messages.", "title": "" }, { "docid": "595a31e82d857cedecd098bf4c910e99", "text": "Human actions in video sequences are three-dimensional (3D) spatio-temporal signals characterizing both the visual appearance and motion dynamics of the involved humans and objects. Inspired by the success of convolutional neural networks (CNN) for image classification, recent attempts have been made to learn 3D CNNs for recognizing human actions in videos. However, partly due to the high complexity of training 3D convolution kernels and the need for large quantities of training videos, only limited success has been reported. This has triggered us to investigate in this paper a new deep architecture which can handle 3D signals more effectively. Specifically, we propose factorized spatio-temporal convolutional networks (FstCN) that factorize the original 3D convolution kernel learning as a sequential process of learning 2D spatial kernels in the lower layers (called spatial convolutional layers), followed by learning 1D temporal kernels in the upper layers (called temporal convolutional layers). We introduce a novel transformation and permutation operator to make factorization in FstCN possible. Moreover, to address the issue of sequence alignment, we propose an effective training and inference strategy based on sampling multiple video clips from a given action video sequence. We have tested FstCN on two commonly used benchmark datasets (UCF-101 and HMDB-51). Without using auxiliary training videos to boost the performance, FstCN outperforms existing CNN based methods and achieves comparable performance with a recent method that benefits from using auxiliary training videos.", "title": "" } ]
scidocsrr
af58162c117a6972bbfda4da439f4f19
A large scale exploratory analysis of software vulnerability life cycles
[ { "docid": "811c430ff9efd0f8a61ff40753f083d4", "text": "The Waikato Environment for Knowledge Analysis (Weka) is a comprehensive suite of Java class libraries that implement many state-of-the-art machine learning and data mining algorithms. Weka is freely available on the World-Wide Web and accompanies a new text on data mining [1] which documents and fully explains all the algorithms it contains. Applications written using the Weka class libraries can be run on any computer with a Web browsing capability; this allows users to apply machine learning techniques to their own data regardless of computer platform.", "title": "" } ]
[ { "docid": "6970acb72318375a5af6aa03ad634f7e", "text": "BACKGROUND\nMyopia is an important public health problem because it is common and is associated with increased risk for chorioretinal degeneration, retinal detachment, and other vision- threatening abnormalities. In animals, ocular elongation and myopia progression can be lessened with atropine treatment. This study provides information about progression of myopia and atropine therapy for myopia in humans.\n\n\nMETHODS\nA total of 214 residents of Olmsted County, Minnesota (118 girls and 96 boys, median age, 11 years; range 6 to 15 years) received atropine for myopia from 1967 through 1974. Control subjects were matched by age, sex, refractive error, and date of baseline examination to 194 of those receiving atropine. Duration of treatment with atropine ranged from 18 weeks to 11.5 years (median 3.5 years).\n\n\nRESULTS\nMedian followup from initial to last refraction in the atropine group (11.7 years) was similar to that in the control group (12.4 years). Photophobia and blurred vision were frequently reported, but no serious adverse effects were associated with atropine therapy. Mean myopia progression during atropine treatment adjusted for age and refractive error (0.05 diopters per year) was significantly less than that among control subjects (0.36 diopters per year)(P<.001). Final refractions standardized to the age of 20 years showed a greater mean level of myopia in the control group (3.78 diopters) than in the atropine group (2.79 diopters) (P<.001).\n\n\nCONCLUSIONS\nThe data support the view that atropine therapy is associated with decreased progression of myopia and that beneficial effects remain after treatment has been discontinued.", "title": "" }, { "docid": "26d0e97bbb14bc52b8dbb3c03522ac38", "text": "Intraocular injections of rhodamine and horseradish peroxidase in chameleon, labelled retrogradely neurons in the ventromedial tegmental region of the mesencephalon and the ventrolateral thalamus of the diencephalon. In both areas, staining was observed contralaterally to the injected eye. Labelling was occasionally observed in some rhombencephalic motor nuclei. These results indicate that chameleons, unlike other reptilian species, have two retinopetal nuclei.", "title": "" }, { "docid": "dddec8d72a4ed68ee47c0cc7f4f31dbd", "text": "Probabilistic topic modeling of text collections is a powerful tool for statistical text analysis. In this tutorial we introduce a novel non-Bayesian approach, called Additive Regularization of Topic Models. ARTM is free of redundant probabilistic assumptions and provides a simple inference for many combined and multi-objective topic models.", "title": "" }, { "docid": "b7dcd24f098965ff757b7ce5f183662b", "text": "We give an overview of a complex systems approach to large blackouts of electric power transmission systems caused by cascading failure. Instead of looking at the details of particular blackouts, we study the statistics and dynamics of series of blackouts with approximate global models. Blackout data from several countries suggest that the frequency of large blackouts is governed by a power law. The power law makes the risk of large blackouts consequential and is consistent with the power system being a complex system designed and operated near a critical point. Power system overall loading or stress relative to operating limits is a key factor affecting the risk of cascading failure. 
Power system blackout models and abstract models of cascading failure show critical points with power law behavior as load is increased. To explain why the power system is operated near these critical points and inspired by concepts from self-organized criticality, we suggest that power system operating margins evolve slowly to near a critical point and confirm this idea using a power system model. The slow evolution of the power system is driven by a steady increase in electric loading, economic pressures to maximize the use of the grid, and the engineering responses to blackouts that upgrade the system. Mitigation of blackout risk should account for dynamical effects in complex self-organized critical systems. For example, some methods of suppressing small blackouts could ultimately increase the risk of large blackouts.", "title": "" }, { "docid": "18ada6a64572d11cf186e4497fd81f43", "text": "The task of ranking is crucial in information retrieval. With the advent of the Big Data age, new challenges have arisen for the field. Deep neural architectures are capable of learning complex functions, and capture the underlying representation of the data more effectively. In this work, ranking is reduced to a classification problem and deep neural architectures are used for this task. A dynamic, pointwise approach is used to learn a ranking function, which outperforms the existing ranking algorithms. We introduce three architectures for the task, our primary objective being to identify architectures which produce good results, and to provide intuitions behind their usefulness. The inputs to the models are hand-crafted features provided in the datasets. The outputs are relevance levels. Further, we also explore the idea as to whether the semantic grouping of handcrafted features aids deep learning models in our task.", "title": "" }, { "docid": "55749da1639911c33ba86a2d7ddae0d2", "text": "Artificial intelligence (AI) tools, such as expert system, fuzzy logic, and neural network are expected to usher a new era in power electronics and motion control in the coming decades. Although these technologies have advanced significantly in recent years and have found wide applications, they have hardly touched the power electronics and mackine drives area. The paper describes these Ai tools and their application in the area of power electronics and motion control. The body of the paper is subdivided into three sections which describe, respectively, the principles and applications of expert system, fuzzy logic, and neural network. The theoretical portion of each topic is of direct relevance to the application of power electronics. The example applications in the paper are taken from the published literature. Hopefully, the readers will be able to formulate new applications from these examples.", "title": "" }, { "docid": "0b70a4a44a26ff9218224727fbba823c", "text": "Recently, DNN model compression based on network architecture design, e.g., SqueezeNet, attracted a lot attention. No accuracy drop on image classification is observed on these extremely compact networks, compared to well-known models. An emerging question, however, is whether these model compression techniques hurt DNNs learning ability other than classifying images on a single dataset. Our preliminary experiment shows that these compression methods could degrade domain adaptation (DA) ability, though the classification performance is preserved. Therefore, we propose a new compact network architecture and unsupervised DA method in this paper. 
The DNN is built on a new basic module Conv-M which provides more diverse feature extractors without significantly increasing parameters. The unified framework of our DA method will simultaneously learn invariance across domains, reduce divergence of feature representations, and adapt label prediction. Our DNN has 4.1M parameters, which is only 6.7% of AlexNet or 59% of GoogLeNet. Experiments show that our DNN obtains GoogLeNet-level accuracy both on classification and DA, and our DA method slightly outperforms previous competitive ones. Put all together, our DA strategy based on our DNN achieves state-of-the-art on sixteen of total eighteen DA tasks on popular Office-31 and Office-Caltech datasets.", "title": "" }, { "docid": "9ba3c67136d573c4a10b133a2391d8bc", "text": "Modern text collections often contain large documents that span several subject areas. Such documents are problematic for relevance feedback since inappropriate terms can easi 1y be chosen. This study explores the highly effective approach of feeding back passages of large documents. A less-expensive method that discards long documents is also reviewed and found to be effective if there are enough relevant documents. A hybrid approach that feeds back short documents and passages of long documents may be the best compromise.", "title": "" }, { "docid": "fd2abd6749eb7a85f3480ae9b4cbefa6", "text": "We examine the current performance and future demands of interconnects to and on silicon chips. We compare electrical and optical interconnects and project the requirements for optoelectronic and optical devices if optics is to solve the major problems of interconnects for future high-performance silicon chips. Optics has potential benefits in interconnect density, energy, and timing. The necessity of low interconnect energy imposes low limits especially on the energy of the optical output devices, with a ~ 10 fJ/bit device energy target emerging. Some optical modulators and radical laser approaches may meet this requirement. Low (e.g., a few femtofarads or less) photodetector capacitance is important. Very compact wavelength splitters are essential for connecting the information to fibers. Dense waveguides are necessary on-chip or on boards for guided wave optical approaches, especially if very high clock rates or dense wavelength-division multiplexing (WDM) is to be avoided. Free-space optics potentially can handle the necessary bandwidths even without fast clocks or WDM. With such technology, however, optics may enable the continued scaling of interconnect capacity required by future chips.", "title": "" }, { "docid": "545509f9e3aa65921a7d6faa41247ae6", "text": "BACKGROUND\nPenicillins inhibit cell wall synthesis; therefore, Helicobacter pylori must be dividing for this class of antibiotics to be effective in eradication therapy. Identifying growth responses to varying medium pH may allow design of more effective treatment regimens.\n\n\nAIM\nTo determine the effects of acidity on bacterial growth and the bactericidal efficacy of ampicillin.\n\n\nMETHODS\nH. pylori were incubated in dialysis chambers suspended in 1.5-L of media at various pHs with 5 mM urea, with or without ampicillin, for 4, 8 or 16 h, thus mimicking unbuffered gastric juice. Changes in gene expression, viability and survival were determined.\n\n\nRESULTS\nAt pH 3.0, but not at pH 4.5 or 7.4, there was decreased expression of ~400 genes, including many cell envelope biosynthesis, cell division and penicillin-binding protein genes. 
Ampicillin was bactericidal at pH 4.5 and 7.4, but not at pH 3.0.\n\n\nCONCLUSIONS\nAmpicillin is bactericidal at pH 4.5 and 7.4, but not at pH 3.0, due to decreased expression of cell envelope and division genes with loss of cell division at pH 3.0. Therefore, at pH 3.0, the likely pH at the gastric surface, the bacteria are nondividing and persist with ampicillin treatment. A more effective inhibitor of acid secretion that maintains gastric pH near neutrality for 24 h/day should enhance the efficacy of amoxicillin, improving triple therapy and likely even allowing dual amoxicillin-based therapy for H. pylori eradication.", "title": "" }, { "docid": "e2b8dd31dad42e82509a8df6cf21df11", "text": "Recent experiments indicate the need for revision of a model of spatial memory consisting of viewpoint-specific representations, egocentric spatial updating and a geometric module for reorientation. Instead, it appears that both egocentric and allocentric representations exist in parallel, and combine to support behavior according to the task. Current research indicates complementary roles for these representations, with increasing dependence on allocentric representations with the amount of movement between presentation and retrieval, the number of objects remembered, and the size, familiarity and intrinsic structure of the environment. Identifying the neuronal mechanisms and functional roles of each type of representation, and of their interactions, promises to provide a framework for investigation of the organization of human memory more generally.", "title": "" }, { "docid": "c19863ef5fa4979f288763837e887a7c", "text": "Decentralized cryptocurrencies have pushed deployments of distributed consensus to more stringent environments than ever before. Most existing protocols rely on proofs-of-work which require expensive computational puzzles to enforce, imprecisely speaking, “one vote per unit of computation”. The enormous amount of energy wasted by these protocols has been a topic of central debate, and well-known cryptocurrencies have announced it a top priority to alternative paradigms. Among the proposed alternative solutions, proofs-of-stake protocols have been of particular interest, where roughly speaking, the idea is to enforce “one vote per unit of stake”. Although the community have rushed to propose numerous candidates for proofs-of-stake, no existing protocol has offered formal proofs of security, which we believe to be a critical, indispensible ingredient of a distributed consensus protocol, particularly one that is to underly a high-value cryptocurrency system. In this work, we seek to address the following basic questions: • What kind of functionalities and robustness requirements should a consensus candidate offer to be suitable in a proof-of-stake application? • Can we design a provably secure protocol that satisfies these requirements? To the best of our knowledge, we are the first to formally articulate a set of requirements for consensus candidates for proofs-of-stake. We argue that any consensus protocol satisfying these properties can be used for proofs-of-stake, as long as money does not switch hands too quickly. Moreover, we provide the first consensus candidate that provably satisfies the desired robustness properties.", "title": "" }, { "docid": "1f7454de77b2f3f489c12a8e836ceb43", "text": "Pornography use among emerging adults in the USA has increased in recent decades, as has the acceptance of such consumption. 
While previous research has linked pornography use to both positive and negative outcomes in emerging adult populations, few studies have investigated how attitudes toward pornography may alter these associations, or how examining pornography use together with other sexual behaviours may offer unique insights into the outcomes associated with pornography use. Using a sample of 792 emerging adults, the present study explored how the combined examination of pornography use, acceptance, and sexual behaviour within a relationship might offer insight into emerging adults' development. Results suggested clear gender differences in both pornography use and acceptance patterns. High male pornography use tended to be associated with high engagement in sex within a relationship and was associated with elevated risk-taking behaviours. High female pornography use was not associated with engagement in sexual behaviours within a relationship and was general associated with negative mental health outcomes.", "title": "" }, { "docid": "f3e63f3fb0ce0e74697e0a74867d9671", "text": "Convolutional Neural Networks (CNN) have been successfully applied to autonomous driving tasks, many in an end-to-end manner. Previous end-to-end steering control methods take an image or an image sequence as the input and directly predict the steering angle with CNN. Although single task learning on steering angles has reported good performances, the steering angle alone is not sufficient for vehicle control. In this work, we propose a multi-task learning framework to predict the steering angle and speed control simultaneously in an end-to-end manner. Since it is nontrivial to predict accurate speed values with only visual inputs, we first propose a network to predict discrete speed commands and steering angles with image sequences. Moreover, we propose a multi-modal multi-task network to predict speed values and steering angles by taking previous feedback speeds and visual recordings as inputs. Experiments are conducted on the public Udacity dataset and a newly collected SAIC dataset. Results show that the proposed model predicts steering angles and speed values accurately. Furthermore, we improve the failure data synthesis methods to solve the problem of error accumulation in real road tests.", "title": "" }, { "docid": "5b2fbfe1e9ceb9cb9e969df992ea1271", "text": "Distributed denial of service (DDoS) attacks continues to grow as a threat to organizations worldwide. From the first known attack in 1999 to the highly publicized Operation Ababil, the DDoS attacks have a history of flooding the victim network with an enormous number of packets, hence exhausting the resources and preventing the legitimate users to access them. After having standard DDoS defense mechanism, still attackers are able to launch an attack. These inadequate defense mechanisms need to be improved and integrated with other solutions. The purpose of this paper is to study the characteristics of DDoS attacks, various models involved in attacks and to provide a timeline of defense mechanism with their improvements to combat DDoS attacks. In addition to this, a novel scheme is proposed to detect DDoS attack efficiently by using MapReduce programming model.", "title": "" }, { "docid": "912a05d1ee733d85d3dbe6b63c986a44", "text": "Keyphrases efficiently summarize a document’s content and are used in various document processing and retrieval tasks. Several unsupervised techniques and classifiers exist for extracting keyphrases from text documents. 
Most of these methods operate at a phrase-level and rely on part-of-speech (POS) filters for candidate phrase generation. In addition, they do not directly handle keyphrases of varying lengths. We overcome these modeling shortcomings by addressing keyphrase extraction as asequential labelingtask in this paper. We explore a basic set of features commonly used in NLP tasks as well as predictions from various unsupervised methods to train our taggers. In addition to a more natural modeling for the keyphrase extraction problem, we show that tagging models yield significant performance benefits over existing stateof-the-art extraction methods.", "title": "" }, { "docid": "db7426a1896920e0d2e3342d2df96401", "text": "Nasal obstruction due to weakening of the nasal sidewall is a very common patient complaint. The conchal cartilage butterfly graft is a proven technique for the correction of nasal valve collapse. It allows for excellent functional results, and with experience and attention to technical detail, it may also provide excellent cosmetic results. While this procedure is most useful for restoring form and function in cases of secondary rhinoplasty following the reduction of nasal support structures, we have found it to be a very powerful and satisfying technique in primary rhinoplasty as well. This article aims to describe the butterfly graft, discuss its history, and detail the technical considerations which we have found useful.", "title": "" }, { "docid": "c8a6f20bf8daded62ee23ea2615c8dc0", "text": "In developing countries, fruit and vegetable juices sold by street vendors are widely consumed by millions of people. These juices provide a source of readily available and affordable source of nutrients to many sectors of the population, including the urban poor. Unpasteurized juices are preferred by the consumers because of the “fresh flavor” attributes and hence, in recent times, their demand has increased. They are simply prepared by extracting, usually by mechanical means, the liquid and pulp of mature fruit and vegetables. The final product is an unfermented, clouded, untreated juice, ready for consumption. Pathogenic organisms can enter fruits and vegetables through damaged surfaces, such as punctures, wounds, cuts and splits that occur during growing or harvesting. Contamination from raw materials and equipments, additional processing conditions, improper handling, prevalence of unhygienic conditions contribute substantially to the entry of bacterial pathogens in juices prepared from these fruits or vegetables (Victorian Government Department of Human Services 2005; Oliveira et al., 2006; Nicolas et al., 2007). In countries, where street food vending is prevalent, there is commonly a lack of information on the incidence of food borne diseases related to the street vended foods. However, microbial studies on such foods in American, Asian and African countries have revealed increased bacterial pathogens in the food. There have been documented outbreaks of illnesses in humans associated with the consumption of unpasteurized fruit and vegetable juices and fresh produce. A report published by Victorian Government Department of Abstract: Fresh squeezed juices of sugarcane, lime and carrot sold by street vendors in Mumbai city were analyzed for their microbial contents during the months of June 2007 to September 2007. The total viable counts of all 30 samples were approximately log 6.5 cfu/100ml with significant load of coliforms, faecal coliforms, Vibrio and Staphylococcal counts. 
Qualitative counts showed the presence of coagulase positive S.aureus in 5 samples of sugarcane and 2 samples of carrot juice. Almost 70% of the ice samples collected from street vendors showed high microbial load ranging from log 58.5. Our results demonstrate the non hygienic quality of three most popular types of street vended fruit juices and ice used for cooling of juices suggesting the urgent need for government participation in developing suitable intervention measures to improve microbial quality of juices.", "title": "" }, { "docid": "9cc8d5f395a11ceaabdf9b2e57aa2bc9", "text": "This paper proposes a Model Predictive Control methodology for a non-inverting Buck-Boost DC-DC converter for its efficient control. PID and MPC control strategies are simulated for the control of Buck-Boost converter and its performance is compared using MATLAB Simulink model. MPC shows better performance compared to PID controller. Output follows reference voltage more accurately showing that MPC can handle the dynamics of the system efficiently. The proposed methodology can be used for constant voltage applications. The control strategy can be implemented using a Field Programmable Gate Array (FPGA).", "title": "" } ]
scidocsrr
44caa69e26d33b39158f4187ad930005
A High-Quality Video Denoising Algorithm Based on Reliable Motion Estimation
[ { "docid": "67e16f36bb6d83c5d6eae959a7223b77", "text": "Neighborhood filters are nonlocal image and movie filters which reduce the noise by averaging similar pixels. The first object of the paper is to present a unified theory of these filters and reliable criteria to compare them to other filter classes. A CCD noise model will be presented justifying the involvement of neighborhood filters. A classification of neighborhood filters will be proposed, including classical image and movie denoising methods and discussing further a recently introduced neighborhood filter, NL-means. In order to compare denoising methods three principles will be discussed. The first principle, “method noise”, specifies that only noise must be removed from an image. A second principle will be introduced, “noise to noise”, according to which a denoising method must transform a white noise into a white noise. Contrarily to “method noise”, this principle, which characterizes artifact-free methods, eliminates any subjectivity and can be checked by mathematical arguments and Fourier analysis. “Noise to noise” will be proven to rule out most denoising methods, with the exception of neighborhood filters. This is why a third and new comparison principle, the “statistical optimality”, is needed and will be introduced to compare the performance of all neighborhood filters. The three principles will be applied to compare ten different image and movie denoising methods. It will be first shown that only wavelet thresholding methods and NL-means give an acceptable method noise. Second, that neighborhood filters are the only ones to satisfy the “noise to noise” principle. Third, that among them NL-means is closest to statistical optimality. A particular attention will be paid to the application of the statistical optimality criterion for movie denoising methods. It will be pointed out that current movie denoising methods are motion compensated neighborhood filters. This amounts to say that they are neighborhood filters and that the ideal neighborhood of a pixel is its trajectory. Unfortunately the aperture problem makes it impossible to estimate ground true trajectories. It will be demonstrated that computing trajectories and restricting the neighborhood to them is harmful for denoising purposes and that space-time NL-means preserves more movie details.", "title": "" }, { "docid": "85007af502deac21cd6477945e0578d6", "text": "State of the art movie restoration methods either estimate motion and filter out the trajectories, or compensate the motion by an optical flow estimate and then filter out the compensated movie. Now, the motion estimation problem is ill posed. This fact is known as the aperture problem: trajectories are ambiguous since they could coincide with any promenade in the space-time isophote surface. In this paper, we try to show that, for denoising, the aperture problem can be taken advantage of. Indeed, by the aperture problem, many pixels in the neighboring frames are similar to the current pixel one wishes to denoise. Thus, denoising by an averaging process can use many more pixels than just the ones on a single trajectory. This observation leads to use for movies a recently introduced image denoising method, the NL-means algorithm. This static 3D algorithm outperforms motion compensated algorithms, as it does not lose movie details. 
It involves the whole movie isophote and not just a trajectory.", "title": "" }, { "docid": "b5453d9e4385d5a5ff77997ad7e3f4f0", "text": "We propose a new measure, the method noise, to evaluate and compare the performance of digital image denoising methods. We first compute and analyze this method noise for a wide class of denoising algorithms, namely the local smoothing filters. Second, we propose a new algorithm, the nonlocal means (NL-means), based on a nonlocal averaging of all pixels in the image. Finally, we present some experiments comparing the NL-means algorithm and the local smoothing filters.", "title": "" } ]
[ { "docid": "a993a7a5aa45fb50e19326ec4c98472d", "text": "Innumerable terror and suspicious messages are sent through Instant Messengers (IM) and Social Networking Sites (SNS) which are untraced, leading to hindrance for network communications and cyber security. We propose a Framework that discover and predict such messages that are sent using IM or SNS like Facebook, Twitter, LinkedIn, and others. Further, these instant messages are put under surveillance that identifies the type of suspected cyber threat activity by culprit along with their personnel details. Framework is developed using Ontology based Information Extraction technique (OBIE), Association rule mining (ARM) a data mining technique with set of pre-defined Knowledge-based rules (logical), for decision making process that are learned from domain experts and past learning experiences of suspicious dataset like GTD (Global Terrorist Database). The experimental results obtained will aid to take prompt decision for eradicating cyber crimes.", "title": "" }, { "docid": "6c8a3fcab2b511b4527bb40736774507", "text": "For the purposes of this research, the optimal MLP neural network topology has been designed and tested by means the specific genetic algorithm multi-objective Pareto-Based. The objective of the research is to predict the trend of the exchange rate Euro/USD up to three days ahead of last data available. The variable of output of the ANN designed is then the daily exchange rate Euro/Dollar and the frequency of data collection of variables of input and the output is daily. By the analysis of the data it is possible to conclude that the ANN model developed can largely predict the trend to three days of exchange rate Euro/USD.", "title": "" }, { "docid": "226750535735e3a13363e98594851f71", "text": "Synthesizing high resolution photorealistic images has been a long-standing challenge in machine learning. In this paper we introduce new methods for the improved training of generative adversarial networks (GANs) for image synthesis. We construct a variant of GANs employing label conditioning that results in 128× 128 resolution image samples exhibiting global coherence. We expand on previous work for image quality assessment to provide two new analyses for assessing the discriminability and diversity of samples from class-conditional image synthesis models. These analyses demonstrate that high resolution samples provide class information not present in low resolution samples. Across 1000 ImageNet classes, 128 × 128 samples are more than twice as discriminable as artificially resized 32× 32 samples. In addition, 84.7% of the classes have samples exhibiting diversity comparable to real ImageNet data.", "title": "" }, { "docid": "3fb39e30092858b84291a85a719f97f0", "text": "A spherical wrist of the serial type is said to be isotropic if it can attain a posture whereby the singular values of its Jacobian matrix are all identical and nonzero. What isotropy brings about is robustness to manufacturing, assembly, and measurement errors, thereby guaranteeing a maximum orientation accuracy. In this paper we investigate the existence of redundant isotropic architectures, which should add to the dexterity of the wrist under design by virtue of its extra degree of freedom. The problem formulation leads to a system of eight quadratic equations with eight unknowns. The Bezout number of this system is thus 2 = 256, its BKK bound being 192. However, the actual number of solutions is shown to be 32. 
We list all solutions of the foregoing algebraic problem. All these solutions are real, but distinct solutions do not necessarily lead to distinct manipulators. Upon discarding those algebraic solutions that yield no new wrists, we end up with exactly eight distinct architectures, the eight corresponding manipulators being displayed at their isotropic posture.", "title": "" }, { "docid": "706f602b6fe489be0b1c170f38d5bf6e", "text": "Test designers widely believe that the overall effectiveness and cost of software testing depends largely on the type and number of test cases executed on the software. In this paper we show that the test oracle used during testing also contributes significantly to test effectiveness and cost. A test oracle is a mechanism that determines whether a software executed correctly for a test case. We define a test oracle to contain two essential parts: oracle information that represents expected output, and an oracle procedure that compares the oracle information with the actual output. By varying the level of detail of oracle information and changing the oracle procedure, a test designer can create different types of test oracles. We design 11 types of test oracles and empirically compare them on four software systems. We seed faults in each software to create 100 faulty versions, execute 600 test cases on each version, for all 11 types of oracles. In all, we report results of 660,000 test runs on each software. We show (1) the time and space requirements of the oracles, (2) that faults are detected early in the testing process when using detailed oracle information and complex oracle procedures, although at a higher cost per test case, and (3) that employing expensive oracles results in detecting a large number of faults using relatively smaller number of test cases.", "title": "" }, { "docid": "f10ac6d718b07a22b798ef236454b806", "text": "The capability to operate cloud-native applications can generate enormous business growth and value. But enterprise architects should be aware that cloud-native applications are vulnerable to vendor lock-in. We investigated cloud-native application design principles, public cloud service providers, and industrial cloud standards. All results indicate that most cloud service categories seem to foster vendor lock-in situations which might be especially problematic for enterprise architectures. This might sound disillusioning at first. However, we present a reference model for cloud-native applications that relies only on a small subset of well standardized IaaS services. The reference model can be used for codifying cloud technologies. It can guide technology identification, classification, adoption, research and development processes for cloud-native application and for vendor lock-in aware enterprise architecture engineering methodologies.", "title": "" }, { "docid": "947d4c60427377bcb466fe1393c5474c", "text": "This paper presents a single BCD technology platform with high performance power devices at a wide range of operating voltages. The platform offers 6 V to 70 V LDMOS devices. All devices offer best-in-class specific on-resistance of 20 to 40 % lower than that of the state-of-the-art IC-based LDMOS devices and robustness better than the square SOA (safe-operating-area). Fully isolated LDMOS devices, in which independent bias is capable for circuit flexibility, demonstrate superior specific on-resistance (e.g. 11.9 mΩ-mm2 for breakdown voltage of 39 V). 
Moreover, the unusual sudden current enhancement appeared in the ID-VD saturation region of most of the high voltage LDMOS devices is significantly suppressed.", "title": "" }, { "docid": "0867eb365ca19f664bd265a9adaa44e5", "text": "We present VI-DSO, a novel approach for visual-inertial odometry, which jointly estimates camera poses and sparse scene geometry by minimizing photometric and IMU measurement errors in a combined energy functional. The visual part of the system performs a bundle-adjustment like optimization on a sparse set of points, but unlike key-point based systems it directly minimizes a photometric error. This makes it possible for the system to track not only corners, but any pixels with large enough intensity gradients. IMU information is accumulated between several frames using measurement preintegration, and is inserted into the optimization as an additional constraint between keyframes. We explicitly include scale and gravity direction into our model and jointly optimize them together with other variables such as poses. As the scale is often not immediately observable using IMU data this allows us to initialize our visual-inertial system with an arbitrary scale instead of having to delay the initialization until everything is observable. We perform partial marginalization of old variables so that updates can be computed in a reasonable time. In order to keep the system consistent we propose a novel strategy which we call “dynamic marginalization”. This technique allows us to use partial marginalization even in cases where the initial scale estimate is far from the optimum. We evaluate our method on the challenging EuRoC dataset, showing that VI-DSO outperforms the state of the art.", "title": "" }, { "docid": "9f6a26351de92e8005036c96520d5638", "text": "We learn models to generate the immediate future in video. This problem has two main challenges. Firstly, since the future is uncertain, models should be multi-modal, which can be difficult to learn. Secondly, since the future is similar to the past, models store low-level details, which complicates learning of high-level semantics. We propose a framework to tackle both of these challenges. We present a model that generates the future by transforming pixels in the past. Our approach explicitly disentangles the models memory from the prediction, which helps the model learn desirable invariances. Experiments suggest that this model can generate short videos of plausible futures. We believe predictive models have many applications in robotics, health-care, and video understanding.", "title": "" }, { "docid": "c672f070ae4ef8d71095702f179984f6", "text": "A novel CMOS-MEMS Pirani vacuum gauge with complementary bump heat-sink and cavity heater design has been proposed and demonstrated. This design using CMOS-MEMS process to offer the following advantages for Pirani gauge: (1) The bump heat-sink vertical integrates with cavity heater increases the dynamic range and sensitivity without changing device footprint size, (2) The cavity in heater reduces the thermal mass for low-power operation, and (3) Easy integration with packaged CMOS-MEMS devices for pressure monitoring [1]. The design is implemented using the standard TSMC 0.18μm 1P6M CMOS process. A 120μm×120μm die size with 0.53μm sensing gap is demonstrated. Measurement indicates the gauge has sensing range 0.3-100torr with sensitivity of 1.53×104(K/W)/torr. The power consumption is 67μW for 1% resistance change. 
In comparison, the gauge with typical heat-sink/heater design has sensing range 1-100torr with sensitivity of 0.99×104(K/W)/torr and power consumption of 119μW.", "title": "" }, { "docid": "2acd418c6e961cbded8b9ee33b63be41", "text": "Purpose – Customer relationship management (CRM) is an information system that tracks customers’ interactions with the firm and allows employees to instantly pull up information about the customers such as past sales, service records, outstanding records and unresolved problem calls. This paper aims to put forward strategies for successful implementation of CRM and discusses barriers to CRM in e-business and m-business. Design/methodology/approach – The paper combines narrative with argument and analysis. Findings – CRM stores all information about its customers in a database and uses this data to coordinate sales, marketing, and customer service departments so as to work together smoothly to best serve their customers’ needs. Originality/value – The paper demonstrates how CRM, if used properly, could enhance a company’s ability to achieve the ultimate goal of retaining customers and gain strategic advantage over its competitors.", "title": "" }, { "docid": "dfa62c69b1ab26e7e160100b69794674", "text": "Canonical correlation analysis (CCA) is a well established technique for identifying linear relationships among two variable sets. Kernel CCA (KCCA) is the most notable nonlinear extension but it lacks interpretability and robustness against irrelevant features. The aim of this article is to introduce two nonlinear CCA extensions that rely on the recently proposed Hilbert-Schmidt independence criterion and the centered kernel target alignment. These extensions determine linear projections that provide maximally dependent projected data pairs. The paper demonstrates that the use of linear projections allows removing irrelevant features, whilst extracting combinations of strongly associated features. This is exemplified through a simulation and the analysis of recorded data that are available in the literature.", "title": "" }, { "docid": "e1cbcd23e4a311f97d2520170b8474ad", "text": "In this paper, we study the problem of mining frequent sequences under the rigorous differential privacy model. We explore the possibility of designing a differentially private frequent sequence mining (FSM) algorithm which can achieve both high data utility and a high degree of privacy. We found, in differentially private FSM, the amount of required noise is proportionate to the number of candidate sequences. If we could effectively prune those unpromising candidate sequences, the utility and privacy tradeoff can be significantly improved. To this end, by leveraging a sampling-based candidate pruning technique, we propose PFS², a novel differentially private FSM algorithm. It is the first algorithm that supports the general gap-constrained FSM in the context of differential privacy. The gap constraints in FSM can be used to limit the mining results to a controlled set of frequent sequences.
In our PFS² algorithm, the core is to utilize sample databases to prune the candidate sequences generated based on the downward closure property. In particular, we use the noisy local support of candidate sequences in the sample databases to estimate which candidate sequences are potentially frequent. To improve the accuracy of such private estimations, a gap-aware sequence shrinking method is proposed to enforce the length constraint on the sample databases. Moreover, to calibrate the amount of noise required by differential privacy, a gap-aware sensitivity computation method is proposed to obtain the sensitivity of the local support computations with different gap constraints. Furthermore, to decrease the probability of misestimating frequent sequences as infrequent, a threshold relaxation method is proposed to relax the user-specified threshold for the sample databases. Through formal privacy analysis, we show that our PFS² algorithm is ε-differentially private. Extensive experiments on real datasets illustrate that our PFS² algorithm can privately find frequent sequences with high accuracy.", "title": "" }, { "docid": "2e9d46f8be771894a2b61aa8a5c82715", "text": "Military vehicles are important part to maintain territories of a country. Military vehicle is often equipped with a gun turret mounted on top of the vehicle. Traditionally, gun turret is operated manually by an operator sitting on the vehicle. With the advance of current robotic technology an automatic operation of gun turret is highly possible. Notable works on automatic gun turret tend to use features that are manually designed as an input to a classifier for target tracking. These features can cause less optimal parameters and require highly complex kinematic and dynamic analysis specific to a particular turret. In this paper, toward the goal of realizing an automatic targeting system of gun turret, a gun turret simulation system is developed by leveraging fully connected network of deep learning using only visual information from a camera. It includes designing convolutional layers to accurately detect and tracking a target with input from a camera. All network parameters are automatically and jointly learned without any human intervention, all network parameters are driven purely by data. This method also requires less kinematic and dynamic model.
Experiments show encouraging results that the automatic targeting system of gun turret using only a camera can benefit research in the related fields.", "title": "" }, { "docid": "d7ecb69aeb14b5f899c768032f36cc43", "text": "Building on top of the success of generative adversarial networks (GANs), conditional GANs attempt to better direct the data generation process by conditioning with certain additional information. Inspired by the most recent AC-GAN, in this paper we propose a fast-converging conditional GAN (FC-GAN). In addition to the real/fake classifier used in vanilla GANs, our discriminator has an advanced auxiliary classifier which distinguishes each real class from an extra ‘fake’ class. The ‘fake’ class avoids mixing generated data with real data, which can potentially confuse the classification of real data as AC-GAN does, and makes the advanced auxiliary classifier behave as another real/fake classifier. As a result, FC-GAN can accelerate the process of differentiation of all classes, thus boost the convergence speed. Experimental results on image synthesis demonstrate our model is competitive in the quality of images generated while achieving a faster convergence rate.", "title": "" }, { "docid": "95e1d5dc90f7fc6ece51f61585842f3d", "text": "This paper investigates how the splitting criteria and pruning methods of decision tree learning algorithms are influenced by misclassification costs or changes to the class distribution. Splitting criteria that are relatively insensitive to costs (class distributions) are found to perform as well as, or better than, in terms of expected misclassification cost, splitting criteria that are cost sensitive. Consequently there are two opposite ways of dealing with imbalance. One is to combine a cost insensitive splitting criterion with a cost insensitive pruning method to produce a decision tree algorithm little affected by cost or prior class distribution. The other is to grow a cost independent tree which is then pruned in a cost sensitive manner.", "title": "" }, { "docid": "ff59d1ec0c3eb11b3201e5708a585ca4", "text": "In this paper, we described our system for Knowledge Base Acceleration (KBA) Track at TREC 2013. The KBA Track has two tasks, CCR and SSF. Our approach consists of two major steps: selecting documents and extracting slot values. Selecting documents is to look for and save the documents that mention the entities of interest. The second step involves with generating seed patterns to extract the slot values and computing confidence score.", "title": "" }, { "docid": "f9da4bfe6dba0a6ec886758b164cd10b", "text": "Physically based deformable models have been widely embraced by the Computer Graphics community. Many problems outlined in a previous survey by Gibson and Mirtich [GM97] have been addressed, thereby making these models interesting and useful for both offline and real-time applications, such as motion pictures and video games. In this paper, we present the most significant contributions of the past decade, which produce such impressive and perceivably realistic animations and simulations: finite element/difference/volume methods, mass-spring systems, meshfree methods, coupled particle systems and reduced deformable models based on modal analysis. For completeness, we also make a connection to the simulation of other continua, such as fluids, gases and melting objects.
Since time integration is inherent to all simulated phenomena, the general notion of time discretization is treated separately, while specifics are left to the respective models. Finally, we discuss areas of application, such as elastoplastic deformation and fracture, cloth and hair animation, virtual surgery simulation, interactive entertainment and fluid/smoke animation, and also suggest areas for future research.", "title": "" }, { "docid": "9afdeab9abb1bfde45c6e9f922181c6b", "text": "Aiming at the need for autonomous learning in reinforcement learning (RL), a quantitative emotion-based motivation model is proposed by introducing psychological emotional factors as the intrinsic motivation. The curiosity is used to promote or hold back agents' exploration of unknown states, the happiness index is used to determine the current state-action's happiness level, the control power is used to indicate agents' control ability over its surrounding environment, and together to adjust agents' learning preferences and behavioral patterns. To combine intrinsic emotional motivations with classic RL, two methods are proposed. The first method is to use the intrinsic emotional motivations to explore unknown environment and learn the environment transitioning model ahead of time, while the second method is to combine intrinsic emotional motivations with external rewards as the ultimate joint reward function, directly to drive agents' learning. As the result shows, in the simulation experiments in the rat foraging in maze scenario, both methods have achieved relatively good performance, compared with classic RL purely driven by external rewards.", "title": "" } ]
scidocsrr
de37b984f71393dff406f7cfee1cde55
Locating tables in scanned documents for reconstructing and republishing
[ { "docid": "823c0e181286d917a610f90d1c9db0c3", "text": "Table characteristics vary widely. Consequently, a great variety of computational approaches have been applied to table recognition. In this survey, the table recognition literature is presented as an interaction of table models, observations, transformations and inferences. A table model defines the physical and logical structure of tables; the model is used to detect tables, and to analyze and decompose the detected tables. Observations perform feature measurements and data lookup, transformations alter or restructure data, and inferences generate and test hypotheses. This presentation clarifies the decisions that are made by a table recognizer, and the assumptions and inferencing techniques that underlie these decisions.", "title": "" }, { "docid": "3e21946b125a625db60ce9ebb34f4cd6", "text": "Table detection can be a valuable step in the analysis of unstructured documents. Although much work has been conducted in the domain of machine-print including books, scientific papers, etc., little has been done to address the case of handwritten inputs. In this paper, we study table detection in scanned handwritten documents subject to challenging artifacts and noise. First, we separate text components (machine-print, handwriting) from the rest of the page using an SVM classifier. We then employ a correlation-based approach to measure the coherence between adjacent text lines which may be part of the same table, solving the resulting page decomposition problem using dynamic programming. A report of preliminary results from ongoing experiments concludes the paper.", "title": "" }, { "docid": "e5ed312b0c3aaa26240a9f3aaa2bd36e", "text": "This paper presents PDF-TREX, an heuristic approach for table recognition and extraction from PDF documents.The heuristics starts from an initial set of basic content elements and aligns and groups them, in bottom-up way by considering only their spatial features, in order to identify tabular arrangements of information. The scope of the approach is to recognize tables contained in PDF documents as a 2-dimensional grid on a Cartesian plane and extract them as a set of cells equipped by 2-dimensional coordinates. Experiments, carried out on a dataset composed of tables contained in documents coming from different domains, shows that the approach is well performing in recognizing table cells.The approach aims at improving PDF document annotation and information extraction by providing an output that can be further processed for understanding table and document contents.", "title": "" }, { "docid": "b18c8b7472ba03a260d63b886a6dc11d", "text": "In this paper, we propose a novel technique for automatic table detection in document images. Lines and tables are among the most frequent graphic, non-textual entities in documents and their detection is directly related to the OCR performance as well as to the document layout description. We propose a workflow for table detection that comprises three distinct steps: (i) image pre-processing; (ii) horizontal and vertical line detection and (iii) table detection. 
The efficiency of the proposed method is demonstrated by using a performance evaluation scheme which considers a great variety of documents such as forms, newspapers/magazines, scientific journals, tickets/bank cheques, certificates and handwritten documents.", "title": "" }, { "docid": "ac4d208a022717f6389d8b754abba80b", "text": "This paper presents a new approach to detect tabular structures present in document images and in low resolution video images. The algorithm for table detection is based on identifying the unique table start pattern and table trailer pattern. We have formulated perceptual attributes to characterize the patterns. The performance of our table detection system is tested on a set of document images picked from UW-III (University of Washington) dataset, UNLV dataset, video images of NPTEL videos, and our own dataset. Our approach demonstrates improved detection for different types of table layouts, with or without ruling lines. We have obtained correct table localization on pages with multiple tables aligned side-by-side.", "title": "" } ]
[ { "docid": "b41d8ca866268133f2af88495dad6482", "text": "Text clustering is an important area of interest in the field of Text summarization, sentiment analysis etc. There have been a lot of algorithms experimented during the past years, which have a wide range of performances. One of the most popular method used is k-means, where an initial assumption is made about k, which is the number of clusters to be generated. Now a new method is introduced where the number of clusters is found using a modified spectral bisection and then the output is given to a genetic algorithm where the final solution is obtained. Keywords— Cluster, Spectral Bisection, Genetic Algorithm, kmeans.", "title": "" }, { "docid": "1a0ce5b259b3c5ee3f72a48802b03503", "text": "This article presents a longitudinal study with four children with autism, who were exposed to a humanoid robot over a period of several months. The longitudinal approach allowed the children time to explore the space of robot–human, as well as human–human interaction. Based on the video material documenting the interactions, a quantitative and qualitative analysis was conducted. The quantitative analysis showed an increase in duration of pre-defined behaviours towards the later trials. A qualitative analysis of the video data, observing the children’s activities in their interactional context, revealed further aspects of social interaction skills (imitation, turn-taking and role-switch) and communicative competence that the children showed. The results clearly demonstrate the need for, and benefits of, long-term studies in order to reveal the full potential of robots in the therapy and education of children with autism.", "title": "" }, { "docid": "11a28e11ba6e7352713b8ee63291cd9c", "text": "This review focuses on discussing the main changes on the upcoming fourth edition of the WHO Classification of Tumors of the Pituitary Gland emphasizing histopathological and molecular genetics aspects of pituitary neuroendocrine (i.e., pituitary adenomas) and some of the non-neuroendocrine tumors involving the pituitary gland. Instead of a formal review, we introduced the highlights of the new WHO classification by answering select questions relevant to practising pathologists. The revised classification of pituitary adenomas, in addition to hormone immunohistochemistry, recognizes the role of other immunohistochemical markers including but not limited to pituitary transcription factors. Recognizing this novel approach, the fourth edition of the WHO classification has abandoned the concept of \"a hormone-producing pituitary adenoma\" and adopted a pituitary adenohypophyseal cell lineage designation of the adenomas with subsequent categorization of histological variants according to hormone content and specific histological and immunohistochemical features. This new classification does not require a routine ultrastructural examination of these tumors. The new definition of the Null cell adenoma requires the demonstration of immunonegativity for pituitary transcription factors and adenohypophyseal hormones Moreover, the term of atypical pituitary adenoma is no longer recommended. In addition to the accurate tumor subtyping, assessment of the tumor proliferative potential by mitotic count and Ki-67 index, and other clinical parameters such as tumor invasion, is strongly recommended in individual cases for consideration of clinically aggressive adenomas. 
This classification also recognizes some subtypes of pituitary neuroendocrine tumors as \"high-risk pituitary adenomas\" due to the clinical aggressive behavior; these include the sparsely granulated somatotroph adenoma, the lactotroph adenoma in men, the Crooke's cell adenoma, the silent corticotroph adenoma, and the newly introduced plurihormonal Pit-1-positive adenoma (previously known as silent subtype III pituitary adenoma). An additional novel aspect of the new WHO classification was also the definition of the spectrum of thyroid transcription factor-1 expressing pituitary tumors of the posterior lobe as representing a morphological spectrum of a single nosological entity. These tumors include the pituicytoma, the spindle cell oncocytoma, the granular cell tumor of the neurohypophysis, and the sellar ependymoma.", "title": "" }, { "docid": "4eafe7f60154fa2bed78530735a08878", "text": "Although Android's permission system is intended to allow users to make informed decisions about their privacy, it is often ineffective at conveying meaningful, useful information on how a user's privacy might be impacted by using an application. We present an alternate approach to providing users the knowledge needed to make informed decisions about the applications they install. First, we create a knowledge base of mappings between API calls and fine-grained privacy-related behaviors. We then use this knowledge base to produce, through static analysis, high-level behavior profiles of application behavior. We have analyzed almost 80,000 applications to date and have made the resulting behavior profiles available both through an Android application and online. Nearly 1500 users have used this application to date. Based on 2782 pieces of application-specific feedback, we analyze users' opinions about how applications affect their privacy and demonstrate that these profiles have had a substantial impact on their understanding of those applications. We also show the benefit of these profiles in understanding large-scale trends in how applications behave and the implications for user privacy.", "title": "" }, { "docid": "68470cd075d9c475b5ff93578ff7e86d", "text": "Beyond understanding what is being discussed, human communication requires an awareness of what someone is feeling. One challenge for dialogue agents is being able to recognize feelings in the conversation partner and reply accordingly, a key communicative skill that is trivial for humans. Research in this area is made difficult by the paucity of large-scale publicly available datasets both for emotion and relevant dialogues. This work proposes a new task for empathetic dialogue generation and EMPATHETICDIALOGUES, a dataset of 25k conversations grounded in emotional contexts to facilitate training and evaluating dialogue systems. Our experiments indicate that models explicitly leveraging emotion predictions from previous utterances are perceived to be more empathetic by human evaluators, while improving on other metrics as well (e.g. perceived relevance of responses, BLEU scores).", "title": "" }, { "docid": "7cf044d95194f0891d98d207d3003ed8", "text": "Global warming and the obesity epidemic are two unprecedented challenges mankind faces today. A literature search was conducted in the PubMed, Web of Science, EBSCO and Scopus for articles published until July 2017 that reported findings on the relationship between global warming and the obesity epidemic. Fifty studies were identified. 
Topic-wise, articles were classified into four relationships - global warming and the obesity epidemic are correlated because of common drivers (n = 21); global warming influences the obesity epidemic (n = 13); the obesity epidemic influences global warming (n = 13); and global warming and the obesity epidemic influence each other (n = 3). We constructed a conceptual model linking global warming and the obesity epidemic - the fossil fuel economy, population growth and industrialization impact land use and urbanization, motorized transportation and agricultural productivity and consequently influences global warming by excess greenhouse gas emission and the obesity epidemic by nutrition transition and physical inactivity; global warming also directly impacts obesity by food supply/price shock and adaptive thermogenesis, and the obesity epidemic impacts global warming by the elevated energy consumption. Policies that endorse deployment of clean and sustainable energy sources, and urban designs that promote active lifestyles, are likely to alleviate the societal burden of global warming and obesity.", "title": "" }, { "docid": "3dc800707ecbbf0fed60e445cfe02fcc", "text": "We extend the method introduced by Cinzano et al. (2000a) to map the artificial sky brightness in large territories from DMSP satellite data, in order to map the naked eye star visibility and telescopic limiting magnitudes. For these purposes we take into account the altitude of each land area from GTOPO30 world elevation data, the natural sky brightness in the chosen sky direction, based on Garstang modelling, the eye capability with naked eye or a telescope, based on the Schaefer (1990) and Garstang (2000b) approach, and the stellar extinction in the visual photometric band. For near zenith sky directions we also take into account screening by terrain elevation. Maps of naked eye star visibility and telescopic limiting magnitudes are useful to quantify the capability of the population to perceive our Universe, to evaluate the future evolution, to make cross correlations with statistical parameters and to recognize areas where astronomical observations or popularisation can still acceptably be made. We present, as an application, maps of naked eye star visibility and total sky brightness in V band in Europe at the zenith with a resolution of approximately 1 km.", "title": "" }, { "docid": "56245b600dd082439d2b1b2a2452a6b7", "text": "The electric drive systems used in many industrial applications require higher performance, reliability, variable speed due to its ease of controllability. The speed control of DC motor is very crucial in applications where precision and protection are of essence. Purpose of a motor speed controller is to take a signal representing the required speed and to drive a motor at that speed. Microcontrollers can provide easy control of DC motor. Microcontroller based speed control system consist of electronic component, microcontroller and the LCD. In this paper, implementation of the ATmega8L microcontroller for speed control of DC motor fed by a DC chopper has been investigated. The chopper is driven by a high frequency PWM signal. Controlling the PWM duty cycle is equivalent to controlling the motor terminal voltage, which in turn adjusts directly the motor speed. This work is a practical one and high feasibility according to economic point of view and accuracy. In this work, development of hardware and software of the close loop dc motor speed control system have been explained and illustrated. 
The desired objective is to achieve a system with the constant speed at any load condition. That means motor will run at a fixed speed instead of varying with amount of load. KeywordsDC motor, Speed control, Microcontroller, ATmega8, PWM.", "title": "" }, { "docid": "0160ef86512929e91fc3e5bb3902514e", "text": "In this paper we propose a clustering method based on combination of the particle swarm optimization (PSO) and the k-mean algorithm. PSO algorithm was showed to successfully converge during the initial stages of a global search, but around global optimum, the search process will become very slow. On the contrary, k-means algorithm can achieve faster convergence to optimum solution. At the same time, the convergent accuracy for k-means can be higher than PSO. So in this paper, a hybrid algorithm combining particle swarm optimization (PSO) algorithm with k-means algorithm is proposed we refer to it as PSO-KM algorithm. The algorithm aims to group a given set of data into a user specified number of clusters. We evaluate the performance of the proposed algorithm using five datasets. The algorithm performance is compared to K-means and PSO clustering.", "title": "" }, { "docid": "4b0fcab3e9599f24cae499a4a2cbbd55", "text": "In June 2016, Apple made a bold announcement that it will deploy local differential privacy for some of their user data collection in order to ensure privacy of user data, even from Apple [21, 23]. The details of Apple’s approach remained sparse. Although several patents [17–19] have since appeared hinting at the algorithms that may be used to achieve differential privacy, they did not include a precise explanation of the approach taken to privacy parameter choice. Such choice and the overall approach to privacy budget use and management are key questions for understanding the privacy protections provided by any deployment of differential privacy. In this work, through a combination of experiments, static and dynamic code analysis of macOS Sierra (Version 10.12) implementation, we shed light on the choices Apple made for privacy budget management. We discover and describe Apple’s set-up for differentially private data processing, including the overall data pipeline, the parameters used for differentially private perturbation of each piece of data, and the frequency with which such data is sent to Apple’s servers. We find that although Apple’s deployment ensures that the (differential) privacy loss per each datum submitted to its servers is 1 or 2, the overall privacy loss permitted by the system is significantly higher, as high as 16 per day for the four initially announced applications of Emojis, New words, Deeplinks and Lookup Hints [21]. Furthermore, Apple renews the privacy budget available every day, which leads to a possible privacy loss of 16 times the number of days since user opt-in to differentially private data collection for those four applications. We applaud Apple’s deployment of differential privacy for its bold demonstration of feasibility of innovation while guaranteeing rigorous privacy. However, we argue that in order to claim the full benefits of differentially private data collection, Apple must give full transparency of its implementation and privacy loss choices, enable user choice in areas related to privacy loss, and set meaningful defaults on the daily and device lifetime privacy loss permitted. ACM Reference Format: Jun Tang, Aleksandra Korolova, Xiaolong Bai, XueqiangWang, and Xiaofeng Wang. 2017. 
Privacy Loss in Apple’s Implementation of Differential Privacy", "title": "" }, { "docid": "b61a7e1ee0f8100016f61b766332d38f", "text": "We study the cost function for hierarchical clusterings introduced by [Dasgupta, 2016] where hierarchies are treated as first-class objects rather than deriving their cost from projections into flat clusters. It was also shown in [Dasgupta, 2016] that a top-down algorithm returns a hierarchical clustering of cost at most O (αn log n) times the cost of the optimal hierarchical clustering, where αn is the approximation ratio of the Sparsest Cut subroutine used. Thus using the best known approximation algorithm for Sparsest Cut due to Arora-Rao-Vazirani, the top-down algorithm returns a hierarchical clustering of cost at most O ( log3/2 n ) times the cost of the optimal solution. We improve this by giving an O(log n)approximation algorithm for this problem. Our main technical ingredients are a combinatorial characterization of ultrametrics induced by this cost function, deriving an Integer Linear Programming (ILP) formulation for this family of ultrametrics, and showing how to iteratively round an LP relaxation of this formulation by using the idea of sphere growing which has been extensively used in the context of graph partitioning. We also prove that our algorithm returns an O(log n)-approximate hierarchical clustering for a generalization of this cost function also studied in [Dasgupta, 2016]. Experiments show that the hierarchies found by using the ILP formulation as well as our rounding algorithm often have better projections into flat clusters than the standard linkage based algorithms. We conclude with constant factor inapproximability results for this problem: 1) no polynomial size LP or SDP can achieve a constant factor approximation for this problem and 2) no polynomial time algorithm can achieve a constant factor approximation under the assumption of the Small Set Expansion hypothesis.", "title": "" }, { "docid": "8c9b5370178ae19ae54441b3f59c57ad", "text": "In frequent pattern mining, there are several algorithms. Apriori is the classical and most famous algorithm. Objective of using Apriori algorithm is to find frequent itemsets and association between different itemsets i.e. association rule. In this paper, author considers data (bank data) and tries to obtain the result using Weka a data mining tool. Association rule algorithms are used to find out the best combination of different attributes in any data. In this paper author uses Apriori to find association rule. Here author consider three association rule algorithms: Apriori Association Rule, PredictiveApriori Association Rule and Tertius Association Rule. Author compares the result of these three algorithms and presents the result. According to the result obtained using data mining tool author find that Apriori Association algorithm performs better than the PredictiveApriori Association Rule and Tertius Association Rule algorithms.", "title": "" }, { "docid": "0556192205430f9837b8e560c68fb339", "text": "As service organizations have realized that service is fundamental to establish their competitive advantage, waiting time management has been a subject of much service research. This study aims to explore how waiting time impact on tourist satisfaction in theme parks. The findings from a survey of 102 tourists from the theme park in Shenzhen City in China confirm that perceived waiting time, waiting information and waiting environment are significant determinants of tourist satisfaction. 
Providing waiting information and improving waiting environment are effective ways to enhance tourists' service satisfaction.", "title": "" }, { "docid": "11c4f0610d701c08516899ebf14f14c4", "text": "Histone post-translational modifications impact many aspects of chromatin and nuclear function. Histone H4 Lys 20 methylation (H4K20me) has been implicated in regulating diverse processes ranging from the DNA damage response, mitotic condensation, and DNA replication to gene regulation. PR-Set7/Set8/KMT5a is the sole enzyme that catalyzes monomethylation of H4K20 (H4K20me1). It is required for maintenance of all levels of H4K20me, and, importantly, loss of PR-Set7 is catastrophic for the earliest stages of mouse embryonic development. These findings have placed PR-Set7, H4K20me, and proteins that recognize this modification as central nodes of many important pathways. In this review, we discuss the mechanisms required for regulation of PR-Set7 and H4K20me1 levels and attempt to unravel the many functions attributed to these proteins.", "title": "" }, { "docid": "f9bd24894ed3eace01f51966c61f2a5d", "text": "Ethanolic extract from the fruits of Pimpinella anisoides, an aromatic plant and a spice, exhibited activity against AChE and BChE, with IC(50) values of 227.5 and 362.1 microg/ml, respectively. The most abundant constituents of the extract were trans-anethole, (+)-limonene and (+)-sabinene. trans-Anethole exhibited the highest activity against AChE and BChE with IC(50) values of 134.7 and 209.6 microg/ml, respectively. The bicyclic monoterpene (+)-sabinene exhibited a promising activity against AChE (IC(50) of 176.5 microg/ml) and BChE (IC(50) of 218.6 microg/ml).", "title": "" }, { "docid": "d4cad35d1559531fa4d53609a5c0b58a", "text": "In the last few years, great efforts have been made to extend the linear projection technique (LPT) for multidimensional data (i.e., tensor), generally referred to as the multilinear projection technique (MPT). The vectorized nature of LPT requires high-dimensional data to be converted into vector, and hence may lose spatial neighborhood information of raw data. MPT well addresses this problem by encoding multidimensional data as general tensors of a second or even higher order. In this paper, we propose a novel multilinear projection technique, called multilinear spatial discriminant analysis (MSDA), to identify the underlying manifold of high-order tensor data. MSDA considers both the nonlocal structure and the local structure of data in the transform domain, seeking to learn the projection matrices from all directions of tensor data that simultaneously maximize the nonlocal structure and minimize the local structure. Different from multilinear principal component analysis (MPCA) that aims to preserve the global structure and tensor locality preserving projection (TLPP) that is in favor of preserving the local structure, MSDA seeks a tradeoff between the nonlocal (global) and local structures so as to drive its discriminant information from the range of the non-local structure and the range of the local structure. This spatial discriminant characteristic makes MSDA have more powerful manifold preserving ability than TLPP and MPCA. Theoretical analysis shows that traditional MPTs, such as multilinear linear discriminant analysis, TLPP, MPCA, and tensor maximum margin criterion, could be derived from the MSDA model by setting different graphs and constraints. 
Extensive experiments on face databases (ORL, CMU PIE, and the extended Yale-B) and the Weizmann action database demonstrate the effectiveness of the proposed MSDA method.", "title": "" }, { "docid": "e881c2ab6abc91aa8e7cbe54d861d36d", "text": "Tracing traffic using commodity hardware in contemporary highspeed access or aggregation networks such as 10-Gigabit Ethernet is an increasingly common yet challenging task. In this paper we investigate if today’s commodity hardware and software is in principle able to capture traffic from a fully loaded Ethernet. We find that this is only possible for data rates up to 1 Gigabit/s without reverting to using special hardware due to, e. g., limitations with the current PC buses. Therefore, we propose a novel way for monitoring higher speed interfaces (e. g., 10-Gigabit) by distributing their traffic across a set of lower speed interfaces (e. g., 1-Gigabit). This opens the next question: which system configuration is capable of monitoring one such 1-Gigabit/s interface? To answer this question we present a methodology for evaluating the performance impact of different system components including different CPU architectures and different operating system. Our results indicate that the combination of AMD Opteron with FreeBSD outperforms all others, independently of running in singleor multi-processor mode. Moreover, the impact of packet filtering, running multiple capturing applications, adding per packet analysis load, saving the captured packets to disk, and using 64-bit OSes is investigated.", "title": "" }, { "docid": "1106cd6413b478fd32d250458a2233c5", "text": "Submitted: Aug 7, 2013; Accepted: Sep 18, 2013; Published: Sep 25, 2013 Abstract: This article reviews the common used forecast error measurements. All error measurements have been joined in the seven groups: absolute forecasting errors, measures based on percentage errors, symmetric errors, measures based on relative errors, scaled errors, relative measures and other error measures. The formulas are presented and drawbacks are discussed for every accuracy measurements. To reduce the impact of outliers, an Integral Normalized Mean Square Error have been proposed. Due to the fact that each error measure has the disadvantages that can lead to inaccurate evaluation of the forecasting results, it is impossible to choose only one measure, the recommendations for selecting the appropriate error measurements are given.", "title": "" }, { "docid": "3f981b146b53f3e9422e2beb1cadbf3a", "text": "The article aims to be a guide to the interpretation of tumors specific to the nail, that is, tumors presenting peculiar histological features linked specifically to the nail unit. Therefore, the classical epithelial, fibroepithelial, and fibrous skin tumors occurring in the nail region are not analyzed. The interpretation of nail biopsies requires the identification and integration of the 2 main clinical modes of presentation of nail tumors, the acquired localized (monodactylous) longitudinal (ALL) band pattern, and the \"masked\" nail tumor. The ALL band pattern often allows the recognition of a nail tumor in its early phase of progression, with a limited differential diagnosis. The masked nail tumor mimics an inflammatory nail process, as a clinically misleading reactive benign lesion, which delays diagnosis with the subsequent development of partial nail loss and a locally destructive evolution. ALL band pattern appears as a longitudinal band starting at the matrix and extending to the tip of the nail plate. 
The band is usually single, rarely bifid. This clinical pattern can divided into 2 presentations. The generic term of ALL maculonychia could be proposed to define the macular aspect of the colored band of the nail plate. It encompasses 3 syndromes: longitudinal melanonychia, longitudinal erythronychia, and longitudinal leukonychia. ALL pachyonychia is a rare presentation. Pachyonychia indicates a localized thickening of the nail plate specific to the matrical nail tumor. In this group, there is differentiation toward cells of the nail matrix. The prototype tumor is the onychomatricoma, which present classically with a yellow (xantholeukonychia) band pattern. Recently, a new clinical band pattern has been described as longitudinal pachymelanonychia with 2 etiologies: pigmented onychomatricoma and onychocytic matricoma. The first part of this review delineate, in the first section, the distinctive microanatomical features of the nail unit and the second is dedicated to the most important pitfalls in pathological diagnosis of nail tumors because of nail surgery techniques. In the third section, the histopathology of ALL melanonychia and ALL erythronychia is discussed in a detailed description.", "title": "" }, { "docid": "04a4996eb5be0d321037cac5cb3c1ad6", "text": "Repeated retrieval enhances long-term retention, and spaced repetition also enhances retention. A question with practical and theoretical significance is whether there are particular schedules of spaced retrieval (e.g., gradually expanding the interval between tests) that produce the best learning. In the present experiment, subjects studied and were tested on items until they could recall each one. They then practiced recalling the items on 3 repeated tests that were distributed according to one of several spacing schedules. Increasing the absolute (total) spacing of repeated tests produced large effects on long-term retention: Repeated retrieval with long intervals between each test produced a 200% improvement in long-term retention relative to repeated retrieval with no spacing between tests. However, there was no evidence that a particular relative spacing schedule (expanding, equal, or contracting) was inherently superior to another. Although expanding schedules afforded a pattern of increasing retrieval difficulty across repeated tests, this did not translate into gains in long-term retention. Repeated spaced retrieval had powerful effects on retention, but the relative schedule of repeated tests had no discernible impact.", "title": "" } ]
scidocsrr
ed29e37b3889f24349f6f97756c57688
Open Source Software Development and Lotka's Law: Bibliometric Patterns in Programming
[ { "docid": "c63d32013627d0bcea22e1ad62419e62", "text": "According to its proponents, open source style software development has the capacity to compete successfully, and perhaps in many cases displace, traditional commercial development methods. In order to begin investigating such claims, we examine the development process of a major open source application, the Apache web server. By using email archives of source code change history and problem reports we quantify aspects of developer participation, core team size, code ownership, productivity, defect density, and problem resolution interval for this OSS project. This analysis reveals a unique process, which performs well on important measures. We conclude that hybrid forms of development that borrow the most effective techniques from both the OSS and commercial worlds may lead to high performance software processes.", "title": "" }, { "docid": "4072b14516d9a7b74bec64535cdb64d8", "text": "The idea of a unified citation index to the literature of science was first outlined by Eugene Garfield [1] in 1955 in the journal Science. Science Citation Index has since established itself as the gold standard for scientific information retrieval. It has also become the database of choice for citation analysts and evaluative bibliometricians worldwide. As scientific publication moves to the web, and novel approaches to scholarly communication and peer review establish themselves, new methods of citation and link analysis will emerge to capture often liminal expressions of peer esteem, influence and approbation. The web thus affords bibliometricians rich opportunities to apply and adapt their techniques to new contexts and content: the age of ‘bibliometric spectroscopy’ [2] is dawning.", "title": "" } ]
[ { "docid": "59626458b4f250a59bb5c47586afe023", "text": "Previous work for relation extraction from free text is mainly based on intra-sentence information. As relations might be mentioned across sentences, inter-sentence information can be leveraged to improve distantly supervised relation extraction. To effectively exploit inter-sentence information , we propose a ranking-based approach, which first learns a scoring function based on a listwise learning-to-rank model and then uses it for multi-label relation extraction. Experimental results verify the effectiveness of our method for aggregating information across sentences. Additionally, to further improve the ranking of high-quality extractions, we propose an effective method to rank relations from different entity pairs. This method can be easily integrated into our overall relation extraction framework, and boosts the precision significantly.", "title": "" }, { "docid": "266e06e43d8796324336dd5d23ec4d8b", "text": "A realistic power consumption model of wireless communication subsystems typically used in many sensor network node devices is presented. Simple power consumption models for major components are individually identified, and the effective transmission range of a sensor node is modeled by the output power of the transmitting power amplifier, sensitivity of the receiving low noise amplifier, and RF environment. Using this basic model, conditions for minimum sensor network power consumption are derived for communication of sensor data from a source device to a destination node. Power consumption model parameters are extracted for two types of wireless sensor nodes that are widely used and commercially available. For typical hardware configurations and RF environments, it is shown that whenever single hop routing is possible it is almost always more power efficient than multi-hop routing. Further consideration of communication protocol overhead also shows that single hop routing will be more power efficient compared to multi-hop routing under realistic circumstances. This power consumption model can be used to guide design choices at many different layers of the design space including, topology design, node placement, energy efficient routing schemes, power management and the hardware design of future wireless sensor network devices", "title": "" }, { "docid": "8e7d3462f93178f6c2901a429df22948", "text": "This article analyzes China's pension arrangement and notes that China has recently established a universal non-contributory pension plan covering urban non-employed workers and all rural residents, combined with the pension plan covering urban employees already in place. Further, in the latest reform, China has discontinued the special pension plan for civil servants and integrated this privileged welfare class into the urban old-age pension insurance program. With these steps, China has achieved a degree of universalism and integration of its pension arrangement unprecedented in the non-Western world. Despite this radical pension transformation strategy, we argue that the current Chinese pension arrangement represents a case of \"incomplete\" universalism. First, its benefit level is low. Moreover, the benefit level varies from region to region. 
Finally, universalism in rural China has been undermined due to the existence of the \"policy bundle.\" Additionally, we argue that the 2015 pension reform has created a situation in which the stratification of Chinese pension arrangements has been \"flattened,\" even though it remains stratified to some extent.", "title": "" }, { "docid": "01ddd5cf694df46a69341549f70529f8", "text": "The RiskTrack project aims to help in the prevention of terrorism through the identification of online radicalisation. In line with the European Union priorities in this matter, this project has been designed to identify and tackle the indicators that raise a red flag about which individuals or communities are being radicalised and recruited to commit violent acts of terrorism. Therefore, the main goals of this project will be twofold: On the one hand, it is needed to identify the main features and characteristics that can be used to evaluate a risk situation, to do that a risk assessment methodology studying how to detect signs of radicalisation (e.g., use of language, behavioural patterns in social networks...) will be designed. On the other hand, these features will be tested and analysed using advanced data mining methods, knowledge representation (semantic and ontology engineering) and multilingual technologies. The innovative aspect of this project is to not offer just a methodology on risk assessment, but also a tool that is build based on this methodology, so that the prosecutors, judges, law enforcement and other actors can obtain a short term tangible results.", "title": "" }, { "docid": "8d6dd69709ca5737fd39b108a36f2afc", "text": "We outline an architecture pathway to establish a permanent base on Mars that can support a 50-person crew. After base establishment, a long-term colony is formed with a gradual reduction in support and supplies from Earth. A steady cadence of missions is used to validate new technologies, build capabilities, reduce risks, and minimize costs. The missions prior to 2050 include assembly of an interplanetary vehicle in lunar vicinity, a deep space mission, a Mars orbital mission, and two Mars surface missions. We also examine the logistics associated with the base's growth in personnel, capability, and operations to the year 2100. Using a human exploration cost tool developed at Purdue, we find a peak yearly spending of $14.9 billion in 2030 with an average cost of $11.8 billion per year until 2050 and $8.2 billion per year after 2050. Assuming a starting budget of $9 billion per year (2016 NASA human spaceflight budget), an 8.8% increase per year from 2018 to 2024 would be required to meet the peak yearly costs. Unfortunately, such an increase is unlikely given NASA's relatively flat inflation-adjusted budget over the past 20 years. In addition, while commercial launch vehicle costs are dropping due to the emergence of reusable launch vehicles (e.g. SpaceX's Falcon 9), hardware costs are yet to reduce within NASA and their contractors, which suggests one or more of the following scenarios: 1) NASA must change how program costs are managed, 2) the publicized timeline for humans landing on Mars must slip, or 3) a new Mars architecture paradigm must be found.", "title": "" }, { "docid": "ca509048385b8cf28bd7b89c685f21b2", "text": "Recent studies on knowledge base completion, the task of recovering missing relationships based on recorded relations, demonstrate the importance of learning embeddings from multi-step relations. 
However, due to the size of knowledge bases, learning multi-step relations directly on top of observed instances could be costly. In this paper, we propose Implicit ReasoNets (IRNs), which is designed to perform large-scale inference implicitly through a search controller and shared memory. Unlike previous work, IRNs use training data to learn to perform multi-step inference through the shared memory, which is also jointly updated during training. While the inference procedure is not operating on top of observed instances for IRNs, our proposed model outperforms all previous approaches on the popular FB15k benchmark by more than 5.7%.", "title": "" }, { "docid": "d5a23b600f4983c9d3aabbbd202bd390", "text": "Knowledge Discovery in Databases (KDD) is an active and important research area with the promise for a high payoff in many business and scientific applications. One of the main tasks in KDD is classification. A particular efficient method for classification is decision tree induction. The selection of the attribute used at each node of the tree to split the data (split criterion) is crucial in order to correctly classify objects. Different split criteria were proposed in the literature (Information Gain, Gini Index, etc.). It is not obvious which of them will produce the best decision tree for a given data set. A large amount of empirical tests were conducted in order to answer this question. No conclusive results were found. In this paper we introduce a formal methodology, which allows us to compare multiple split criteria. This permits us to present fundamental insights into the decision process. Furthermore, we are able to present a formal description of how to select between split criteria for a given data set. As an illustration we apply the methodology to two widely used split criteria: Gini Index and Information Gain.", "title": "" }, { "docid": "70d8345da0193a048d3dff702834c075", "text": "Recurrent neural networks with various types of hidden units have been used to solve a diverse range of problems involving sequence data. Two of the most recent proposals, gated recurrent units (GRU) and minimal gated units (MGU), have shown comparable promising results on example public datasets. In this paper, we introduce three model variants of the minimal gated unit which further simplify that design by reducing the number of parameters in the forget-gate dynamic equation. These three model variants, referred to simply as MGU1, MGU2, and MGU3, were tested on sequences generated from the MNIST dataset and the real sequences from the Reuters Newswire Topics (RNT) dataset. Here, we report on the RNT results. The new models have shown similar accuracy to the MGU model while using fewer parameters and thus lower training expense. One model variant, namely MGU2, performed better than MGU on the datasets considered, and thus may be used as an alternate to MGU or GRU in recurrent neural networks.", "title": "" }, { "docid": "ee351931c35e5dd1ebe7d528568df394", "text": "We present an automatic method for fitting multiple B-spline curves to unorganized planar points. The method works on point clouds which have complicated topological structures and a single curve is insufficient for fitting the shape. A divide-and-merge algorithm is developed for dividing the unorganized data points into several groups while each group represents a smooth curve. Each point group is then fitted with a B-spline curve by the SDM method. 
Our algorithm also sets up automatically the control polygon of initial B-spline curves. Experiments demonstrate the capability of the presented algorithm in accurate reconstruction of topological structures of point clouds.", "title": "" }, { "docid": "e89124e33d7d208fcdd30c5cccc409d6", "text": "In recent years, financial market dynamics forecasting has been a focus of economic research. To predict the price indices of stock markets, we developed an architecture which combined Elman recurrent neural networks with stochastic time effective function. By analyzing the proposed model with the linear regression, complexity invariant distance (CID), and multiscale CID (MCID) analysis methods and taking the model compared with different models such as the backpropagation neural network (BPNN), the stochastic time effective neural network (STNN), and the Elman recurrent neural network (ERNN), the empirical results show that the proposed neural network displays the best performance among these neural networks in financial time series forecasting. Further, the empirical research is performed in testing the predictive effects of SSE, TWSE, KOSPI, and Nikkei225 with the established model, and the corresponding statistical comparisons of the above market indices are also exhibited. The experimental results show that this approach gives good performance in predicting the values from the stock market indices.", "title": "" }, { "docid": "bbfc488e55fe2dfaff2af73a75c31edd", "text": "This overview covers a wide range of cannabis topics, initially examining issues in dispensaries and self-administration, plus regulatory requirements for production of cannabis-based medicines, particularly the Food and Drug Administration \"Botanical Guidance.\" The remainder pertains to various cannabis controversies that certainly require closer examination if the scientific, consumer, and governmental stakeholders are ever to reach consensus on safety issues, specifically: whether botanical cannabis displays herbal synergy of its components, pharmacokinetics of cannabis and dose titration, whether cannabis medicines produce cyclo-oxygenase inhibition, cannabis-drug interactions, and cytochrome P450 issues, whether cannabis randomized clinical trials are properly blinded, combatting the placebo effect in those trials via new approaches, the drug abuse liability (DAL) of cannabis-based medicines and their regulatory scheduling, their effects on cognitive function and psychiatric sequelae, immunological effects, cannabis and driving safety, youth usage, issues related to cannabis smoking and vaporization, cannabis concentrates and vape-pens, and laboratory analysis for contamination with bacteria and heavy metals. Finally, the issue of pesticide usage on cannabis crops is addressed. New and disturbing data on pesticide residues in legal cannabis products in Washington State are presented with the observation of an 84.6% contamination rate including potentially neurotoxic and carcinogenic agents. With ongoing developments in legalization of cannabis in medical and recreational settings, numerous scientific, safety, and public health issues remain.", "title": "" }, { "docid": "1b37c9f413f1c12d80f5995a40df4684", "text": "Various orodispersible drug formulations have been recently introduced into the market. Oral lyophilisates and orodispersible granules, tablets or films have enriched the therapeutic options. 
In particular, the paediatric and geriatric population may profit from the advantages like convenient administration, lack of swallowing, ease of use. Until now, only a few novel products made it to the market as the development and production usually is more expensive than for conventional oral drug dosage forms like tablets or capsules. The review reports the recent advances, existing and upcoming products, and the significance of formulating patient-friendly oral dosage forms. The preparation of the medicines can be performed both in pharmaceutical industry and in community pharmacies. Recent advances, e.g. drug printing technologies, may facilitate this process for community or hospital pharmacies. Still, regulatory guidelines and pharmacopoeial monographs lack appropriate methods, specifications and global harmonization to foster the development of innovative orodispersible drug dosage forms.", "title": "" }, { "docid": "76c05736b10834a396e370197d84d2d3", "text": "In recent years, crowd counting in still images has attracted many research interests due to its applications in public safety. However, it remains a challenging task for reasons of perspective and scale variations. In this paper, we propose an effective Skip-connection Convolutional Neural Network (SCNN) for crowd counting to overcome the issue of scale variations. The proposed SCNN architecture consists of several multi-scale units to extract multi-scale features. Each multi-scale unit including three convolutional layers builds connections between the input and each convolutional layer. In addition, we propose a scale-related training method to improve the accuracy and robustness of crowd counting. We evaluate our method on three crowd counting benchmarks. Experimental results verify the efficiency of the proposed method, and it achieves superior performance compared with other methods.", "title": "" }, { "docid": "3a4b9578345f0c1ac7a3cc194c783ed0", "text": "Current studies of influence maximization focus almost exclusively on unsigned social networks ignoring the polarities of the relationships between users. Influence maximization in signed social networks containing both positive relationships (e.g., friend or like) and negative relationships (e.g., enemy or dislike) is still a challenging problem which remains much open. A few studies made use of greedy algorithms to solve the problem of positive influence or negative influence maximization in signed social networks. Although greedy algorithm is able to achieve a good approximation, it is computational expensive and not efficient enough. Aiming at this drawback, we propose an alternative method based on Simulated Annealing (SA) for the positive influence maximization problem in this paper. Additionally, we also propose two heuristics to speed up the convergence process of the proposed method. Comprehensive experiments results on three signed social network datasets, Epinions, Slashdot and Wikipedia, demonstrate that our method can yield similar or better performance than the greedy algorithms in terms of positive influence spread but run faster.", "title": "" }, { "docid": "45f6bb33f098a61c4166e3b942501604", "text": "Estimating human age automatically via facial image analysis has lots of potential real-world applications, such as human computer interaction and multimedia communication. However, it is still a challenging problem for the existing computer vision systems to automatically and effectively estimate human ages. 
The aging process is determined by not only the person's gene, but also many external factors, such as health, living style, living location, and weather conditions. Males and females may also age differently. The current age estimation performance is still not good enough for practical use and more effort has to be put into this research direction. In this paper, we introduce the age manifold learning scheme for extracting face aging features and design a locally adjusted robust regressor for learning and prediction of human ages. The novel approach improves the age estimation accuracy significantly over all previous methods. The merit of the proposed approaches for image-based age estimation is shown by extensive experiments on a large internal age database and the public available FG-NET database.", "title": "" }, { "docid": "3172304147c13068b6cec8fd252cda5e", "text": "Widespread growth of open wireless hotspots has made it easy to carry out man-in-the-middle attacks and impersonate web sites. Although HTTPS can be used to prevent such attacks, its universal adoption is hindered by its performance cost and its inability to leverage caching at intermediate servers (such as CDN servers and caching proxies) while maintaining end-to-end security. To complement HTTPS, we revive an old idea from SHTTP, a protocol that offers end-to-end web integrity without confidentiality. We name the protocol HTTPi and give it an efficient design that is easy to deploy for today’s web. In particular, we tackle several previously-unidentified challenges, such as supporting progressive page loading on the client’s browser, handling mixed content, and defining access control policies among HTTP, HTTPi, and HTTPS content from the same domain. Our prototyping and evaluation experience show that HTTPi incurs negligible performance overhead over HTTP, can leverage existing web infrastructure such as CDNs or caching proxies without any modifications to them, and can make many of the mixed-content problems in existing HTTPS web sites easily go away. Based on this experience, we advocate browser and web server vendors to adopt HTTPi.", "title": "" }, { "docid": "aa9fd7c545a42b712e137d72734e20c9", "text": "Our long-term objective is to develop robots that engage in natural language-mediated cooperative tasks with humans. To support this goal, we are developing an amodal representation and associated processes which is called a grounded situation model (GSM). We are also developing a modular architecture in which the GSM resides in a centrally located module, around which there are language, perception, and action-related modules. The GSM acts as a sensor-updated \"structured blackboard\", that serves as a workspace with contents similar to a \"theatrical stage\" in the robot's \"mind\", which might be filled in with present, past or imagined situations. Two main desiderata drive the design of the GSM: first, \"parsing\" situations into ontological types and relations that reflect human language semantics, and second, allowing bidirectional translation between sensory-derived data/expectations and linguistic descriptions. We present an implemented system that allows of a range of conversational and assistive behavior by a manipulator robot. The robot updates beliefs (held in the GSM) about its physical environment, the human user, and itself, based on a mixture of linguistic, visual and proprioceptive evidence. It can answer basic questions about the present or past and also perform actions through verbal interaction. 
Most importantly, a novel contribution of our approach is the robot's ability for seamless integration of both language- and sensor-derived information about the situation: for example, the system can acquire parts of situations either by seeing them or by \"imagining\" them through descriptions given by the user: \"There is a red ball at the left\". These situations can later be used to create mental imagery and sensory expectations, thus enabling the aforementioned bidirectionality", "title": "" }, { "docid": "90e254138a5912daf0650f5ad794743c", "text": "Large scale graph processing represents an interesting challenge due to the lack of locality. This paper presents PathGraph for improving iterative graph computation on graphs with billions of edges. Our system design has three unique features: First, we model a large graph using a collection of tree-based partitions and use an path-centric computation rather than vertex-centric or edge-centric computation. Our parallel computation model significantly improves the memory and disk locality for performing iterative computation algorithms. Second, we design a compact storage that further maximize sequential access and minimize random access on storage media. Third, we implement the path-centric computation model by using a scatter/gather programming model, which parallels the iterative computation at partition tree level and performs sequential updates for vertices in each partition tree. The experimental results show that the path-centric approach outperforms vertex-centric and edge-centric systems on a number of graph algorithms for both in-memory and out-of-core graphs.", "title": "" }, { "docid": "a6ba94c0faf2fd41d8b1bd5a068c6d3d", "text": "The main mechanisms responsible for performance degradation of millimeter wave (mmWave) and terahertz (THz) on-chip antennas are reviewed. Several techniques to improve the performance of the antennas and several high efficiency antenna types are presented. In order to illustrate the effects of the chip topology on the antenna, simulations and measurements of mmWave and THz on-chip antennas are shown. Finally, different transceiver architectures are explored with emphasis on the challenges faced in a wireless multi-core environment.", "title": "" }, { "docid": "686585ee0ab55dfeaa98efef5b496035", "text": "This paper presents an embedded adaptive robust controller for trajectory tracking and stabilization of an omnidirectional mobile platform with parameter variations and uncertainties caused by friction and slip. Based on a dynamic model of the platform, the adaptive controller to achieve point stabilization, trajectory tracking, and path following is synthesized via the adaptive backstepping approach. This robust adaptive controller is then implemented into a high-performance field-programmable gate array chip using hardware/software codesign technique and system-on-a-programmable-chip design concept with a reusable user intellectual property core library. Furthermore, a soft-core processor and a real-time operating system are embedded into the same chip for realizing the control law to steer the mobile platform. Simulation results are conducted to show the effectiveness and merit of the proposed control method in comparison with a conventional proportional-integral feedback controller. The performance and applicability of the proposed embedded adaptive controller are exemplified by conducting several experiments on an autonomous omnidirectional mobile robot.", "title": "" } ]
scidocsrr
1dfbdd6d83ac595c6a23b457b10ef392
Good language-switchers are good task-switchers: evidence from Spanish-English and Mandarin-English bilinguals.
[ { "docid": "ca6e91eb89850bae6ff938dc2a7602d5", "text": "OBJECTIVES\nThere is strong epidemiologic evidence to suggest that older adults who maintain an active lifestyle in terms of social, mental, and physical engagement are protected to some degree against the onset of dementia. Such factors are said to contribute to cognitive reserve, which acts to compensate for the accumulation of amyloid and other brain pathologies. We present evidence that lifelong bilingualism is a further factor contributing to cognitive reserve.\n\n\nMETHODS\nData were collected from 211 consecutive patients diagnosed with probable Alzheimer disease (AD). Patients' age at onset of cognitive impairment was recorded, as was information on occupational history, education, and language history, including fluency in English and any other languages. Following this procedure, 102 patients were classified as bilingual and 109 as monolingual.\n\n\nRESULTS\nWe found that the bilingual patients had been diagnosed 4.3 years later and had reported the onset of symptoms 5.1 years later than the monolingual patients. The groups were equivalent on measures of cognitive and occupational level, there was no apparent effect of immigration status, and the monolingual patients had received more formal education. There were no gender differences.\n\n\nCONCLUSIONS\nThe present data confirm results from an earlier study, and thus we conclude that lifelong bilingualism confers protection against the onset of AD. The effect does not appear to be attributable to such possible confounding factors as education, occupational status, or immigration. Bilingualism thus appears to contribute to cognitive reserve, which acts to compensate for the effects of accumulated neuropathology.", "title": "" }, { "docid": "65463adcbfdf0c3236c3d84417d9ac21", "text": "This study investigated the possibility that lifelong bilingualism may lead to enhanced efficiency in the ability to shift between mental sets. We compared the performance of monolingual and fluent bilingual college students in a task-switching paradigm. Bilinguals incurred reduced switching costs in the task-switching paradigm when compared with monolinguals, suggesting that lifelong experience in switching between languages may contribute to increased efficiency in the ability to shift flexibly between mental sets. On the other hand, bilinguals did not differ from monolinguals in the differential cost of performing mixed-task as opposed to single-task blocks. Together, these results indicate that bilingual advantages in executive function most likely extend beyond inhibition of competing responses, and encompass flexible mental shifting as well.", "title": "" }, { "docid": "6b52cc8055bd565e1f04095da8a7a5e9", "text": "This study examined the effect of lifelong bilingualism on maintaining cognitive functioning and delaying the onset of symptoms of dementia in old age. The sample was selected from the records of 228 patients referred to a Memory Clinic with cognitive complaints. The final sample consisted of 184 patients diagnosed with dementia, 51% of whom were bilingual. The bilinguals showed symptoms of dementia 4 years later than monolinguals, all other measures being equivalent. 
Additionally, the rate of decline in Mini-Mental State Examination (MMSE) scores over the 4 years subsequent to the diagnosis was the same for a subset of patients in the two groups, suggesting a shift in onset age with no change in rate of progression.", "title": "" }, { "docid": "a4a6498da9e2579bdd333328a4de78f7", "text": "Executive functions (EFs), also called cognitive control, are critical for success in school and life. Although EF skills are rarely taught, they can be. The Tools of the Mind (Tools) curriculum improves EFs in preschoolers in regular classrooms with regular teachers at minimal expense. Core EF skills are (i) inhibitory control (resisting habits, temptations, or distractions), (ii) working memory (mentally holding and using information), and (iii) cognitive flexibility (adjusting to change) (1, 2).", "title": "" }, { "docid": "87fa3f2317b53520839bc3cb90cf291b", "text": "In an experimental study of language switching and selection, bilinguals named numerals in either their first or second language unpredictably. Response latencies (RTs) on switch trials (where the response language changed from the previous trial) were slower than on nonswitch trials. As predicted, the language-switching cost was consistently larger when switching to the dominant L1 from the weaker L2 than vice versa such that, on switch trials, L1 responses were slower than in L2. This “paradoxical” asymmetry in the cost of switching languages is explained in terms of differences in relative strength of the bilingual’s two languages and the involuntary persistence of the previous language set across an intended switch of language. Naming in the weaker language, L2, requires active inhibition or suppression of the stronger competitor language, L1; the inhibition persists into the following (switch) trial in the form of “negative priming” of the L1 lexicon as a whole. © 1999 Academic Press", "title": "" } ]
[ { "docid": "d7ce0c17a0fcd50dfb59e265808daaf7", "text": "OBJECTIVE\nTo examine the quality of oral health promotion research evidence and to assess the effectiveness of health promotion, aimed at improving oral health using a systematic and scientifically defensible methodology.\n\n\nBASIC RESEARCH DESIGN\nSystematic review of oral health promotion research evidence using electronic searching, iterative hand-searching, critical appraisal and data synthesis.\n\n\nCLINICAL SETTING\nThe settings of the primary research reviewed were clinical, community, schools or other institutions. The participants were children, the elderly, adults and people with handicaps and disabilities.\n\n\nINTERVENTIONS\nOnly studies which reported an evaluative component were included. Theoretical and purely descriptive papers were excluded.\n\n\nMAIN OUTCOME MEASURES\nThe review examined the evidence of effectiveness of oral health promotion on caries, oral hygiene, oral health related knowledge, attitudes and behaviours.\n\n\nRESULTS\nVery few definitive conclusions about the effectiveness of oral health promotion can be drawn from the currently available evidence. Caries and periodontal disease can be controlled by regular toothbrushing with a fluoride toothpaste but a cost-effective method for reliably promoting such behaviour has not yet been established. Knowledge levels can almost always be improved by oral health promotion initiatives but whether these shifts in knowledge and attitudes can be causally related to changes in behaviour or clinical indices of disease has also not been established.\n\n\nCONCLUSIONS\nOral health promotion which brings about the use of fluoride is effective for reducing caries. Chairside oral health promotion has been shown to be effective more consistently than other methods of health promotion. Mass media programmes have not been shown to be effective. The quality of oral health promotion evaluation research needs to be improved.", "title": "" }, { "docid": "e4132ac9af863c2c17489817898dbd1c", "text": "This paper presents automatic parallel parking for car-like vehicle, with highlights on a path planning algorithm for arbitrary initial angle using two tangential arcs of different radii. The algorithm is divided into three parts. Firstly, a simple kinematic model of the vehicle is established based on Ackerman steering geometry; secondly, not only a minimal size of the parking space is analyzed based on the size and the performance of the vehicle but also an appropriate target point is chosen based on the size of the parking space and the vehicle; Finally, a path is generated based on two tangential arcs of different radii. The simulation results show that the feasibility of the proposed algorithm.", "title": "" }, { "docid": "cddd8adea2d507d937db4052627136fd", "text": "For the reception of Satellite Digital Audio Radio Services (SDARS) and Global Positioning Systems (GPS) transmitted via satellite an invisible antenna combination embedded in the roof of a car is presented. Without changing the surface of the vehicle the antenna combination can be completely embedded in a metal cavity and covered by a thick dielectric part of the roof. The measurement results show a high efficiency and a large bandwidth which exceeds the necessary bandwidth significantly for both services. 
The antenna combination offers a radiation pattern which is tailored to the reception of SDARS signals transmitted via highly-elliptical-orbit (HEO) satellites, geostationary earth orbit (GEO) satellites and terrestrial repeaters and for GPS signals transmitted via medium earth orbit (MEO) satellites. Although the antennas are mounted in such a small mounting volume, the antennas are decoupled optimally.", "title": "" }, { "docid": "b3a9ad04e7df1b2250f0a7b625509efd", "text": "Emotions are very important in human-human communication but are usually ignored in human-computer interaction. Recent work focuses on recognition and generation of emotions as well as emotion driven behavior. Our work focuses on the use of emotions in dialogue systems that can be used with speech input or as well in multi-modal environments.This paper describes a framework for using emotional cues in a dialogue system and their informational characterization. We describe emotion models that can be integrated into the dialogue system and can be used in different domains and tasks. Our application of the dialogue system is planned to model multi-modal human-computer-interaction with a humanoid robotic system.", "title": "" }, { "docid": "125a65c489bbb8541577e65015a33fe9", "text": "Users of the TIMESAT program are welcome to contact the authors in order to receive the most updated version of the program. The authors are also happy to answer questions on optimal parameter settings.", "title": "" }, { "docid": "565dcf584448f6724a6529c3d2147a68", "text": "People are fond of taking and sharing photos in their social life, and a large part of it is face images, especially selfies. A lot of researchers are interested in analyzing attractiveness of face images. Benefited from deep neural networks (DNNs) and training data, researchers have been developing deep learning models that can evaluate facial attractiveness of photos. However, recent development on DNNs showed that they could be easily fooled even when they are trained on a large dataset. In this paper, we used two approaches to generate adversarial examples that have high attractiveness scores but low subjective scores for face attractiveness evaluation on DNNs. In the first approach, experimental results using the SCUT-FBP dataset showed that we could increase attractiveness score of 20 test images from 2.67 to 4.99 on average (score range: [1, 5]) without noticeably changing the images. In the second approach, we could generate similar images from noise image with any target attractiveness score. Results show by using this approach, a part of attractiveness information could be manipulated artificially.", "title": "" }, { "docid": "9c4845279d61619594461d140cfd9311", "text": "This paper presents a fusion approach for improving human action recognition based on two differing modality sensors consisting of a depth camera and an inertial body sensor. Computationally efficient action features are extracted from depth images provided by the depth camera and from accelerometer signals provided by the inertial body sensor. These features consist of depth motion maps and statistical signal attributes. For action recognition, both feature-level fusion and decision-level fusion are examined by using a collaborative representation classifier. 
In the feature-level fusion, features generated from the two differing modality sensors are merged before classification, while in the decision-level fusion, the Dempster-Shafer theory is used to combine the classification outcomes from two classifiers, each corresponding to one sensor. The introduced fusion framework is evaluated using the Berkeley multimodal human action database. The results indicate that because of the complementary aspect of the data from these sensors, the introduced fusion approaches lead to 2% to 23% recognition rate improvements depending on the action over the situations when each sensor is used individually.", "title": "" }, { "docid": "954d0ef5a1a648221ce8eb3f217f4071", "text": "Deep learning has revolutionized many machine learning tasks in recent years, ranging from image classification and video processing to speech recognition and natural language understanding. The data in these tasks are typically represented in the Euclidean space. However, there is an increasing number of applications where data are generated from non-Euclidean domains and are represented as graphs with complex relationships and interdependency between objects. The complexity of graph data has imposed significant challenges on existing machine learning algorithms. Recently, many studies on extending deep learning approaches for graph data have emerged. In this survey, we provide a comprehensive overview of graph neural networks (GNNs) in data mining and machine learning fields. We propose a new taxonomy to divide the state-of-the-art graph neural networks into different categories. With a focus on graph convolutional networks, we review alternative architectures that have recently been developed; these learning paradigms include graph attention networks, graph autoencoders, graph generative networks, and graph spatial-temporal networks. We further discuss the applications of graph neural networks across various domains and summarize the open source codes and benchmarks of the existing algorithms on different learning tasks. Finally, we propose potential research directions in this", "title": "" }, { "docid": "6f7fc5e2953cfb8173ab5a54e3d16b93", "text": "There are three modalities in the reading comprehension setting: question, answer and context. The task of question answering or question generation aims to infer an answer or a question when given the counterpart based on context. We present a novel two-way neural sequence transduction model that connects three modalities, allowing it to learn two tasks simultaneously and mutually benefit one another. During training, the model receives question-context-answer triplets as input and captures the cross-modal interaction via a hierarchical attention process. Unlike previous joint learning paradigms that leverage the duality of question generation and question answering at data level, we solve such dual tasks at the architecture level by mirroring the network structure and partially sharing components at different layers. This enables the knowledge to be transferred from one task to another, helping the model to find a general representation for each modality. 
The evaluation on four public datasets shows that our dual-learning model outperforms the mono-learning counterpart as well as the state-of-the-art joint models on both question answering and question generation tasks.", "title": "" }, { "docid": "284587aa1992afe3c90fddc2cf2a8906", "text": "Plant genomes contribute to the structure and function of the plant microbiome, a key determinant of plant health and productivity. High-throughput technologies are revealing interactions between these complex communities and their hosts in unprecedented detail.", "title": "" }, { "docid": "b0d11ab83aa6ae18d1a2be7c8e8803b5", "text": "Judgments of trustworthiness from faces determine basic approach/avoidance responses and approximate the valence evaluation of faces that runs across multiple person judgments. Here, based on trustworthiness judgments and using a computer model for face representation, we built a model for representing face trustworthiness (study 1). Using this model, we generated novel faces with an increased range of trustworthiness and used these faces as stimuli in a functional Magnetic Resonance Imaging study (study 2). Although participants did not engage in explicit evaluation of the faces, the amygdala response changed as a function of face trustworthiness. An area in the right amygdala showed a negative linear response-as the untrustworthiness of faces increased so did the amygdala response. Areas in the left and right putamen, the latter area extended into the anterior insula, showed a similar negative linear response. The response in the left amygdala was quadratic--strongest for faces on both extremes of the trustworthiness dimension. The medial prefrontal cortex and precuneus also showed a quadratic response, but their response was strongest to faces in the middle range of the trustworthiness dimension.", "title": "" }, { "docid": "90fc857db7207f0a94dd91fbaa48be4f", "text": "We present a computational origami construction of Morley’s triangles and automated proof of correctness of the generalized Morley’s theorem in a streamlined process of solving-computing-proving. The whole process is realized by a computational origami system being developed by us. During the computational origami construction, geometric constraints in symbolic and numeric representation are generated and accumulated. Those constraints are then transformed into algebraic relations, which in turn are used to prove the correctness of the construction. The automated proof required non-trivial amount of computer resources, and shows the necessity of networked services of mathematical software. This example is considered to be a case study for innovative mathematical knowledge management.", "title": "" }, { "docid": "6fa6a26b351c45ac5f33f565bc9c01e8", "text": "Transfer learning, or inductive transfer, refers to the transfer of knowledge from a source task to a target task. In the context of convolutional neural networks (CNNs), transfer learning can be implemented by transplanting the learned feature layers from one CNN (derived from the source task) to initialize another (for the target task). Previous research has shown that the choice of the source CNN impacts the performance of the target task. In the current literature, there is no principled way for selecting a source CNN for a given target task despite the increasing availability of pre-trained source CNNs. In this paper we investigate the possibility of automatically ranking source CNNs prior to utilizing them for a target task. 
In particular, we present an information theoretic framework to understand the source-target relationship and use this as a basis to derive an approach to automatically rank source CNNs in an efficient, zero-shot manner. The practical utility of the approach is thoroughly evaluated using the PlacesMIT dataset, MNIST dataset and a real-world MRI database. Experimental results demonstrate the efficacy of the proposed ranking method for transfer learning.", "title": "" }, { "docid": "c66df34c3a9b34de22c8053044ce5eaa", "text": "Over the past decade, hospitals in Greece have made significant investments in adopting and implementing new hospital information systems (HISs). Whether these investments will prove beneficial for these organizations depends on the support that will be provided to ensure the effective use of the information systems implemented and also on the satisfaction of its users, which is one of the most important determinants of the success of these systems. Measuring end-user computing satisfaction has a long history within the IS discipline. A number of attempts have been made to evaluate the overall post hoc impact of HIS, focusing on the end-users and more specifically on their satisfaction and the parameters that determine it. The purpose of this paper is to build further upon the existing body of the relevant knowledge by testing past models and suggesting new conceptual perspectives on how end-user computing satisfaction (EUCS) is formed among hospital information system users. All models are empirically tested using data from hospital information system (HIS) users (283). Correlation, explanatory and confirmation factor analysis was performed to test the reliability and validity of the measurement models. The structural equation modeling technique was also used to evaluate the causal models. The empirical results of the study provide support for the EUCS model (incorporating new factors) and enhance the generalizability of the EUCS instrument and its robustness as a valid measure of computing satisfaction and a surrogate for system success in a variety of cultural and linguistic settings. Although the psychometric properties of EUCS appear to be robust across studies and user groups, it should not be considered as the final chapter in the validation and refinement of these scales. Continuing efforts should be made to validate and extend the instrument.", "title": "" }, { "docid": "0cbc2eb794f44b178a54d97aeff69c19", "text": "Automatic identification of predatory conversations i chat logs helps the law enforcement agencies act proactively through early detection of predatory acts in cyberspace. In this paper, we describe the novel application of a deep learnin g method to the automatic identification of predatory chat conversations in large volumes of ch at logs. We present a classifier based on Convolutional Neural Network (CNN) to address this problem domain. The proposed CNN architecture outperforms other classification techn iques that are common in this domain including Support Vector Machine (SVM) and regular Neural Network (NN) in terms of classification performance, which is measured by F 1-score. In addition, our experiments show that using existing pre-trained word vectors are no t suitable for this specific domain. Furthermore, since the learning algorithm runs in a m ssively parallel environment (i.e., general-purpose GPU), the approach can benefit a la rge number of computation units (neurons) compared to when CPU is used. 
To the best of our knowledge, this is the first time that CNNs are adapted and applied to this application domain.", "title": "" }, { "docid": "f96098449988c433fe8af20be0c468a5", "text": "Programmatic assessment is an integral approach to the design of an assessment program with the intent to optimise its learning function, its decision-making function and its curriculum quality-assurance function. Individual methods of assessment, purposefully chosen for their alignment with the curriculum outcomes and their information value for the learner, the teacher and the organisation, are seen as individual data points. The information value of these individual data points is maximised by giving feedback to the learner. There is a decoupling of assessment moment and decision moment. Intermediate and high-stakes decisions are based on multiple data points after a meaningful aggregation of information and supported by rigorous organisational procedures to ensure their dependability. Self-regulation of learning, through analysis of the assessment information and the attainment of the ensuing learning goals, is scaffolded by a mentoring system. Programmatic assessment-for-learning can be applied to any part of the training continuum, provided that the underlying learning conception is constructivist. This paper provides concrete recommendations for implementation of programmatic assessment.", "title": "" }, { "docid": "e7374affb280ac8c24d45f99a8b62c98", "text": "Deep generative models (DGMs) can effectively capture the underlying distributions of complex data by learning multilayered representations and performing inference. However, it is relatively insufficient to boost the discriminative ability of DGMs. This paper presents max-margin deep generative models (mmDGMs) and a class-conditional variant (mmDCGMs), which explore the strongly discriminative principle of max-margin learning to improve the predictive performance of DGMs in both supervised and semi-supervised learning, while retaining the generative capability. In semi-supervised learning, we use the predictions of a max-margin classifier as the missing labels instead of performing full posterior inference for efficiency; we also introduce additional max-margin and label-balance regularization terms of unlabeled data for effectiveness. We develop an efficient doubly stochastic subgradient algorithm for the piecewise linear objectives in different settings. Empirical results on various datasets demonstrate that: (1) max-margin learning can significantly improve the prediction performance of DGMs and meanwhile retain the generative ability; (2) in supervised learning, mmDGMs are competitive to the best fully discriminative networks when employing convolutional neural networks as the generative and recognition models; and (3) in semi-supervised learning, mmDCGMs can perform efficient inference and achieve state-of-the-art classification results on several benchmarks.", "title": "" }, { "docid": "afc97b142e1891b3cf0fb2a049b3e1cd", "text": "This study is designed to determine the relationship between job redesign, employee empowerment and intent to quit measured by affective organizational commitment among survivors of organizational restructuring and downsizing. It focused on middle level managers and employees in supervisory positions because survivors of this group are often called upon to assume expanded roles, functions and responsibilities in a post restructuring and downsizing environment. 
The results show statistically significant positive relationships between job redesign, empowerment and affective commitment. It, therefore, provides empirical data to support theoretical models for managing and mitigating survivors’ intent to quit and subsequent voluntary turnover among survivors of organizational restructuring and downsizing. The implications of these findings, which suggest expanded roles for job redesign and employee empowerment, are discussed.", "title": "" }, { "docid": "68167dc56e802daf2899574c3094ab43", "text": "Gram-negative bacteria have evolved numerous two-component systems (TCSs) to cope with external environmental changes. The CpxA/CpxR TCS, consisting of the kinase CpxA and the regulator CpxR, is known to be involved in the biofilm formation and virulence of Escherichia coli. However, the role of CpxA/CpxR remained unclear in Actinobacillus pleuropneumoniae, a bacterial pathogen that can cause porcine contagious pleuropneumonia (PCP). In this report, we show that CpxA/CpxR contributes to the biofilm formation ability of A. pleuropneumoniae. Furthermore, we demonstrate that CpxA/CpxR plays an important role in the expression of several biofilm-related genes in A. pleuropneumoniae, such as rpoE and pgaC. Furthermore, the results of electrophoretic mobility shift assays (EMSAs) and DNase I footprinting analysis demonstrate that CpxR-P can regulate the expression of the pgaABCD operon through rpoE. In an experimental infection of mice, the animals infected with a cpxA/cpxR mutant exhibited delayed mortality and lower bacterial loads in the lung than those infected with the wild-type bacteria. In conclusion, these results indicate that the CpxA/CpxR TCS plays a contributing role in the biofilm formation and virulence of A. pleuropneumoniae.", "title": "" }, { "docid": "11a140232485cb8bcc4914b8538ab5ea", "text": "We explain why we feel that the comparison between Common Lisp and Fortran in a recent article by Fateman et al. in this journal is not entirely fair.", "title": "" } ]
scidocsrr
b0e69140fa13f425e840b24c0633ee92
Factors Influencing Emoji Usage in Smartphone Mediated Communications
[ { "docid": "546af5877fcd3bbf8d1354701f1ead12", "text": "Recent studies have found that people interpret emoji characters inconsistently, creating significant potential for miscommunication. However, this research examined emoji in isolation, without consideration of any surrounding text. Prior work has hypothesized that examining emoji in their natural textual contexts would substantially reduce the observed potential for miscommunication. To investigate this hypothesis, we carried out a controlled study with 2,482 participants who interpreted emoji both in isolation and in multiple textual contexts. After comparing the variability of emoji interpretation in each condition, we found that our results do not support the hypothesis in prior work: when emoji are interpreted in textual contexts, the potential for miscommunication appears to be roughly the same. We also identify directions for future research to better understand the interplay between emoji and textual context.", "title": "" } ]
[ { "docid": "acd6c7715fb1e15a123778033672f070", "text": "Classical statistical inference of experimental data assumes that the treatment affects the test group but not the control group. This assumption will typically be violated when experimenting in marketplaces because of general equilibrium effects: changing test demand affects the supply available to the control group. We illustrate this with an email marketing campaign performed by eBay. Ignoring test-control interference leads to estimates of the campaign's effectiveness which are too large by a factor of around two. We present the simple economics of this bias in a supply and demand framework, showing that the bias is larger in magnitude where there is more inelastic supply, and is positive if demand is elastic.", "title": "" }, { "docid": "e9b3add48e75ae208fe4d0be7413ae53", "text": "Bacillus spp. are commonly used as probiotic species in the feed industry, however, their benefits need to be confirmed. This study describes a high throughput screening combined with the detailed characterization of endospore-forming bacteria with the aim to identify new Bacillus spp. strains for use as probiotic additives in pig feed. A total of 245 bacterial isolates derived from African fermented food, feces and soil were identified by 16S rRNA gene sequencing and screened for antimicrobial activity and growth in the presence of antibiotics, bile salts and at pH 4.0. Thirty-three Bacillus spp. isolates with the best characteristics were identified by gyrB and rpoB gene sequencing as B. amyloliquefaciens subsp. plantarum, B. amyloliquefaciens subsp. amyloliquefaciens, B. subtilis subsp. subtilis, B. licheniformis, B. mojavensis, B. pumilus and B. megaterium. These isolates were further investigated for their activity against the pathogenic bacteria, antibiotic susceptibility, sporulation rates, biofilm formation and production of glycosyl hydrolytic enzymes. Additionally, ten selected isolates were assessed for heat resistance of spores and the effect on porcine epithelial cells IPEC-J2. Isolates of B. amyloliquefaciens, B. subtilis and B. mojavensis, showed the best overall characteristics and, therefore, potential for usage as probiotic additives in feed. A large number of taxonomically diverse strains made it possible to reveal species and subspecies-specific trends, contributing to our understanding of the probiotic potential of Bacillus species.", "title": "" }, { "docid": "afe44962393bf0d250571f7cd7e82677", "text": "Analytics is a field of research and practice that aims to reveal new patterns of information through the collection of large sets of data held in previously distinct sources. Growing interest in data and analytics in education, teaching, and learning raises the priority for increased, high-quality research into the models, methods, technologies, and impact of analytics. The challenges of applying analytics on academic and ethical reliability to control over data. The other challenge is that the educational landscape is extremely turbulent at present, and key challenge is the appropriate collection, protection and use of large data sets. This paper brings out challenges of multi various pertaining to the domain by offering a big data model for higher education system.", "title": "" }, { "docid": "576d911990bb207eebaaca6ab137cc7a", "text": "The online fingerprints by biometric system is not widely used now a days and there is less scope as user is friendly with the system. 
This paper presents a framework for applying the latent fingerprints obtained from a crime scene. These prints are matched against our database to identify the criminal. For this process we have to collect the fingerprints of all citizens. This technique may reduce crime to a large extent. Latent prints are different from patent prints: they are left accidentally at the time of the crime and are found afterwards. In this approach we collect these fingerprints using chemicals, powder, lasers and other physical means. Sometimes fingerprints have a broken curve that is not clear due to low pressure. We apply the M_join algorithm to join the curve and achieve better results. Thus, our proposed approach eliminates pseudo minutiae and joins the broken curves in fingerprints.", "title": "" }, { "docid": "e37a93ff39840e1d6df589b415848a85", "text": "In this paper we propose a stacked generalization (or stacking) model for event extraction in bio-medical text. Event extraction deals with the process of extracting detailed biological phenomena, which is more challenging compared to the traditional binary relation extraction such as protein-protein interaction. The overall process consists of mainly three steps: event trigger detection, argument extraction by edge detection and finding the correct combination of arguments. In stacking, we use Linear Support Vector Classification (Linear SVC), Logistic Regression (LR) and Stochastic Gradient Descent (SGD) as base-level learning algorithms. As meta-level learner we use Linear SVC. In the edge detection step, we find the arguments of the triggers detected in the trigger detection step using an SVM classifier. To find the correct combination of arguments, we use rules generated by studying the properties of bio-molecular event expressions, and form an event expression consisting of the event trigger, its class and arguments. The output of trigger detection is fed to edge detection for argument extraction. Experiments on benchmark datasets of BioNLP2011 show recall, precision and F-score of 48.96%, 66.46% and 56.38%, respectively. Comparisons with the existing systems show that our proposed model attains state-of-the-art performance.", "title": "" }, { "docid": "48a35175b4ceb3b411bd941f7adec5f9", "text": "Pattern synthesis of linear antennas utilizing the spherical Bessel functions is presented. This leads to antenna current distribution by the Legendre polynomials of the first kind, which are of finite support. Some examples are given to illustrate this procedure.", "title": "" }, { "docid": "96bd733f9168bed4e400f315c57a48e8", "text": "New phase transition phenomena have recently been discovered for the stochastic block model, for the special case of two non-overlapping symmetric communities. This gives rise in particular to new algorithmic challenges driven by the thresholds. This paper investigates whether a general phenomenon takes place for multiple communities, without imposing symmetry. In the general stochastic block model SBM(n,p,W), n vertices are split into k communities of relative size {pi}i∈[k], and vertices in community i and j connect independently with probability {Wij}i,j∈[k]. This paper investigates the partial and exact recovery of communities in the general SBM (in the constant and logarithmic degree regimes), and uses the generality of the results to tackle overlapping communities. 
The contributions of the paper are: (i) an explicit characterization of the recovery threshold in the general SBM in terms of a new f-divergence function D+, which generalizes the Hellinger and Chernoff divergences, and which provides an operational meaning to a divergence function analog to the KL-divergence in the channel coding theorem, (ii) the development of an algorithm that recovers the communities all the way down to the optimal threshold and runs in quasi-linear time, showing that exact recovery has no information-theoretic to computational gap for multiple communities, (iii) the development of an efficient algorithm that detects communities in the constant degree regime with an explicit accuracy bound that can be made arbitrarily close to 1 when a prescribed signal-to-noise ratio [defined in terms of the spectrum of diag(p)W] tends to infinity.", "title": "" }, { "docid": "037ea3bdc1adf619a3e2cccf6fb113c5", "text": "This chapter focuses on the expression of ideologies in various structures of text and talk. It is situated within the broader framework of a research project on discourse and ideology which has been conducted at the University of Amsterdam since 1993. The theoretical premise of this study is that ideologies are typically, though not exclusively, expressed and reproduced in discourse and communication, including non-verbal semiotic messages, such as pictures, photographs and movies. Obviously, ideologies are also enacted in other forms of action and interaction, and their reproduction is often embedded in organizational and institutional contexts. Thus, racist ideologies may be expressed and reproduced in racist talk, comics or movies in the context of the mass media, but they may also be enacted in many forms of discrimination and institutionalized by racist parties within the context of the mass media or of Western parliamentary democracies. However, among the many forms of reproduction and interaction, discourse plays a prominent role as the preferential site for the explicit, verbal formulation and the persuasive communication of ideological propositions.", "title": "" }, { "docid": "6b5bde39af1260effa0587d8c6afa418", "text": "This survey highlights the major issues concerning privacy and security in online social networks. Firstly, we discuss research that aims to protect user data from the various attack vantage points including other users, advertisers, third party application developers, and the online social network provider itself. Next we cover social network inference of user attributes, locating hubs, and link prediction. Because online social networks are so saturated with sensitive information, network inference plays a major privacy role. As a response to the issues brought forth by client-server architectures, distributed social networks are discussed. We then cover the challenges that providers face in maintaining the proper operation of an online social network including minimizing spam messages, and reducing the number of sybil accounts. Finally, we present research in anonymizing social network data. This area is of particular interest in order to continue research in this field both in academia and in industry.", "title": "" }, { "docid": "b85ca4a4b564fcb61001fd13332ddc65", "text": "Although the archaeological site of Edzná is one of the more accessible Mayan ruins, being located scarcely 60 km to the southeast of the port-city of Campeche, it has until recently escaped the notice which its true significance would seem to merit. 
Not only does it appear to have been the earliest major Mayan urban center, dating to the middle of the second century before the Christian era and having served as the focus of perhaps as many as 20,000 inhabitants, but there is also a growing body of evidence to suggest that it played a key role in the development of Mayan astronomy and calendrics. Among the innovations that seemingly had their origin in Edzná are the Maya's fixing of their New Year's Day, the concept of \"year bearers\", and what is probably the oldest lunar observatory in the New World.", "title": "" }, { "docid": "5a4959ef609e2ed64018aed292b7f27f", "text": "With thousands of alerts identified by IDSs every day, the process of distinguishing which alerts are important (i.e., true positives) and which are irrelevant (i.e., false positives) has become more complicated. The security administrator must analyze each single alert as either a true or a false alert. This paper proposes an alert prioritization model, which is based on risk assessment. The model uses indicators, such as priority, reliability and asset value, as decision factors to calculate an alert's risk. The objective is to determine the impact of certain alerts generated by an IDS on the security status of an information system, and also to improve the detection of intrusions using Snort by classifying the most critical alerts by their levels of risk; thus, only the alerts that present a real threat will be displayed to the security administrator, so we reduce the number of false positives and minimize the analysis time of the alerts. The model was evaluated using the KDD Cup 99 dataset as the test environment and a pattern matching algorithm.", "title": "" }, { "docid": "f6ad0d01cb66c1260c1074c4f35808c6", "text": "BACKGROUND\nUnilateral spatial neglect causes difficulty attending to one side of space. Various rehabilitation interventions have been used but evidence of their benefit is lacking.\n\n\nOBJECTIVES\nTo assess whether cognitive rehabilitation improves functional independence, neglect (as measured using standardised assessments), destination on discharge, falls, balance, depression/anxiety and quality of life in stroke patients with neglect measured immediately post-intervention and at longer-term follow-up; and to determine which types of interventions are effective and whether cognitive rehabilitation is more effective than standard care or an attention control.\n\n\nSEARCH METHODS\nWe searched the Cochrane Stroke Group Trials Register (last searched June 2012), MEDLINE (1966 to June 2011), EMBASE (1980 to June 2011), CINAHL (1983 to June 2011), PsycINFO (1974 to June 2011), UK National Research Register (June 2011). We handsearched relevant journals (up to 1998), screened reference lists, and tracked citations using SCISEARCH.\n\n\nSELECTION CRITERIA\nWe included randomised controlled trials (RCTs) of cognitive rehabilitation specifically aimed at spatial neglect. We excluded studies of general stroke rehabilitation and studies with mixed participant groups, unless more than 75% of their sample were stroke patients or separate stroke data were available.\n\n\nDATA COLLECTION AND ANALYSIS\nTwo review authors independently selected studies, extracted data, and assessed study quality. 
For subgroup analyses, review authors independently categorised the approach underlying the cognitive intervention as either 'top-down' (interventions that encourage awareness of the disability and potential compensatory strategies) or 'bottom-up' (interventions directed at the impairment but not requiring awareness or behavioural change, e.g. wearing prisms or patches).\n\n\nMAIN RESULTS\nWe included 23 RCTs with 628 participants (adding 11 new RCTs involving 322 new participants for this update). Only 11 studies were assessed to have adequate allocation concealment, and only four studies to have a low risk of bias in all categories assessed. Most studies measured outcomes using standardised neglect assessments: 15 studies measured effect on activities of daily living (ADL) immediately after the end of the intervention period, but only six reported persisting effects on ADL. One study (30 participants) reported discharge destination and one study (eight participants) reported the number of falls.Eighteen of the 23 included RCTs compared cognitive rehabilitation with any control intervention (placebo, attention or no treatment). Meta-analyses demonstrated no statistically significant effect of cognitive rehabilitation, compared with control, for persisting effects on either ADL (five studies, 143 participants) or standardised neglect assessments (eight studies, 172 participants), or for immediate effects on ADL (10 studies, 343 participants). In contrast, we found a statistically significant effect in favour of cognitive rehabilitation compared with control, for immediate effects on standardised neglect assessments (16 studies, 437 participants, standardised mean difference (SMD) 0.35, 95% confidence interval (CI) 0.09 to 0.62). However, sensitivity analyses including only studies of high methodological quality removed evidence of a significant effect of cognitive rehabilitation.Additionally, five of the 23 included RCTs compared one cognitive rehabilitation intervention with another. These included three studies comparing a visual scanning intervention with another cognitive rehabilitation intervention, and two studies (three comparison groups) comparing a visual scanning intervention plus another cognitive rehabilitation intervention with a visual scanning intervention alone. Only two small studies reported a measure of functional disability and there was considerable heterogeneity within these subgroups (I² > 40%) when we pooled standardised neglect assessment data, limiting the ability to draw generalised conclusions.Subgroup analyses exploring the effect of having an attention control demonstrated some evidence of a statistically significant difference between those comparing rehabilitation with attention control and those with another control or no treatment group, for immediate effects on standardised neglect assessments (test for subgroup differences, P = 0.04).\n\n\nAUTHORS' CONCLUSIONS\nThe effectiveness of cognitive rehabilitation interventions for reducing the disabling effects of neglect and increasing independence remains unproven. As a consequence, no rehabilitation approach can be supported or refuted based on current evidence from RCTs. However, there is some very limited evidence that cognitive rehabilitation may have an immediate beneficial effect on tests of neglect. This emerging evidence justifies further clinical trials of cognitive rehabilitation for neglect. 
However, future studies need to have appropriate high quality methodological design and reporting, to examine persisting effects of treatment and to include an attention control comparator.", "title": "" }, { "docid": "aa6dd2e44b992dd7f11c5d82f0b11556", "text": "It is well known that violent video games increase aggression, and that stress increases aggression. Many violent video games can be stressful because enemies are trying to kill players. The present study investigates whether violent games increase aggression by inducing stress in players. Stress was measured using cardiac coherence, defined as the synchronization of the rhythm of breathing to the rhythm of the heart. We predicted that cardiac coherence would mediate the link between exposure to violent video games and subsequent aggression. Specifically, we predicted that playing a violent video game would decrease cardiac coherence, and that cardiac coherence, in turn, would correlate negatively with aggression. Participants (N = 77) played a violent or nonviolent video game for 20 min. Cardiac coherence was measured before and during game play. After game play, participants had the opportunity to blast a confederate with loud noise through headphones during a reaction time task. The intensity and duration of noise blasts given to the confederate was used to measure aggression. As expected, violent video game players had lower cardiac coherence levels and higher aggression levels than did nonviolent game players. Cardiac coherence, in turn, was negatively related to aggression. This research offers another possible reason why violent games can increase aggression-by inducing stress. Cardiac coherence can be a useful tool to measure stress induced by violent video games. Cardiac coherence has several desirable methodological features as well: it is noninvasive, stable against environmental disturbances, relatively inexpensive, not subject to demand characteristics, and easy to use.", "title": "" }, { "docid": "1b647a09085a41e66f8c1e3031793fed", "text": "In this paper we apply distributional semantic information to document-level machine translation. We train monolingual and bilingual word vector models on large corpora and we evaluate them first in a cross-lingual lexical substitution task and then on the final translation task. For translation, we incorporate the semantic information in a statistical document-level decoder (Docent), by enforcing translation choices that are semantically similar to the context. As expected, the bilingual word vector models are more appropriate for the purpose of translation. The final document-level translator incorporating the semantic model outperforms the basic Docent (without semantics) and also performs slightly over a standard sentencelevel SMT system in terms of ULC (the average of a set of standard automatic evaluation metrics for MT). Finally, we also present some manual analysis of the translations of some concrete documents.", "title": "" }, { "docid": "2833cd652f82047b018dc6ddbe6e0705", "text": "Plants play an important role in Earth's ecology by providing sustenance, shelter and maintaining a healthy atmosphere. Some of these plants have important medicinal properties. Automatic recognition of plant leaf is a challenging problem in the area of computer vision. An efficient Ayurvedic plant leaf recognition system will beneficial to many sectors of society which include medicinal field botanic research etc. 
With the help of image processing and pattern recognition, we can easily recognize the leaf images. This paper gives a survey on different leaf recognition methods and classifications. Plant leaf classification is a technique where leaf is classified based on its different features.", "title": "" }, { "docid": "10862d64a8297336b4a15b1d8ca9946e", "text": "The paper proposes an analytical framework for comparing different business models for producing information goods and digital services. It is based on three dimensions that also refer to contrasted literature: the economics of matching, the economics of assembling and the economics of knowledge management. Our framework attempts to identify the principal trade-offs at the core of choices among alternative digital business models, and to compare them in terms of competitiveness and efficiency. It also highlights the role played by users in the production of information goods and competition with pure suppliers.", "title": "" }, { "docid": "c15369f923be7c8030cc8f2b1f858ced", "text": "An important goal of scientific data analysis is to understand the behavior of a system or process based on a sample of the system. In many instances it is possible to observe both input parameters and system outputs, and characterize the system as a high-dimensional function. Such data sets arise, for instance, in large numerical simulations, as energy landscapes in optimization problems, or in the analysis of image data relating to biological or medical parameters. This paper proposes an approach to analyze and visualizing such data sets. The proposed method combines topological and geometric techniques to provide interactive visualizations of discretely sampled high-dimensional scalar fields. The method relies on a segmentation of the parameter space using an approximate Morse-Smale complex on the cloud of point samples. For each crystal of the Morse-Smale complex, a regression of the system parameters with respect to the output yields a curve in the parameter space. The result is a simplified geometric representation of the Morse-Smale complex in the high dimensional input domain. Finally, the geometric representation is embedded in 2D, using dimension reduction, to provide a visualization platform. The geometric properties of the regression curves enable the visualization of additional information about each crystal such as local and global shape, width, length, and sampling densities. The method is illustrated on several synthetic examples of two dimensional functions. Two use cases, using data sets from the UCI machine learning repository, demonstrate the utility of the proposed approach on real data. Finally, in collaboration with domain experts the proposed method is applied to two scientific challenges. The analysis of parameters of climate simulations and their relationship to predicted global energy flux and the concentrations of chemical species in a combustion simulation and their integration with temperature.", "title": "" }, { "docid": "be1b9731df45408571e75d1add5dfe9c", "text": "We investigate a new commonsense inference task: given an event described in a short free-form text (“X drinks coffee in the morning”), a system reasons about the likely intents (“X wants to stay awake”) and reactions (“X feels alert”) of the event’s participants. To support this study, we construct a new crowdsourced corpus of 25,000 event phrases covering a diverse range of everyday events and situations. 
We report baseline performance on this task, demonstrating that neural encoder-decoder models can successfully compose embedding representations of previously unseen events and reason about the likely intents and reactions of the event participants. In addition, we demonstrate how commonsense inference on people’s intents and reactions can help unveil the implicit gender inequality prevalent in modern movie scripts.", "title": "" }, { "docid": "8cd62b12b4406db29b289a3e1bd5d05a", "text": "Humor generation is a very hard problem in the area of computational humor. In this paper, we present a joke generation model based on neural networks. The model can generate a short joke relevant to the topic that the user specifies. Inspired by the architecture of neural machine translation and neural image captioning, we use an encoder for representing user-provided topic information and an RNN decoder for joke generation. We trained the model by short jokes of Conan O’Brien with the help of POS Tagger. We evaluate the performance of our model by human ratings from five English speakers. In terms of the average score, our model outperforms a probabilistic model that puts words into slots in a fixed-structure sentence.", "title": "" }, { "docid": "92dbb257f6d087ce61f5c560c34bf46f", "text": "This study investigates eCommerce adoption in family run SMEs (small and medium sized enterprises). Specifically, the objectives of the study are twofold: (a) to examine environmental and organisational determinants of eCommerce adoption in the family business context; (b) to explore the moderating effect of business strategic orientation on the relationships between adoption determinants and adoption decision. A quantitative questionnaire survey was executed. The sampling frame was outlined based on the OneSource database and 88 companies were involved. Results of logistic regression analyses proffer support that ‘external pressure’ and ‘perceived benefits’ are predictors of eCommerce adoption. Moreover, the findings indicate that the strategic orientation of family businesses will function as a moderator in the adoption process. 2008 Elsevier B.V. All rights reserved.", "title": "" } ]
scidocsrr
bf988e327f2379a4bf7d85ebefe32640
Large scale image annotation: learning to rank with joint word-image embeddings
[ { "docid": "0784d5907a8e5f1775ad98a25b1b0b31", "text": "The Internet contains billions of images, freely available online. Methods for efficiently searching this incredibly rich resource are vital for a large number of applications. These include object recognition, computer graphics, personal photo collections, online image search tools. In this paper, our goal is to develop efficient image search and scene matching techniques that are not only fast, but also require very little memory, enabling their use on standard hardware or even on handheld devices. Our approach uses recently developed machine learning techniques to convert the Gist descriptor (a real valued vector that describes orientation energies at different scales and orientations within an image) to a compact binary code, with a few hundred bits per image. Using our scheme, it is possible to perform real-time searches with millions from the Internet using a single large PC and obtain recognition results comparable to the full descriptor. Using our codes on high quality labeled images from the LabelMe database gives surprisingly powerful recognition results using simple nearest neighbor techniques.", "title": "" } ]
[ { "docid": "cbde86d9b73371332a924392ae1f10d0", "text": "The difficulty to solve multiple objective combinatorial optimization problems with traditional techniques has urged researchers to look for alternative, better performing approaches for them. Recently, several algorithms have been proposed which are based on the Ant Colony Optimization metaheuristic. In this contribution, the existing algorithms of this kind are reviewed and experimentally tested in several instances of the bi-objective traveling salesman problem, comparing their performance with that of two well-known multi-objective genetic algorithms.", "title": "" }, { "docid": "ffea50948eab00d47f603d24bcfc1bfd", "text": "A statistical pattern-recognition technique was applied to the classification of musical instrument tones within a taxonomic hierarchy. Perceptually salient acoustic features— related to the physical properties of source excitation and resonance structure—were measured from the output of an auditory model (the log-lag correlogram) for 1023 isolated tones over the full pitch ranges of 15 orchestral instruments. The data set included examples from the string (bowed and plucked), woodwind (single, double, and air reed), and brass families. Using 70%/30% splits between training and test data, maximum a posteriori classifiers were constructed based on Gaussian models arrived at through Fisher multiplediscriminant analysis. The classifiers distinguished transient from continuant tones with approximately 99% correct performance. Instrument families were identified with approximately 90% performance, and individual instruments were identified with an overall success rate of approximately 70%. These preliminary analyses compare favorably with human performance on the same task and demonstrate the utility of the hierarchical approach to classification.", "title": "" }, { "docid": "53f28f66d99f5e706218447e226cf7cc", "text": "The Connectionist Inductive Learning and Logic Programming System, C-IL2P, integrates the symbolic and connectionist paradigms of Artificial Intelligence through neural networks that perform massively parallel Logic Programming and inductive learning from examples and background knowledge. This work presents an extension of C-IL2P that allows the implementation of Extended Logic Programs in Neural Networks. This extension makes C-IL2P applicable to problems where the background knowledge is represented in a Default Logic. As a case example, we have applied the system for fault diagnosis of a simplified power system generation plant, obtaining good preliminary results.", "title": "" }, { "docid": "8fa31615d2164e9146be35d046dd71cf", "text": "An empirical investigation of information retrieval (IR) using the MEDLINE 1 database was carried out to study user behaviour, performance and to investigate the reasons for sub-optimal searches. The experimental subjects were drawn from two groups of final year medical students who differed in their knowledge of the search system, i.e. novice and expert users. The subjects carried out four search tasks and their recall and precision performance was recorded. Data was captured on the search strategies used, duration and logs of submitted queries. Differences were found between the groups for the performance measure of recall in only one of the four experimental tasks. Overall performance was poor. Analysis of strategies, timing data and query logs showed that there were many different causes for search failure or success. 
Poor searchers either gave up too quickly, employed few search terms, used only simple queries or used the wrong search terms. Good searchers persisted longer, used a larger, richer set of terms, constructed more complex queries and were more diligent in evaluating the retrieved results. However, individual performances were not correlated with all of these factors. Poor performers frequently exhibited several factors of good searcher behaviour and failed for just one reason. Overall end-user searching behaviour is complex and it appears that just one factor can cause poor performance, whereas good performance can result from sub-optimal strategies that compensate for some difficulties. The implications of the results for the design of IR interfaces are discussed.", "title": "" }, { "docid": "08804b3859d70c6212bba05c7e792f9a", "text": "Both linear mixed models (LMMs) and sparse regression models are widely used in genetics applications, including, recently, polygenic modeling in genome-wide association studies. These two approaches make very different assumptions, so are expected to perform well in different situations. However, in practice, for a given dataset one typically does not know which assumptions will be more accurate. Motivated by this, we consider a hybrid of the two, which we refer to as a \"Bayesian sparse linear mixed model\" (BSLMM) that includes both these models as special cases. We address several key computational and statistical issues that arise when applying BSLMM, including appropriate prior specification for the hyper-parameters and a novel Markov chain Monte Carlo algorithm for posterior inference. We apply BSLMM and compare it with other methods for two polygenic modeling applications: estimating the proportion of variance in phenotypes explained (PVE) by available genotypes, and phenotype (or breeding value) prediction. For PVE estimation, we demonstrate that BSLMM combines the advantages of both standard LMMs and sparse regression modeling. For phenotype prediction it considerably outperforms either of the other two methods, as well as several other large-scale regression methods previously suggested for this problem. Software implementing our method is freely available from http://stephenslab.uchicago.edu/software.html.", "title": "" }, { "docid": "68e3c37660f862e6a4af132ad1a9fa52", "text": "Under the requirements of reducing emissions, air pollution and achieving higher fuel economy, companies are developing electric, hybrid electric, and plug-in hybrid electric vehicles. However, the high cost of these technologies and the low autonomy are very restrictive. In this paper a new concept of fast on-board battery charger for Electric Vehicles (EVs) is proposed which uses the electric motor like filter and the same converter for charging and traction mode.", "title": "" }, { "docid": "d568194d6b856243056c072c96c76115", "text": "OBJECTIVE\nTo develop an evidence-based guideline to help clinicians make decisions about when and how to safely taper and stop antipsychotics; to focus on the highest level of evidence available and seek input from primary care professionals in the guideline development, review, and endorsement processes.\n\n\nMETHODS\nThe overall team comprised 9 clinicians (1 family physician, 1 family physician specializing in long-term care, 1 geriatric psychiatrist, 2 geriatricians, 4 pharmacists) and a methodologist; members disclosed conflicts of interest. 
For guideline development, a systematic process was used, including the GRADE (Grading of Recommendations Assessment, Development and Evaluation) approach. Evidence was generated from a Cochrane systematic review of antipsychotic deprescribing trials for the behavioural and psychological symptoms of dementia, and a systematic review was conducted to assess the evidence behind the benefits of using antipsychotics for insomnia. A review of reviews of the harms of continued antipsychotic use was performed, as well as narrative syntheses of patient preferences and resource implications. This evidence and GRADE quality-of-evidence ratings were used to generate recommendations. The team refined guideline content and recommendation wording through consensus and synthesized clinical considerations to address common front-line clinician questions. The draft guideline was distributed to clinicians and stakeholders for review and revisions were made at each stage.\n\n\nRECOMMENDATIONS\nWe recommend deprescribing antipsychotics for adults with behavioural and psychological symptoms of dementia treated for at least 3 months (symptoms stabilized or no response to an adequate trial) and for adults with primary insomnia treated for any duration or secondary insomnia in which underlying comorbidities are managed. A decision-support algorithm was developed to accompany the guideline.\n\n\nCONCLUSION\nAntipsychotics are associated with harms and can be safely tapered. Patients and caregivers might be more amenable to deprescribing if they understand the rationale (potential for harm), are involved in developing the tapering plan, and are offered behavioural advice or management. This guideline provides recommendations for making decisions about when and how to reduce the dose of or stop antipsychotics. Recommendations are meant to assist with, not dictate, decision making in conjunction with patients and families.", "title": "" }, { "docid": "4daad9b24e477160999f350043125116", "text": "Recent research studied the problem of publishing microdata without revealing sensitive information, leading to the privacy preserving paradigms of k-anonymity and `-diversity. k-anonymity protects against the identification of an individual’s record. `-diversity, in addition, safeguards against the association of an individual with specific sensitive information. However, existing approaches suffer from at least one of the following drawbacks: (i) The information loss metrics are counter-intuitive and fail to capture data inaccuracies inflicted for the sake of privacy. (ii) `-diversity is solved by techniques developed for the simpler k-anonymity problem, which introduces unnecessary inaccuracies. (iii) The anonymization process is inefficient in terms of computation and I/O cost. In this paper we propose a framework for efficient privacy preservation that addresses these deficiencies. First, we focus on one-dimensional (i.e., single attribute) quasiidentifiers, and study the properties of optimal solutions for k-anonymity and `-diversity, based on meaningful information loss metrics. Guided by these properties, we develop efficient heuristics to solve the one-dimensional problems in linear time. Finally, we generalize our solutions to multi-dimensional quasi-identifiers using space-mapping techniques. 
Extensive experimental evaluation shows that our techniques clearly outperform the state-of-the-art, in terms of execution time and information loss.", "title": "" }, { "docid": "b6f4a2122f8fe1bc7cb4e59ad7cf8017", "text": "The use of biomass to provide energy has been fundamental to the development of civilisation. In recent times pressures on the global environment have led to calls for an increased use of renewable energy sources, in lieu of fossil fuels. Biomass is one potential source of renewable energy and the conversion of plant material into a suitable form of energy, usually electricity or as a fuel for an internal combustion engine, can be achieved using a number of different routes, each with specific pros and cons. A brief review of the main conversion processes is presented, with specific regard to the production of a fuel suitable for spark ignition gas engines.", "title": "" }, { "docid": "1ebe3d66ee2cde36d2c384996765a643", "text": "In this work we investigate the problem of using public consensus networks – exemplified by systems like Ethereum and Bitcoin – to perform cryptographic functionalities that involve the manipulation of secret data, such as cryptographic access control. We consider a hybrid paradigm in which a secure client-side functionality manages cryptographic secrets, while an online consensus network performs public computation. Using this approach, we explore both the constructive and potentially destructive implications of such systems. We first show that this combination allows for the construction of stateful interactive functionalities (including general computation) from a stateless client-side functionality, which can be implemented using inexpensive trusted hardware or even purely cryptographic functionalities such as Witness Encryption. We then describe a number of practical applications that can be achieved today. These include rate limited mandatory logging; strong encrypted backups from weak passwords; enforcing fairness in multi-party computation; and destructive applications such as autonomous ransomware, which allows for payments without an online party.", "title": "" }, { "docid": "a5bfeab5278eb5bbe45faac0535f0b81", "text": "In modern computer systems, system event logs have always been the primary source for checking system status. As computer systems become more and more complex, the interaction between software and hardware increases frequently. The components will generate enormous log information, including running reports and fault information. The sheer quantity of data is a great challenge for analysis relying on the manual method. In this paper, we implement a management and analysis system of log information, which can assist system administrators to understand the real-time status of the entire system, classify logs into different fault types, and determine the root cause of the faults. In addition, we improve the existing fault correlation analysis method based on the results of system log classification. We apply the system in a cloud computing environment for evaluation. The results show that our system can classify fault logs automatically and effectively. With the proposed system, administrators can easily detect the root cause of faults.", "title": "" }, { "docid": "9032066fb608f190eca3fe35c817f6f2", "text": "Superposition Benchmark is a next-generation GPU benchmark that continues the line of UNIGINE benchmarks, famous for outstanding and innovative 3D graphics. 
The UNIGINE team developed a brand-new lighting effect that makes an interactive real-time-rendered environment look photorealistic: screen-space ray tracing global illumination (SSRTGI), a technology that brings rendered ray tracing closer to real physics. Ordinary screen-space effects, such as SSAO and SSGI, do not treat objects as obstacles for light rays, so they are not able to provide realistic lighting simulation. They produce only a rough imitation, while SSRTGI provides real 180-degree ray tracing for each pixel in the scene with a fixed number of steps per ray to define occlusions. This ray-tracing technique provides ambient occlusion with more realistic shadows between the objects. The use of bent normals helps to reduce the noise and smooth borders of shadows, while global illumination recreates light reflections from surfaces. Working together, these effects provide incredibly realistic lighting and shadow-play simulation for real-time interactive rendering.", "title": "" }, { "docid": "6b754a8f97e8150118afdb0212af3d1d", "text": "Association Rule Mining is a data mining technique which is well suited for mining Marketbasket dataset. The research described in the current paper came out during the early days of data mining research and was also meant to demonstrate the feasibility of fast scalable data mining algorithms. Although a few algorithms for mining association rules existed at the time, the Apriori and Apriori TID algorithms greatly reduced the overhead costs associated with generating association rules.", "title": "" }, { "docid": "7d024e9ccf20923ade005970ddef1bcc", "text": "Mamdani Fuzzy Model is an important technique in Computational Intelligence (CI) study. This paper presents an implementation of a supervised learning method based on membership function training in the context of Mamdani fuzzy models. Specifically, auto zoom function of a digital camera is modelled using Mamdani technique. The performance of control method is verified through a series of simulation and numerical results are provided as illustrations. Keywords-component: Mamdani fuzzy model, fuzzy logic, auto zoom, digital camera", "title": "" }, { "docid": "cd8cad6445b081e020d90eb488838833", "text": "Heavy metal pollution has become one of the most serious environmental problems today. The treatment of heavy metals is of special concern due to their recalcitrance and persistence in the environment. In recent years, various methods for heavy metal removal from wastewater have been extensively studied. This paper reviews the current methods that have been used to treat heavy metal wastewater and evaluates these techniques. These technologies include chemical precipitation, ion-exchange, adsorption, membrane filtration, coagulation-flocculation, flotation and electrochemical methods. About 185 published studies (1988-2010) are reviewed in this paper. It is evident from the literature survey articles that ion-exchange, adsorption and membrane filtration are the most frequently studied for the treatment of heavy metal wastewater.", "title": "" }, { "docid": "52bce24f8ec738f9b9dfd472acd6b101", "text": "Human action recognition in videos is a challenging problem with wide applications. State-of-the-art approaches often adopt the popular bag-of-features representation based on isolated local patches or temporal patch trajectories, where motion patterns like object relationships are mostly discarded. 
This paper proposes a simple representation specifically aimed at the modeling of such motion relationships. We adopt global and local reference points to characterize motion information, so that the final representation can be robust to camera movement. Our approach operates on top of visual codewords derived from local patch trajectories, and therefore does not require accurate foreground-background separation, which is typically a necessary step to model object relationships. Through an extensive experimental evaluation, we show that the proposed representation offers very competitive performance on challenging benchmark datasets, and combining it with the bag-of-features representation leads to substantial improvement. On Hollywood2, Olympic Sports, and HMDB51 datasets, we obtain 59.5%, 80.6% and 40.7% respectively, which are the best reported results to date.", "title": "" }, { "docid": "185a7af4cc822a86d7fccfcf2c06c06c", "text": "Fingerprint identification has been a great challenge due to its complex search of database. This paper proposes an efficient fingerprint search algorithm based on database clustering, which narrows down the search space of fine matching. Fingerprint is non-uniformly partitioned by a circular tessellation to compute a multi-scale orientation field as the main search feature. The average ridge distance is employed as an auxiliary feature. A modified K-means clustering technique is proposed to partition the orientation feature space into clusters. Based on the database clustering, a hierarchical query processing is proposed to facilitate an efficient fingerprint search, which not only greatly speeds up the search process but also improves the retrieval accuracy. The experimental results show the effectiveness and superiority of the proposed fingerprint search algorithm. 2006 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "b16992ec2416b420b2115037c78cfd4b", "text": "Dictionary learning algorithms or supervised deep convolution networks have considerably improved the efficiency of predefined feature representations such as SIFT. We introduce a deep scattering convolution network, with complex wavelet filters over spatial and angular variables. This representation brings an important improvement to results previously obtained with predefined features over object image databases such as Caltech and CIFAR. The resulting accuracy is comparable to results obtained with unsupervised deep learning and dictionary based representations. This shows that refining image representations by using geometric priors is a promising direction to improve image classification and its understanding.", "title": "" }, { "docid": "3647b5e0185c0120500fff8061265abd", "text": "Human and machine visual sensing is enhanced when surface properties of objects in scenes, including color, can be reliably estimated despite changes in the ambient lighting conditions. We describe a computational method for estimating surface spectral reflectance when the spectral power distribution of the ambient light is not known.", "title": "" } ]
scidocsrr
afa3c5ad6d31c9d1b326d9f03f453465
The biological effects of childhood trauma.
[ { "docid": "a82aac21da1e5c10b2118353fde4b510", "text": "OBJECTIVE\nReturning children to their biological families after placement in foster care (ie, reunification) has been prioritized with legislation. Comprehensive studies of child behavioral health functioning after reunification, however, have not been conducted. This study examined outcomes for youth who were reunified after placement in foster care as compared with youth who did not reunify.\n\n\nDESIGN\nProspective cohort.\n\n\nSETTING\nChildren who entered foster care in San Diego, California, and who remained in foster care for at least 5 months. Participants. A cohort of 149 ethnically diverse youth, 7 to 12 years old, who entered foster care between May 1990, and October 1991. Seventy-five percent of those interviewed at Time 1 were interviewed at Time 2 (6 years later).\n\n\nOUTCOME MEASURES\n1) Risk behaviors: delinquent, sexual, self-destructive, substance use, and total risk behaviors; 2) Life-course outcomes: pregnancy, tickets/arrests, suspensions, dropping out of school, and grades; 3) Current symptomatology: externalizing, internalizing, total behavior problems, and total competence.\n\n\nRESULTS\nCompared with youth who were not reunified, reunified youth showed more self-destructive behavior (0.15 vs -0.11), substance use (0.16 vs -0.11), and total risk behavior problem standardized scores (0.12 vs -0.09). Reunified youth were more likely to have received a ticket or have been arrested (49.2% vs 30.2%), to have dropped out of school (20.6% vs 9.4%), and to have received lower grades (6.5 vs 7.4). Reunified youth reported more current problems in internalizing behaviors (56.6 vs 53.0), and total behavior problems (59.5 vs 55.7), and lower total competence (41.1 vs 45.0). There were no statistically significant differences between the groups on delinquency, sexual behaviors, pregnancy, suspensions, or externalizing behaviors. Reunification status was a significant predictor of negative outcomes in 8 of the 9 regression equations after controlling for Time 1 behavior problems, age, and gender.\n\n\nCONCLUSIONS\nThese findings suggest that youth who reunify with their biological families after placement in foster care have more negative outcomes than youth who do not reunify. The implications of these findings for policy and practice are discussed.", "title": "" } ]
[ { "docid": "df5cf5cd42e216ef723a6e2295a92f02", "text": "This integrative literature review assesses the relationship between hospital nurses' work environment characteristics and patient safety outcomes and recommends directions for future research based on examination of the literature. Using an electronic search of five databases, 18 studies published in English between 1999 and 2016 were identified for review. All but one study used a cross-sectional design, and only four used a conceptual/theoretical framework to guide the research. No definition of work environment was provided in most studies. Differing variables and instruments were used to measure patient outcomes, and findings regarding the effects of work environment on patient outcomes were inconsistent. To clarify the relationship between nurses' work environment characteristics and patient safety outcomes, researchers should consider using a longitudinal study design, using a theoretical foundation, and providing clear operational definitions of concepts. Moreover, given the inconsistent findings of previous studies, they should choose their measurement methodologies with care.", "title": "" }, { "docid": "8d944292adb1ac527601619f5343fd8f", "text": "This paper addresses the control of the standard twin rotor multi-input-multi-output (MIMO) control system problem. First, nonlinear dynamic model of the system was derived using basic laws of physics. While it was possible to linearize some of the plant nonlinearities, some of them were not, such as squared input. In the conventional approach, this problem has been solved by linearizing the possible nonlinearities, while neglecting the others, at the cost of reduced performance. Most importantly, the latter partially linearized approach demands linearized plant model at every operating point, which makes the controller implementation complicated when considering a wide dynamic operating range. In order to overcome limitations of the partially linearized approach, we derive a nonlinear controller. In that we use the tracking error dynamics to arrive at a compromise between the tracking performances of the pitch and yaw axes control and the smoothness of the actuator input. Furthermore, one of the key features of twin rotor MIMO system is its dynamic cross coupling. Therefore, in order to get rid of the cross coupling a dynamic de-coupler was used. First the plant and the controller was simulated in MATLAB/Simulink™ environment. Then, it was implemented in the hardware to control the pitch and yaw of an actual Twin Rotor MIMO system. The performance assessed using Integrated Absolute Error (IAE) and Accumulated Squared Input (ASI) metrics which represents the tracking performance and the energy input respectively. The performance matrices of the controller were compared with a PID controller tuned to the partially linearized plant model and the results show approximately more than 25% improvement both in terms of IAE and ASI.", "title": "" }, { "docid": "c3fcc103374906a1ba21658c5add67fe", "text": "Behavioural scoring models are generally used to estimate the probability that a customer of a financial institution who owns a credit product will default on this product in a fixed time horizon. However, one single customer usually purchases many credit products from an institution while behavioural scoring models generally treat each of these products independently. 
In order to make credit risk management easier and more efficient, it is interesting to develop customer default scoring models. These models estimate the probability that a customer of a certain financial institution will have credit issues with at least one product in a fixed time horizon. In this study, three strategies to develop customer default scoring models are described. One of the strategies is regularly utilized by financial institutions and the other two will be proposed herein. The performance of these strategies is compared by means of an actual data bank supplied by a financial institution and a Monte Carlo simulation study. Journal of the Operational Research Society advance online publication, 20 April 2016; doi:10.1057/jors.2016.23", "title": "" }, { "docid": "a72932cd98f425eafc19b9786da4319d", "text": "Recommender systems are changing from novelties used by a few E-commerce sites, to serious business tools that are re-shaping the world of E-commerce. Many of the largest commerce Web sites are already using recommender systems to help their customers find products to purchase. A recommender system learns from a customer and recommends products that she will find most valuable from among the available products. In this paper we present an explanation of how recommender systems help E-commerce sites increase sales, and analyze six sites that use recommender systems including several sites that use more than one recommender system. Based on the examples, we create a taxonomy of recommender systems, including the interfaces they present to customers, the technologies used to create the recommendations, and the inputs they need from customers. We conclude with ideas for new applications of recommender systems to E-commerce.", "title": "" }, { "docid": "885bf946dbbfc462cd066794fe486da3", "text": "Efficient implementation of block cipher is important on the way to achieving high efficiency with good understand ability. Numerous number of block cipher including Advance Encryption Standard have been implemented using different platform. However the understanding of the AES algorithm step by step is very complicated. This paper presents the implementation of AES algorithm and explains Avalanche effect with the help of Avalanche test result. For this purpose we use Xilinx ISE 9.1i platform in Algorithm development and ModelSim SE 6.3f platform for results confirmation and computation.", "title": "" }, { "docid": "146b5beb0c82f230a6896599269c5b81", "text": "The link between the built environment and human behavior has long been of interest to the field of urban planning, but direct assessments of the links between the built environment and physical activity as it influences personal health are still rare in the field. Yet the concepts, theories, and methods used by urban planners provide a foundation for an emerging body of research on the relationship between the built environment and physical activity. Recent research efforts in urban planning have focused on the idea that land use and design policies can be used to increase transit use as well as walking and bicycling. The development of appropriate measures for the built environment and for travel behavior is an essential element of this research. The link between the built environment and travel behavior is then made using theoretical frameworks borrowed from economics, and in particular, the concept of travel as a derived demand. 
The available evidence lends itself to the argument that a combination of urban design, land use patterns, and transportation systems that promotes walking and bicycling will help create active, healthier, and more livable communities. To provide more conclusive evidence, however, researchers must address the following issues: An alternative to the derived-demand framework must be developed for walking, measures of the built environment must be refined, and more-complete data on walking must be developed. In addition, detailed data on the built environment must be spatially matched to detailed data on travel behavior.", "title": "" }, { "docid": "03e267aeeef5c59aab348775d264afce", "text": "Visual relations, such as person ride bike and bike next to car, offer a comprehensive scene understanding of an image, and have already shown their great utility in connecting computer vision and natural language. However, due to the challenging combinatorial complexity of modeling subject-predicate-object relation triplets, very little work has been done to localize and predict visual relations. Inspired by the recent advances in relational representation learning of knowledge bases and convolutional object detection networks, we propose a Visual Translation Embedding network (VTransE) for visual relation detection. VTransE places objects in a low-dimensional relation space where a relation can be modeled as a simple vector translation, i.e., subject + predicate ≈ object. We propose a novel feature extraction layer that enables object-relation knowledge transfer in a fully-convolutional fashion that supports training and inference in a single forward/backward pass. To the best of our knowledge, VTransE is the first end-to-end relation detection network. We demonstrate the effectiveness of VTransE over other state-of-the-art methods on two large-scale datasets: Visual Relationship and Visual Genome. Note that even though VTransE is a purely visual model, it is still competitive to the Lu's multi-modal model with language priors [27].", "title": "" }, { "docid": "250c1a5ac98dc6556bc62cc05555499d", "text": "Smartphones are programmable and equipped with a set of cheap but powerful embedded sensors, such as accelerometer, digital compass, gyroscope, GPS, microphone, and camera. These sensors can collectively monitor a diverse range of human activities and the surrounding environment. Crowdsensing is a new paradigm which takes advantage of the pervasive smartphones to sense, collect, and analyze data beyond the scale of what was previously possible. With the crowdsensing system, a crowdsourcer can recruit smartphone users to provide sensing service. Existing crowdsensing applications and systems lack good incentive mechanisms that can attract more user participation. To address this issue, we design incentive mechanisms for crowdsensing. We consider two system models: the crowdsourcer-centric model where the crowdsourcer provides a reward shared by participating users, and the user-centric model where users have more control over the payment they will receive. For the crowdsourcer-centric model, we design an incentive mechanism using a Stackelberg game, where the crowdsourcer is the leader while the users are the followers. We show how to compute the unique Stackelberg Equilibrium, at which the utility of the crowdsourcer is maximized, and none of the users can improve its utility by unilaterally deviating from its current strategy.
For the user-centric model, we design an auction-based incentive mechanism, which is computationally efficient, individually rational, profitable, and truthful. Through extensive simulations, we evaluate the performance and validate the theoretical properties of our incentive mechanisms.", "title": "" }, { "docid": "f6290309397db11673e02933641476f2", "text": "In this paper, a comprehensive survey of the pioneer as well as the state of-the-art localization and tracking methods in the wireless sensor networks is presented. Localization is mostly applicable for the static sensor nodes, whereas, tracking for the mobile sensor nodes. The localization algorithms are broadly classified as range-based and range-free methods. The estimated range (distance) between an anchor and an unknown node is highly erroneous in an indoor scenario. This limitation can be handled up to a large extent by employing a large number of existing access points (APs) in the range free localization method. Recent works emphasize on the use multisensor data like magnetic, inertial, compass, gyroscope, ultrasound, infrared, visual and/or odometer to improve the localization accuracy further. Additionally, tracking method does the future prediction of location based on the past location history. A smooth trajectory is noted even if some of the received measurements are erroneous. Real experimental set-ups such as National Instruments (NI) wireless sensor nodes, Crossbow motes and hand-held devices for carrying out the localization and tracking are also highlighted herein. Keywords—Wireless Sensor Networks, Localization, Tracking", "title": "" }, { "docid": "e08bc715d679ba0442883b4b0e481998", "text": "Rheology, as a branch of physics, studies the deformation and flow of matter in response to an applied stress or strain. According to the materials’ behaviour, they can be classified as Newtonian or non-Newtonian (Steffe, 1996; Schramm, 2004). The most of the foodstuffs exhibit properties of non-Newtonian viscoelastic systems (Abang Zaidel et al., 2010). Among them, the dough can be considered as the most unique system from the point of material science. It is viscoelastic system which exhibits shear-thinning and thixotropic behaviour (Weipert, 1990). This behaviour is the consequence of dough complex structure in which starch granules (75-80%) are surrounded by three-dimensional protein (20-25%) network (Bloksma, 1990, as cited in Weipert, 2006). Wheat proteins are consisted of gluten proteins (80-85% of total wheat protein) which comprise of prolamins (in wheat gliadins) and glutelins (in wheat glutenins) and non gluten proteins (15-20% of the total wheat proteins) such as albumins and globulins (Veraverbeke & Delcour, 2002). Gluten complex is a viscoelastic protein responsible for dough structure formation. Among the cereal technologists, rheology is widely recognized as a valuable tool in quality assessment of flour. Hence, in the cereal scientific community, rheological measurements are generally employed throughout the whole processing chain in order to monitor the mechanical properties, molecular structure and composition of the material, to imitate materials’ behaviour during processing and to anticipate the quality of the final product (Dobraszczyk & Morgenstern, 2003). Rheology is particularly important technique in revealing the influence of flour constituents and additives on dough behaviour during breadmaking. 
There are many test methods available to measure rheological properties, which are commonly divided into empirical (descriptive, imitative) and fundamental (basic) (Scott Blair, 1958 as cited in Weipert, 1990). Although being criticized due to their shortcomings concerning inflexibility in defining the level of deforming force, usage of strong deformation forces, interpretation of results in relative non-SI units, large sample requirements and its impossibility to define rheological parameters such as stress, strain, modulus or viscosity (Weipert, 1990; Dobraszczyk & Morgenstern, 2003), empirical rheological measurements are still indispensable in the cereal quality laboratories. According to the empirical rheological parameters it is possible to determine the optimal flour quality for a particular purpose. The empirical techniques used for dough quality", "title": "" }, { "docid": "df610551aec503acd1a31fb519fdeabe", "text": "A small form factor, 79 GHz, MIMO radar sensor with 2D angle of arrival estimation capabilities was designed for automotive applications. It offers a 0.05 m distance resolution required to make small minimum distance measurements. The radar dimensions are 42×44×20 mm3 enabling installation in novel side locations. This aspect, combined with a wide field of view, creates a coverage that compliments the near range coverage gaps of existing long and medium range radars. Therefore, this radar supports novel radar applications such as parking aid and can be used to create a 360 degrees safety cocoon around the car.", "title": "" }, { "docid": "5b07bc318cb0f5dd7424cdcc59290d31", "text": "The current practice used in the design of physical interactive products (such as handheld devices), often suffers from a divide between exploration of form and exploration of interactivity. This can be attributed, in part, to the fact that working prototypes are typically expensive, take a long time to manufacture, and require specialized skills and tools not commonly available in design studios.We have designed a prototyping tool that, we believe, can significantly reduce this divide. The tool allows designers to rapidly create functioning, interactive, physical prototypes early in the design process using a collection of wireless input components (buttons, sliders, etc.) and a sketch of form. The input components communicate with Macromedia Director to enable interactivity.We believe that this tool can improve the design practice by: a) Improving the designer's ability to explore both the form and interactivity of the product early in the design process, b) Improving the designer's ability to detect problems that emerge from the combination of the form and the interactivity, c) Improving users' ability to communicate their ideas, needs, frustrations and desires, and d) Improving the client's understanding of the proposed design, resulting in greater involvement and support for the design.", "title": "" }, { "docid": "5809cfa325b79dacd952ec23d3631dd8", "text": "Slif uses a combination of text-mining and image processing to extract information from figures in the biomedical literature. It also uses innovative extensions to traditional latent topic modeling to provide new ways to traverse the literature. Slif provides a publicly available searchable database (http://slif.cbi.cmu.edu). Slif originally focused on fluorescence microscopy images. We have now extended it to classify panels into more image types. 
We also improved the classification into subcellular classes by building a more representative training set. To get the most out of the human labeling effort, we used active learning to select images to label. We developed models that take into account the structure of the document (with panels inside figures inside papers) and the multi-modality of the information (free and annotated text, images, information from external databases). This has allowed us to provide new ways to navigate a large collection of documents.", "title": "" }, { "docid": "46fe86de189eba0df238cdb65ee4fe2a", "text": "The linkage of ImageNet WordNet synsets to Wikidata items will leverage deep learning algorithm with access to a rich multilingual knowledge graph. Here I will describe our ongoing efforts in linking the two resources and issues faced in matching the Wikidata and WordNet knowledge graphs. I show an example on how the linkage can be used in a deep learning setting with real-time image classification and labeling in a non-English language and discuss what opportunities lies ahead.", "title": "" }, { "docid": "fb518b8c43a0359a41da47c7b8717c96", "text": "Bayesian optimization techniques have been successfully applied to robotics, planning, sensor placement, recommendation, advertising, intelligent user interfaces and automatic algorithm configuration. Despite these successes, the approach is restricted to problems of moderate dimension, and several workshops on Bayesian optimization have identified its scaling to high dimensions as one of the holy grails of the field. In this paper, we introduce a novel random embedding idea to attack this problem. The resulting Random EMbedding Bayesian Optimization (REMBO) algorithm is very simple and applies to domains with both categorical and continuous variables. The experiments demonstrate that REMBO can effectively solve high-dimensional problems, including automatic parameter configuration of a popular mixed integer linear programming solver.", "title": "" }, { "docid": "4080a3a6d4272e44541a7082a311cacb", "text": "Cyberbullying is a repeated act that harasses, humiliates, threatens, or hassles other people through electronic devices and online social networking websites. Cyberbullying through the internet is more dangerous than traditional bullying, because it can potentially amplify the humiliation to an unlimited online audience. According to UNICEF and a survey by the Indonesian Ministry of Communication and Information, 58% of 435 adolescents do not understand about cyberbullying. Some of them might even have been the bullies, but since they did not understand about cyberbullying they could not recognise the negative effects of their bullying. The bullies may not recognise the harm of their actions, because they do not see immediate responses from their victims. Our study aimed to detect cyberbullying actors based on texts and the credibility analysis of users and notify them about the harm of cyberbullying. We collected data from Twitter. Since the data were unlabelled, we built a web-based labelling tool to classify tweets into cyberbullying and non-cyberbullying tweets. We obtained 301 cyberbullying tweets, 399 non-cyberbullying tweets, 2,053 negative words and 129 swear words from the tool. Afterwards, we applied SVM and KNN to learn about and detect cyberbullying texts. The results show that SVM results in the highest f1-score, 67%. 
We also measured the credibility analysis of users and found 257 Normal Users, 45 Harmful Bullying Actors, 53 Bullying Actors and 6 Prospective Bullying Actors.", "title": "" }, { "docid": "2fb3e787ee9a4afac71292151965ec5c", "text": "We propose the 3dSOBS+ algorithm, a newly designed approach for moving object detection based on a neural background model automatically generated by a self-organizing method. The algorithm is able to accurately handle scenes containing moving backgrounds, gradual illumination variations, and shadows cast by moving objects, and is robust against false detections for different types of videos taken with stationary cameras. Experimental results and comparisons conducted on the Background Models Challenge benchmark dataset demonstrate the improvements achieved by the proposed algorithm, that compares well with the state-of-the-art methods. 2013 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "d569902303b93274baf89527e666adc0", "text": "We present a novel sparse representation based approach for the restoration of clipped audio signals. In the proposed approach, the clipped signal is decomposed into overlapping frames and the declipping problem is formulated as an inverse problem, per audio frame. This problem is further solved by a constrained matching pursuit algorithm, that exploits the sign pattern of the clipped samples and their maximal absolute value. Performance evaluation with a collection of music and speech signals demonstrate superior results compared to existing algorithms, over a wide range of clipping levels.", "title": "" }, { "docid": "68810ad35e71ea7d080e7433e227e40e", "text": "Mobile devices, ubiquitous in modern lifestyle, embody and provide convenient access to our digital lives. Being small and mobile, they are easily lost or stole, therefore require strong authentication to mitigate the risk of unauthorized access. Common knowledge-based mechanism like PIN or pattern, however, fail to scale with the high frequency but short duration of device interactions and ever increasing number of mobile devices carried simultaneously. To overcome these limitations, we present CORMORANT, an extensible framework for risk-aware multi-modal biometric authentication across multiple mobile devices that offers increased security and requires less user interaction.", "title": "" } ]
scidocsrr
1b40446108166d658a623143a9988cf1
The kaleidoscope of effective gamification: deconstructing gamification in business applications
[ { "docid": "fd6b7a0e915a32fe172a757b5a08e5ef", "text": "More Americans now play video games than go to the movies (NPD Group, 2009). The meteoric rise in popularity of video games highlights the need for research approaches that can deepen our scientific understanding of video game engagement. This article advances a theory-based motivational model for examining and evaluating the ways by which video game engagement shapes psychological processes and influences well-being. Rooted in self-determination theory (Deci & Ryan, 2000; Ryan & Deci, 2000a), our approach suggests that both the appeal and well-being effects of video games are based in their potential to satisfy basic psychological needs for competence, autonomy, and relatedness. We review recent empirical evidence applying this perspective to a number of topics including need satisfaction in games and short-term well-being, the motivational appeal of violent game content, motivational sources of postplay aggression, the antecedents and consequences of disordered patterns of game engagement, and the determinants and effects of immersion. Implications of this model for the future study of game motivation and the use of video games in interventions are discussed.", "title": "" }, { "docid": "4f6a6f633e512a33fc0b396765adcdf0", "text": "Interactive systems often require calibration to ensure that input and output are optimally configured. Without calibration, user performance can degrade (e.g., if an input device is not adjusted for the user's abilities), errors can increase (e.g., if color spaces are not matched), and some interactions may not be possible (e.g., use of an eye tracker). The value of calibration is often lost, however, because many calibration processes are tedious and unenjoyable, and many users avoid them altogether. To address this problem, we propose calibration games that gather calibration data in an engaging and entertaining manner. To facilitate the creation of calibration games, we present design guidelines that map common types of calibration to core tasks, and then to well-known game mechanics. To evaluate the approach, we developed three calibration games and compared them to standard procedures. Users found the game versions significantly more enjoyable than regular calibration procedures, without compromising the quality of the data. Calibration games are a novel way to motivate users to carry out calibrations, thereby improving the performance and accuracy of many human-computer systems.", "title": "" }, { "docid": "a07a7aec933bb6bde818cd97c639a218", "text": "This paper presents a framework for evaluating and designing game design patterns commonly called as “achievements”. The results are based on empirical studies of a variety of popular achievement systems. The results, along with the framework for analyzing and designing achievements, present two definitions of game achievements. From the perspective of the achievement system, an achievement appears as a challenge consisting of a signifying element, rewards and completion logics whose fulfilment conditions are defined through events in other systems (usually games). From the perspective of a single game, an achievement appears as an optional challenge provided by a meta-game that is independent of a single game session and yields possible reward(s).", "title": "" }, { "docid": "f1c00253a57236ead67b013e7ce94a5e", "text": "A meta-analysis of 128 studies examined the effects of extrinsic rewards on intrinsic motivation. 
As predicted, engagement-contingent, completion-contingent, and performance-contingent rewards significantly undermined free-choice intrinsic motivation (d = -0.40, -0.36, and -0.28, respectively), as did all rewards, all tangible rewards, and all expected rewards. Engagement-contingent and completion-contingent rewards also significantly undermined self-reported interest (d = -0.15, and -0.17), as did all tangible rewards and all expected rewards. Positive feedback enhanced both free-choice behavior (d = 0.33) and self-reported interest (d = 0.31). Tangible rewards tended to be more detrimental for children than college students, and verbal rewards tended to be less enhancing for children than college students. The authors review 4 previous meta-analyses of this literature and detail how this study's methods, analyses, and results differed from the previous ones.", "title": "" } ]
[ { "docid": "cd1fd8340276cc5aab392a7e5136056e", "text": "We propose a novel two-step mining and optimization framework for inferring the root cause of anomalies that appear in road traffic data. We model road traffic as a time-dependent flow on a network formed by partitioning a city into regions bounded by major roads. In the first step we identify link anomalies based on their deviation from their historical traffic profile. However, link anomalies on their own shed very little light on what caused them to be anomalous. In the second step we take a generative approach by modeling the flow in a network in terms of the origin-destination (OD) matrix which physically relates the latent flow between origin and destination and the observable flow on the links. The key insight is that instead of using all of link traffic as the observable vector we only use the link anomaly vector. By solving an L1 inverse problem we infer the routes (the origin-destination pairs) which gave rise to the link anomalies. Experiments on a very large GPS data set consisting on nearly eight hundred million data points demonstrate that we can discover routes which can clearly explain the appearance of link anomalies. The use of optimization techniques to explain observable anomalies in a generative fashion is, to the best of our knowledge, entirely novel.", "title": "" }, { "docid": "63b73a09437ce848426847f17ce9703d", "text": "A main distinguishing feature of a wireless network compared with a wired network is its broadcast nature, in which the signal transmitted by a node may reach several other nodes, and a node may receive signals from several other nodes simultaneously. Rather than a blessing, this feature is treated more as an interference-inducing nuisance in most wireless networks today (e.g., IEEE 802.11). The goal of this paper is to show how the concept of network coding can be applied at the physical layer to turn the broadcast property into a capacityboosting advantage in wireless ad hoc networks. Specifically, we propose a physical-layer network coding (PNC) scheme to coordinate transmissions among nodes. In contrast to “straightforward” network coding which performs coding arithmetic on digital bit streams after they have been received, PNC makes use of the additive nature of simultaneously arriving electromagnetic (EM) waves for equivalent coding operation. PNC can yield higher capacity than straightforward network coding when applied to wireless networks. We believe this is a first paper that ventures into EM-wave-based network coding at the physical layer and demonstrates its potential for boosting network capacity. PNC opens up a whole new research area because of its implications and new design requirements for the physical, MAC, and network layers of ad hoc wireless stations. The resolution of the many outstanding but interesting issues in PNC may lead to a revolutionary new paradigm for wireless ad hoc networking.", "title": "" }, { "docid": "777cbf7e5c5bdf4457ce24520bbc8036", "text": "Recently, both industry and academia have proposed many different roadmaps for the future of DRAM. Consequently, there is a growing need for an extensible DRAM simulator, which can be easily modified to judge the merits of today's DRAM standards as well as those of tomorrow. In this paper, we present Ramulator, a fast and cycle-accurate DRAM simulator that is built from the ground up for extensibility. 
Unlike existing simulators, Ramulator is based on a generalized template for modeling a DRAM system, which is only later infused with the specific details of a DRAM standard. Thanks to such a decoupled and modular design, Ramulator is able to provide out-of-the-box support for a wide array of DRAM standards: DDR3/4, LPDDR3/4, GDDR5, WIO1/2, HBM, as well as some academic proposals (SALP, AL-DRAM, TL-DRAM, RowClone, and SARP). Importantly, Ramulator does not sacrifice simulation speed to gain extensibility: according to our evaluations, Ramulator is 2.5× faster than the next fastest simulator. Ramulator is released under the permissive BSD license.", "title": "" }, { "docid": "88a15c0efdfeba3e791ea88862aee0c3", "text": "Logic-based approaches to legal problem solving model the rule-governed nature of legal argumentation, justification, and other legal discourse but suffer from two key obstacles: the absence of efficient, scalable techniques for creating authoritative representations of legal texts as logical expressions; and the difficulty of evaluating legal terms and concepts in terms of the language of ordinary discourse. Data-centric techniques can be used to finesse the challenges of formalizing legal rules and matching legal predicates with the language of ordinary parlance by exploiting knowledge latent in legal corpora. However, these techniques typically are opaque and unable to support the rule-governed discourse needed for persuasive argumentation and justification. This paper distinguishes representative legal tasks to which each approach appears to be particularly well suited and proposes a hybrid model that exploits the complementarity of each.", "title": "" }, { "docid": "e7f771269ee99c04c69d1a7625a4196f", "text": "This report is a summary of Device-associated (DA) Module data collected by hospitals participating in the National Healthcare Safety Network (NHSN) for events occurring from January through December 2010 and re­ ported to the Centers for Disease Control and Prevention (CDC) by July 7, 2011. This report updates previously published DA Module data from the NHSN and provides contemporary comparative rates. This report comple­ ments other NHSN reports, including national and state-specific reports of standardized infection ratios for select health care-associated infections (HAIs). The NHSN was established in 2005 to integrate and supersede 3 legacy surveillance systems at the CDC: the National Nosocomial Infections Surveillance system, the Dialysis Surveillance Network, and the National Sur­ veillance System for Healthcare Workers. NHSN data col­ lection, reporting, and analysis are organized into 3 components—Patient Safety, Healthcare Personnel", "title": "" }, { "docid": "97c5b202cdc1f7d8220bf83663a0668f", "text": "Despite significant recent progress, the best available visual saliency models still lag behind human performance in predicting eye fixations in free-viewing of natural scenes. Majority of models are based on low-level visual features and the importance of top-down factors has not yet been fully explored or modeled. Here, we combine low-level features such as orientation, color, intensity, saliency maps of previous best bottom-up models with top-down cognitive visual features (e.g., faces, humans, cars, etc.) and learn a direct mapping from those features to eye fixations using Regression, SVM, and AdaBoost classifiers. 
By extensive experimenting over three benchmark eye-tracking datasets using three popular evaluation scores, we show that our boosting model outperforms 27 state-of-the-art models and is so far the closest model to the accuracy of human model for fixation prediction. Furthermore, our model successfully detects the most salient object in a scene without sophisticated image processings such as region segmentation.", "title": "" }, { "docid": "3d1265024a27d1d8ea963eaa5ff70aaa", "text": "The highly disagreeable sensation of pain results from an extraordinarily complex and interactive series of mechanisms integrated at all levels of the neuroaxis, from the periphery, via the dorsal horn to higher cerebral structures. Pain is usually elicited by the activation of specific nociceptors ('nociceptive pain'). However, it may also result from injury to sensory fibres, or from damage to the CNS itself ('neuropathic pain'). Although acute and subchronic, nociceptive pain fulfils a warning role, chronic and/or severe nociceptive and neuropathic pain is maladaptive. Recent years have seen a progressive unravelling of the neuroanatomical circuits and cellular mechanisms underlying the induction of pain. In addition to familiar inflammatory mediators, such as prostaglandins and bradykinin, potentially-important, pronociceptive roles have been proposed for a variety of 'exotic' species, including protons, ATP, cytokines, neurotrophins (growth factors) and nitric oxide. Further, both in the periphery and in the CNS, non-neuronal glial and immunecompetent cells have been shown to play a modulatory role in the response to inflammation and injury, and in processes modifying nociception. In the dorsal horn of the spinal cord, wherein the primary processing of nociceptive information occurs, N-methyl-D-aspartate receptors are activated by glutamate released from nocisponsive afferent fibres. Their activation plays a key role in the induction of neuronal sensitization, a process underlying prolonged painful states. In addition, upon peripheral nerve injury, a reduction of inhibitory interneurone tone in the dorsal horn exacerbates sensitized states and further enhance nociception. As concerns the transfer of nociceptive information to the brain, several pathways other than the classical spinothalamic tract are of importance: for example, the postsynaptic dorsal column pathway. In discussing the roles of supraspinal structures in pain sensation, differences between its 'discriminative-sensory' and 'affective-cognitive' dimensions should be emphasized. The purpose of the present article is to provide a global account of mechanisms involved in the induction of pain. Particular attention is focused on cellular aspects and on the consequences of peripheral nerve injury. In the first part of the review, neuronal pathways for the transmission of nociceptive information from peripheral nerve terminals to the dorsal horn, and therefrom to higher centres, are outlined. This neuronal framework is then exploited for a consideration of peripheral, spinal and supraspinal mechanisms involved in the induction of pain by stimulation of peripheral nociceptors, by peripheral nerve injury and by damage to the CNS itself. Finally, a hypothesis is forwarded that neurotrophins may play an important role in central, adaptive mechanisms modulating nociception. 
An improved understanding of the origins of pain should facilitate the development of novel strategies for its more effective treatment.", "title": "" }, { "docid": "46bee248655c79a0364fee437bc43eaf", "text": "Parkinson disease (PD) is a universal public health problem of massive measurement. Machine learning based method is used to classify between healthy people and people with Parkinson’s disease (PD). This paper presents a comprehensive review for the prediction of Parkinson disease buy using machine learning based approaches. The brief introduction of various computational intelligence techniques based approaches used for the prediction of Parkinson diseases are presented .This paper also presents the summary of results obtained by various researchers available in literature to predict the Parkinson diseases. Keywords— Parkinson’s disease, classification, random forest, support vector machine, machine learning, signal processing, artificial neural network.", "title": "" }, { "docid": "e9aea5919d3d38184fc13c10f1751293", "text": "The distinct protein aggregates that are found in Alzheimer's, Parkinson's, Huntington's and prion diseases seem to cause these disorders. Small intermediates — soluble oligomers — in the aggregation process can confer synaptic dysfunction, whereas large, insoluble deposits might function as reservoirs of the bioactive oligomers. These emerging concepts are exemplified by Alzheimer's disease, in which amyloid β-protein oligomers adversely affect synaptic structure and plasticity. Findings in other neurodegenerative diseases indicate that a broadly similar process of neuronal dysfunction is induced by diffusible oligomers of misfolded proteins.", "title": "" }, { "docid": "16da6b46cd53304923720ba4b5e92427", "text": "Despite its unambiguous advantages, cellular phone use has been associated with harmful or potentially disturbing behaviors. Problematic use of the mobile phone is considered as an inability to regulate one’s use of the mobile phone, which eventually involves negative consequences in daily life (e.g., financial problems). The current article describes what can be considered dysfunctional use of the mobile phone and emphasizes its multifactorial nature. Validated assessment instruments to measure problematic use of the mobile phone are described. The available literature on risk factors for dysfunctional mobile phone use is then reviewed, and a pathways model that integrates the existing literature is proposed. Finally, the assumption is made that dysfunctional use of the mobile phone is part of a spectrum of cyber addictions that encompasses a variety of dysfunctional behaviors and implies involvement in specific online activities (e.g., video games, gambling, social networks, sex-related websites).", "title": "" }, { "docid": "27101c9dcb89149b68d3ad47b516db69", "text": "A brain-computer interface (BCI) is a hardware and software communications system that permits cerebral activity alone to control computers or external devices. The immediate goal of BCI research is to provide communications capabilities to severely disabled people who are totally paralyzed or 'locked in' by neurological neuromuscular disorders, such as amyotrophic lateral sclerosis, brain stem stroke, or spinal cord injury. Here, we review the state-of-the-art of BCIs, looking at the different steps that form a standard BCI: signal acquisition, preprocessing or signal enhancement, feature extraction, classification and the control interface. 
We discuss their advantages, drawbacks, and latest advances, and we survey the numerous technologies reported in the scientific literature to design each step of a BCI. First, the review examines the neuroimaging modalities used in the signal acquisition step, each of which monitors a different functional brain activity such as electrical, magnetic or metabolic activity. Second, the review discusses different electrophysiological control signals that determine user intentions, which can be detected in brain activity. Third, the review includes some techniques used in the signal enhancement step to deal with the artifacts in the control signals and improve the performance. Fourth, the review studies some mathematic algorithms used in the feature extraction and classification steps which translate the information in the control signals into commands that operate a computer or other device. Finally, the review provides an overview of various BCI applications that control a range of devices.", "title": "" }, { "docid": "acfe7531f67a40e27390575a69dcd165", "text": "This paper reviews the relationship between attention deficit hyperactivity disorder (ADHD) and academic performance. First, the relationship at different developmental stages is examined, focusing on pre-schoolers, children, adolescents and adults. Second, the review examines the factors underpinning the relationship between ADHD and academic underperformance: the literature suggests that it is the symptoms of ADHD and underlying cognitive deficits not co-morbid conduct problems that are at the root of academic impairment. The review concludes with an overview of the literature examining strategies that are directed towards remediating the academic impairment of individuals with ADHD.", "title": "" }, { "docid": "08634303d285ec95873e003eeac701eb", "text": "This paper describes the application of adaptive neuro-fuzzy inference system (ANFIS) model for classification of electroencephalogram (EEG) signals. Decision making was performed in two stages: feature extraction using the wavelet transform (WT) and the ANFIS trained with the backpropagation gradient descent method in combination with the least squares method. Five types of EEG signals were used as input patterns of the five ANFIS classifiers. To improve diagnostic accuracy, the sixth ANFIS classifier (combining ANFIS) was trained using the outputs of the five ANFIS classifiers as input data. The proposed ANFIS model combined the neural network adaptive capabilities and the fuzzy logic qualitative approach. Some conclusions concerning the saliency of features on classification of the EEG signals were obtained through analysis of the ANFIS. The performance of the ANFIS model was evaluated in terms of training performance and classification accuracies and the results confirmed that the proposed ANFIS model has potential in classifying the EEG signals.", "title": "" }, { "docid": "592431c03450be59f10e56dcabed0ebf", "text": "Recent advances in machine learning have led to innovative applications and services that use computational structures to reason about complex phenomenon. Over the past several years, the security and machine-learning communities have developed novel techniques for constructing adversarial samples--malicious inputs crafted to mislead (and therefore corrupt the integrity of) systems built on computationally learned models. 
The authors consider the underlying causes of adversarial samples and the future countermeasures that might mitigate them.", "title": "" }, { "docid": "211037c38a50ff4169f3538c3b6af224", "text": "In this paper we present a method to obtain a depth map from a single image of a scene by exploiting both image content and user interaction. Assuming that regions with low gradients will have similar depth values, we formulate the problem as an optimization process across a graph, where pixels are considered as nodes and edges between neighbouring pixels are assigned weights based on the image gradient. Starting from a number of userdefined constraints, depth values are propagated between highly connected nodes i.e. with small gradients. Such constraints include, for example, depth equalities and inequalities between pairs of pixels, and may include some information about perspective. This framework provides a depth map of the scene, which is useful for a number of applications.", "title": "" }, { "docid": "ab156ab101063353a64bbcd51e47b88f", "text": "Spontaneous lens absorption (SLA) is a rare complication of hypermature cataract. However, this condition has been reported in several cases of hypermature cataracts that were caused by trauma, senility, uveitic disorders such as Fuchs’ uveitis syndrome (FUS), and infectious disorders including leptospirosis and rubella. We report a case of spontaneous absorption of a hypermature cataract secondary to FUS. To our knowledge, this is the first report of SLA that was followed by dislocation of the capsular remnants into the vitreous and resulted in a misdiagnosis as crystalline lens luxation.", "title": "" }, { "docid": "bfdc5925a540686d03b6314bf2009db3", "text": "This paper describes our programmable analog technology based around floating-gate transistors that allow for non-volatile storage as well as computation through the same device. We describe the basic concepts for floating-gate devices, capacitor-based circuits, and the basic charge modification mechanisms that makes this analog technology programmable. We describe the techniques to extend these techniques to program an nonhomogenious array of floating-gate devices.", "title": "" }, { "docid": "5bc7e46eedc9b525d36c72169eea8a3e", "text": "Training object class detectors typically requires a large set of images in which objects are annotated by boundingboxes. However, manually drawing bounding-boxes is very time consuming. We propose a new scheme for training object detectors which only requires annotators to verify bounding-boxes produced automatically by the learning algorithm. Our scheme iterates between re-training the detector, re-localizing objects in the training images, and human verification. We use the verification signal both to improve re-training and to reduce the search space for re-localisation, which makes these steps different to what is normally done in a weakly supervised setting. 
Extensive experiments on PASCAL VOC 2007 show that (1) using human verification to update detectors and reduce the search space leads to the rapid production of high-quality bounding-box annotations, (2) our scheme delivers detectors performing almost as good as those trained in a fully supervised setting, without ever drawing any bounding-box, (3) as the verification task is very quick, our scheme substantially reduces total annotation time by a factor 6×-9×.", "title": "" }, { "docid": "08675a0dc7a2f370d33704470297cec3", "text": "Construal level theory (CLT) is an account of how psychological distance influences individuals' thoughts and behavior. CLT assumes that people mentally construe objects that are psychologically near in terms of low-level, detailed, and contextualized features, whereas at a distance they construe the same objects or events in terms of high-level, abstract, and stable characteristics. Research has shown that different dimensions of psychological distance (time, space, social distance, and hypotheticality) affect mental construal and that these construals, in turn, guide prediction, evaluation, and behavior. The present paper reviews this research and its implications for consumer psychology.", "title": "" }, { "docid": "241fd5f03bbe92c9ce9006333fac4f3e", "text": "This article presents a comprehensive survey of research concerning interactions between associative learning and attention in humans. Four main findings are described. First, attention is biased toward stimuli that predict their consequences reliably (learned predictiveness). This finding is consistent with the approach taken by Mackintosh (1975) in his attentional model of associative learning in nonhuman animals. Second, the strength of this attentional bias is modulated by the value of the outcome (learned value). That is, predictors of high-value outcomes receive especially high levels of attention. Third, the related but opposing idea that uncertainty may result in increased attention to stimuli (Pearce & Hall, 1980), receives less support. This suggests that hybrid models of associative learning, incorporating the mechanisms of both the Mackintosh and Pearce-Hall theories, may not be required to explain data from human participants. Rather, a simpler model, in which attention to stimuli is determined by how strongly they are associated with significant outcomes, goes a long way to account for the data on human attentional learning. The last main finding, and an exciting area for future research and theorizing, is that learned predictiveness and learned value modulate both deliberate attentional focus, and more automatic attentional capture. The automatic influence of learning on attention does not appear to fit the traditional view of attention as being either goal-directed or stimulus-driven. Rather, it suggests a new kind of “derived” attention.", "title": "" } ]
scidocsrr
0ba7bee5877e7d35d8b2a407d79faf5d
MalwareTextDB: A Database for Annotated Malware Articles
[ { "docid": "afd00b4795637599f357a7018732922c", "text": "We present a new part-of-speech tagger that demonstrates the following ideas: (i) explicit use of both preceding and following tag contexts via a dependency network representation, (ii) broad use of lexical features, including jointly conditioning on multiple consecutive words, (iii) effective use of priors in conditional loglinear models, and (iv) fine-grained modeling of unknown word features. Using these ideas together, the resulting tagger gives a 97.24% accuracy on the Penn Treebank WSJ, an error reduction of 4.4% on the best previous single automatically learned tagging result.", "title": "" } ]
[ { "docid": "d321b5fb596c9d54a085e158f95de6b1", "text": "This paper presents a planar electronically steerable series-fed patch array for 2.4-GHz industrial, scientific, and medical band applications. The proposed steerable array uses 0deg tunable positive/negative-refractive-index (PRI/NRI) phase shifters to center its radiation about the broadside direction and allow scanning in both directions off the broadside. Using the PRI/NRI phase shifters also minimizes the squinting of the main beam across the operating bandwidth. The tunable PRI/NRI phase shifters employ 0.13-mum CMOS tunable active inductors, as well as varactors in order to extend their phase tuning range and maintain a low return loss across the entire phase tuning range. The feed network of the proposed array uses lambda/4 impedance transformers. This allows using identical interstage phase shifters, which share the same control voltages to tune all stages. Furthermore, using the impedance transformers in combination with the CMOS-based constant-impedance PRI/NRI phase shifters guarantees a low return loss for the antenna array across its entire scan angle range. The antenna array was fabricated, and is capable of continuously steering its main beam from -27deg to +22deg off the broadside direction with a gain of 8.4 dBi at 2.4 GHz. This is achieved by changing the varactors' control voltage from 3.5 to 15 V. Across the entire scan angle range, the array return loss is less than -10 dB across a bandwidth of 70 MHz, and the relative sidelobe level is always less than -10 dB. Furthermore, the proposed design achieves very low beam squinting of 1.3deg/100 MHz at broadside and a 1-dB compression point of 4.5 dBm.", "title": "" }, { "docid": "eb319648d52f9037f6d0548a080ff19e", "text": "We present a brief overview of the incentive sensitization theory of addiction. This posits that addiction is caused primarily by drug-induced sensitization in the brain mesocorticolimbic systems that attribute incentive salience to reward-associated stimuli. If rendered hypersensitive, these systems cause pathological incentive motivation ('wanting') for drugs. We address some current questions including: what is the role of learning in incentive sensitization and addiction? Does incentive sensitization occur in human addicts? Is the development of addiction-like behaviour in animals associated with sensitization? What is the best way to model addiction symptoms using animal models? And, finally, what are the roles of affective pleasure or withdrawal in addiction?", "title": "" }, { "docid": "006ea5f44521c42ec513edc1cbff1c43", "text": "In 2004 we published in this journal an article describing OntoLearn, one of the first systems to automatically induce a taxonomy from documents and Web sites. Since then, OntoLearn has continued to be an active area of research in our group and has become a reference work within the community. In this paper we describe our next-generation taxonomy learning methodology, which we name OntoLearn Reloaded. Unlike many taxonomy learning approaches in the literature, our novel algorithm learns both concepts and relations entirely from scratch via the automated extraction of terms, definitions, and hypernyms. This results in a very dense, cyclic and potentially disconnected hypernym graph. The algorithm then induces a taxonomy from this graph via optimal branching and a novel weighting policy. 
Our experiments show that we obtain high-quality results, both when building brand-new taxonomies and when reconstructing sub-hierarchies of existing taxonomies.", "title": "" }, { "docid": "395b6f7f49631420bd6a33560c3ea6f0", "text": "The authors examined the effects of divided attention (DA) at encoding and retrieval in free recall, cued recall, and recognition memory in 4 experiments. Lists of words or word pairs were presented auditorily and recalled orally; the secondary task was a visual continuous reaction-time (RT) task with manual responses. At encoding, DA was associated with large reductions in memory performance, but small increases in RT; trade-offs between memory and RT were under conscious control. In contrast, DA at retrieval resulted in small or no reductions in memory, but in comparatively larger increases in RT, especially in free recall. Memory performance was sensitive to changes in task emphasis at encoding but not at retrieval. The results are discussed in terms of controlled and automatic processes and speculatively linked to underlying neuropsychological mechanisms.", "title": "" }, { "docid": "7993e83655c632cc1a13b9a09b7e8c3c", "text": "1 Of increasing importance in the civilian and military population is the recognition of Major Depressive Disorder at its earliest stages and intervention before the onset of severe symptoms. Toward the goal of more effective monitoring of depression severity, we investigate automatic classifiers of depression state, that have the important property of mitigating nuisances due to data variability, such as speaker and channel effects, unrelated to levels of depression. To assess our measures, we use a 35-speaker free-response speech database of subjects treated for depression over a six-week duration, along with standard clinical HAMD depression ratings. Preliminary experiments indicate that by mitigating nuisances, thus focusing on depression severity as a class, we can significantly improve classification accuracy over baseline Gaussian-mixture-model-based classifiers.", "title": "" }, { "docid": "738555e605ee2b90ff99bef6d434162d", "text": "In this paper we present two deep-learning systems that competed at SemEval-2017 Task 4 “Sentiment Analysis in Twitter”. We participated in all subtasks for English tweets, involving message-level and topic-based sentiment polarity classification and quantification. We use Long Short-Term Memory (LSTM) networks augmented with two kinds of attention mechanisms, on top of word embeddings pre-trained on a big collection of Twitter messages. Also, we present a text processing tool suitable for social network messages, which performs tokenization, word normalization, segmentation and spell correction. Moreover, our approach uses no hand-crafted features or sentiment lexicons. We ranked 1st (tie) in Subtask A, and achieved very competitive results in the rest of the Subtasks. Both the word embeddings and our text processing tool1 are available to the research community.", "title": "" }, { "docid": "782341e7a40a95da2a430faae977dea0", "text": "Current Web services standards lack the means for expressing a service's nonfunctional attributes - namely, its quality of service. QoS can be objective (encompassing reliability, availability, and request-to-response time) or subjective (focusing on user experience). QoS attributes are key to dynamically selecting the services that best meet user needs. This article addresses dynamic service selection via an agent framework coupled with a QoS ontology. 
With this approach, participants can collaborate to determine each other's service quality and trustworthiness.", "title": "" }, { "docid": "d568194d6b856243056c072c96c76115", "text": "OBJECTIVE\nTo develop an evidence-based guideline to help clinicians make decisions about when and how to safely taper and stop antipsychotics; to focus on the highest level of evidence available and seek input from primary care professionals in the guideline development, review, and endorsement processes.\n\n\nMETHODS\nThe overall team comprised 9 clinicians (1 family physician, 1 family physician specializing in long-term care, 1 geriatric psychiatrist, 2 geriatricians, 4 pharmacists) and a methodologist; members disclosed conflicts of interest. For guideline development, a systematic process was used, including the GRADE (Grading of Recommendations Assessment, Development and Evaluation) approach. Evidence was generated from a Cochrane systematic review of antipsychotic deprescribing trials for the behavioural and psychological symptoms of dementia, and a systematic review was conducted to assess the evidence behind the benefits of using antipsychotics for insomnia. A review of reviews of the harms of continued antipsychotic use was performed, as well as narrative syntheses of patient preferences and resource implications. This evidence and GRADE quality-of-evidence ratings were used to generate recommendations. The team refined guideline content and recommendation wording through consensus and synthesized clinical considerations to address common front-line clinician questions. The draft guideline was distributed to clinicians and stakeholders for review and revisions were made at each stage.\n\n\nRECOMMENDATIONS\nWe recommend deprescribing antipsychotics for adults with behavioural and psychological symptoms of dementia treated for at least 3 months (symptoms stabilized or no response to an adequate trial) and for adults with primary insomnia treated for any duration or secondary insomnia in which underlying comorbidities are managed. A decision-support algorithm was developed to accompany the guideline.\n\n\nCONCLUSION\nAntipsychotics are associated with harms and can be safely tapered. Patients and caregivers might be more amenable to deprescribing if they understand the rationale (potential for harm), are involved in developing the tapering plan, and are offered behavioural advice or management. This guideline provides recommendations for making decisions about when and how to reduce the dose of or stop antipsychotics. Recommendations are meant to assist with, not dictate, decision making in conjunction with patients and families.", "title": "" }, { "docid": "ed9a497608e017ccf2e99ecdba2d9b80", "text": "The next fifth generation (5G) of wireless communication networks comes with a set of new features to satisfy the demand of data-intensive applications: millimeter wave frequencies, massive antenna arrays, beamforming, dense cells, etc. In this paper, we investigate the use of beamforming techniques through various architectures and evaluate the performance of 5G wireless access networks, using a capacity-based network deployment tool. This tool is proposed and applied to a realistic area in Ghent, Belgium, to simulate realistic 5G networks that respond to the instantaneous bit rate required by the active users. 
The results show that, with beamforming, 5G networks require almost 15% more base stations and 4 times less power to provide more capacity to the users and same coverage performances, in comparison with the 4G reference network. Moreover, they are 3 times more energy efficient than the 4G network and the hybrid beamforming architecture appears to be a suitable architecture for beamforming to be considered when designing a 5G cellular network.", "title": "" }, { "docid": "2cf7921cce2b3077c59d9e4e2ab13afe", "text": "Scientists and consumers preference focused on natural colorants due to the emergence of negative health effects of synthetic colorants which is used for many years in foods. Interest in natural colorants is increasing with each passing day as a consequence of their antimicrobial and antioxidant effects. The biggest obstacle in promotion of natural colorants as food pigment agents is that it requires high investment. For this reason, the R&D studies related issues are shifted to processes to reduce cost and it is directed to pigment production from microorganisms with fermentation. Nowadays, there is pigments obtained by commercially microorganisms or plants with fermantation. These pigments can be use for both food colorant and food supplement. In this review, besides colourant and antioxidant properties, antimicrobial properties of natural colorants are discussed.", "title": "" }, { "docid": "6cfc078d0b908cb020417d4503e5bade", "text": "How does an entrepreneur’s social network impact crowdfunding? Based on social capital theory, we developed a research model and conducted a comparative study using objective data collected from China and the U.S. We found that an entrepreneur’s social network ties, obligations to fund other entrepreneurs, and the shared meaning of the crowdfunding project between the entrepreneur and the sponsors had significant effects on crowdfunding performance in both China and the U.S. The predictive power of the three dimensions of social capital was stronger in China than it was in the U.S. Obligation also had a greater impact in China. 2014 Elsevier B.V. All rights reserved. § This study is supported by the Natural Science Foundation of China (71302186), the Chinese Ministry of Education Humanities and Social Sciences Young Scholar Fund (12YJCZH306), the China National Social Sciences Fund (11AZD077), and the Fundamental Research Funds for the Central Universities (JBK120505). * Corresponding author. Tel.: +1 218 726 7334. E-mail addresses: haichao_zheng@163.com (H. Zheng), dli@d.umn.edu (D. Li), kaitlynwu@swufe.edu.cn (J. Wu), xuyun@swufe.edu.cn (Y. Xu).", "title": "" }, { "docid": "3e7e40f82ebb83b4314c974334c8ce0c", "text": "Three-dimensional shape reconstruction of 2D landmark points on a single image is a hallmark of human vision, but is a task that has been proven difficult for computer vision algorithms. We define a feed-forward deep neural network algorithm that can reconstruct 3D shapes from 2D landmark points almost perfectly (i.e., with extremely small reconstruction errors), even when these 2D landmarks are from a single image. 
Our experimental results show an improvement of up to two-fold over state-of-the-art computer vision algorithms; 3D shape reconstruction error (measured as the Procrustes distance between the reconstructed shape and the ground-truth) of human faces is <.004, cars is .0022, human bodies is .022, and highly-deformable flags is .0004. Our algorithm was also a top performer at the 2016 3D Face Alignment in the Wild Challenge competition (done in conjunction with the European Conference on Computer Vision, ECCV) that required the reconstruction of 3D face shape from a single image. The derived algorithm can be trained in a couple hours and testing runs at more than 1,000 frames/s on an i7 desktop. We also present an innovative data augmentation approach that allows us to train the system efficiently with small number of samples. And the system is robust to noise (e.g., imprecise landmark points) and missing data (e.g., occluded or undetected landmark points).", "title": "" }, { "docid": "8c1de6e57121c349cadc45068b69bb1f", "text": "PURPOSE\nTo assess the relationship between serum insulin-like growth factor I (IGF-I) and diabetic retinopathy.\n\n\nMETHODS\nThis was a clinic-based cross-sectional study conducted at the Emory Eye Center. A total of 225 subjects were classified into four groups, based on diabetes status and retinopathy findings: no diabetes mellitus (no DM; n=99), diabetes with no background diabetic retinopathy (no BDR; n=42), nonproliferative diabetic retinopathy (NPDR; n=41), and proliferative diabetic retinopathy (PDR; n=43). Key exclusion criteria included type 1 diabetes and disorders that affect serum IGF-I levels, such as acromegaly. Subjects underwent dilated fundoscopic examination and were tested for hemoglobin A1c, serum creatinine, and serum IGF-I, between December 2009 and March 2010. Serum IGF-I levels were measured using an immunoassay that was calibrated against an international standard.\n\n\nRESULTS\nBetween the groups, there were no statistical differences with regards to age, race, or sex. Overall, diabetic subjects had similar serum IGF-I concentrations compared to nondiabetic subjects (117.6 µg/l versus 122.0 µg/l; p=0.497). There was no significant difference between serum IGF-I levels among the study groups (no DM=122.0 µg/l, no BDR=115.4 µg/l, NPDR=118.3 µg/l, PDR=119.1 µg/l; p=0.897). Among the diabetic groups, the mean IGF-I concentration was similar between insulin-dependent and non-insulin-dependent subjects (116.8 µg/l versus 118.2 µg/l; p=0.876). The univariate analysis of the IGF-I levels demonstrated statistical significance in regard to age (p=0.002, r=-0.20), body mass index (p=0.008, r=-0.18), and race (p=0.040).\n\n\nCONCLUSIONS\nThere was no association between serum IGF-I concentrations and diabetic retinopathy in this large cross-sectional study.", "title": "" }, { "docid": "b4a2c3679fe2490a29617c6a158b9dbc", "text": "We present a general approach to automating ethical decisions, drawing on machine learning and computational social choice. In a nutshell, we propose to learn a model of societal preferences, and, when faced with a specific ethical dilemma at runtime, efficiently aggregate those preferences to identify a desirable choice. 
We provide a concrete algorithm that instantiates our approach; some of its crucial steps are informed by a new theory of swap-dominance efficient voting rules. Finally, we implement and evaluate a system for ethical decision making in the autonomous vehicle domain, using preference data collected from 1.3 million people through the Moral Machine website.", "title": "" }, { "docid": "e0e57c28292cae771ce127a1e1a146db", "text": "Because of their wide abundance, their renewable and environmentally benign nature, and their outstanding mechanical properties, a great deal of attention has been paid recently to cellulosic nanofibrillar structures as components in nanocomposites. A first major challenge has been to find efficient ways to liberate cellulosic fibrils from different source materials, including wood, agricultural residues, or bacterial cellulose. A second major challenge has involved the lack of compatibility of cellulosic surfaces with a variety of plastic materials. The water-swellable nature of cellulose, especially in its non-crystalline regions, also can be a concern in various composite materials. This review of recent work shows that considerable progress has been achieved in addressing these issues and that there is potential to use cellulosic nano-components in a wide range of high-tech applications.", "title": "" }, { "docid": "192d5862ea76ae82282f0f6079638f40", "text": "Access control is an important security mechanism that is used to limit user access to information and resources and to prevent malicious users from making unauthorized access. Traditional access control models are well suited for centralized and relatively static environments in which information about subjects and objects are known in priori, but they can hardly meet the needs of open and dynamic network environments. Access control in open network environments must therefore adapt to dynamic addition and deletion of subjects and objects. In this paper, we use game theory to analyze trust-based access control to help compute trust values that involve several factors. By viewing access control as a game played between the requester and the provider entities, we can develop strategies that would motivate subjects to make honest access to objects continuously to get the most payoffs.", "title": "" }, { "docid": "26aad391498670aee81e6b705c11e3b7", "text": "BACKGROUND\nAn aging population means that chronic illnesses, such as diabetes, are becoming more prevalent and demands for care are rising. Members of primary care teams should organize and coordinate patient care with a view to improving quality of care and impartial adherence to evidence-based practices for all patients. The aims of the present study were: to ascertain the prevalence of diabetes in an Italian population, stratified by age, gender and citizenship; and to identify the rate of compliance with recommended guidelines for monitoring diabetes, to see whether disparities exist in the quality of diabetes patient management.\n\n\nMETHODS\nA population-based analysis was performed on a dataset obtained by processing public health administration databases. The presence of diabetes and compliance with standards of care were estimated using appropriate algorithms. A multilevel logistic regression analysis was applied to assess factors affecting compliance with standards of care.\n\n\nRESULTS\n1,948,622 Italians aged 16+ were included in the study. In this population, 105,987 subjects were identified as having diabetes on January 1st, 2009. 
The prevalence of diabetes was 5.43% (95% CI 5.33-5.54) overall, 5.87% (95% CI 5.82-5.92) among males, and 5.05% (95% CI 5.00-5.09) among females. HbA1c levels had been tested in 60.50% of our diabetic subjects, LDL cholesterol levels in 57.50%, and creatinine levels in 63.27%, but only 44.19% of the diabetic individuals had undergone a comprehensive assessment during one year of care. Statistical differences in diabetes care management emerged relating to gender, age, diagnostic latency period, comorbidity and citizenship.\n\n\nCONCLUSIONS\nProcess management indicators need to be used not only for the overall assessment of health care processes, but also to monitor disparities in the provision of health care.", "title": "" }, { "docid": "791e8444df172d8e925d0a83297c599a", "text": "Compared to the traditional stand-alone game client, browser-based Multiplayer Online Game (MOG), which requires no explicit installation and is able to achieve cross-platform easily, is getting more and more popular. With the rapid development of HTML5 standard and other Web technologies, MOG systems based on WebGL and WebSocket seem to be very promising. We implemented such a framework for browser-based multiplayer online games and studied its performance and feasibility. Our analytical result shows that Three.js 3D Engine and jWebSocket based MOG can easily support the interaction of a small group of users.", "title": "" }, { "docid": "7ec7841679e688822d32d17a5b26f5f6", "text": "The software-defined network (SDN) advocates a centralized network control, where a controller manages a network from a global view of the network. Large SDN networks may consist of multiple controllers or controller domains that distribute the network management between them, where each controller has a logically centralized but physically distributed vision of the network. In this context, a key challenge faced by providers is to define a scalable control network that exploits the benefits of SDN when used in conjunction with efficient management strategies. Most of the control layer models proposed are not concerned with controller scalability, because they assume that commercial controllers are scalable in terms of capacity (quantity of flows processed per second). However, it has been demonstrated that overloads and long propagation delays among controllers and controllers-switches can lead to a long response time of the controllers, affecting their ability to respond to network events in a very short time and reducing the reliability of communication. In this work we define the principles for designing a scalable control layer for SDN, and show the desired control layer characteristics that optimize the management of the network. We address these principles from the perspective of the controller placement problem. For this purpose we improve and evaluate our previous approach, the algorithm called k-Critical. K-Critical discovers the minimum number of controllers and their location to create a robust control topology that deals robustly with failures and balances the load among the selected controllers. The results demonstrate the effectiveness of our solution by comparing it with other controller placement solutions.", "title": "" }, { "docid": "5ff8d6415a2601afdc4a15c13819f5bb", "text": "This paper studies the effects of various types of online advertisements on purchase conversion by capturing the dynamic interactions among advertisement clicks themselves. 
It is motivated by the observation that certain advertisement clicks may not result in immediate purchases, but they stimulate subsequent clicks on other advertisements which then lead to purchases. We develop a stochastic model based on mutually exciting point processes, which model advertisement clicks and purchases as dependent random events in continuous time. We incorporate individual random effects to account for consumer heterogeneity and cast the model in the Bayesian hierarchical framework. We propose a new metric of conversion probability to measure the conversion effects of online advertisements. Simulation algorithms for mutually exciting point processes are developed to evaluate the conversion probability and for out-of-sample prediction. Model comparison results show the proposed model outperforms the benchmark model that ignores exciting effects among advertisement clicks. We find that display advertisements have relatively low direct effect on purchase conversion, but they are more likely to stimulate subsequent visits through other advertisement formats. We show that the commonly used measure of conversion rate is biased in favor of search advertisements and underestimates the conversion effect of display advertisements the most. Our model also furnishes a useful tool to predict future purchases and clicks on online", "title": "" } ]
scidocsrr
47a4c011cd344587efa81897df4a8247
The bi-elliptical deformable contour and its application to automated tongue segmentation in Chinese medicine
[ { "docid": "f3c2663cb0341576d754bb6cd5f2c0f5", "text": "This article surveys deformable models, a promising and vigorously researched computer-assisted medical image analysis technique. Among model-based techniques, deformable models offer a unique and powerful approach to image analysis that combines geometry, physics and approximation theory. They have proven to be effective in segmenting, matching and tracking anatomic structures by exploiting (bottom-up) constraints derived from the image data together with (top-down) a priori knowledge about the location, size and shape of these structures. Deformable models are capable of accommodating the significant variability of biological structures over time and across different individuals. Furthermore, they support highly intuitive interaction mechanisms that, when necessary, allow medical scientists and practitioners to bring their expertise to bear on the model-based image interpretation task. This article reviews the rapidly expanding body of work on the development and application of deformable models to problems of fundamental importance in medical image analysis, including segmentation, shape representation, matching and motion tracking.", "title": "" } ]
[ { "docid": "0d27f38d701e3ed5e4efcdb2f9043e44", "text": "BACKGROUND\nThe mechanical, rheological, and pharmacological properties of hyaluronic acid (HA) gels differ by their proprietary crosslinking technologies.\n\n\nOBJECTIVE\nTo examine the different properties of a range of HA gels using simple and easily reproducible laboratory tests to better understand their suitability for particular indications.\n\n\nMETHODS AND MATERIALS\nHyaluronic acid gels produced by one of 7 different crosslinking technologies were subjected to tests for cohesivity, resistance to stretch, and microscopic examination. These 7 gels were: non-animal stabilized HA (NASHA® [Restylane®]), 3D Matrix (Surgiderm® 24 XP), cohesive polydensified matrix (CPM® [Belotero® Balance]), interpenetrating network-like (IPN-like [Stylage® M]), Vycross® (Juvéderm Volbella®), optimal balance technology (OBT® [Emervel Classic]), and resilient HA (RHA® [Teosyal Global Action]).\n\n\nRESULTS\nCohesivity varied for the 7 gels, with NASHA being the least cohesive and CPM the most cohesive. The remaining gels could be described as partially cohesive. The resistance to stretch test confirmed the cohesivity findings, with CPM having the greatest resistance. Light microscopy of the 7 gels revealed HA particles of varying size and distribution. CPM was the only gel to have no particles visible at a microscopic level.\n\n\nCONCLUSION\nHyaluronic acid gels are produced with a range of different crosslinking technologies. Simple laboratory tests show how these can influence a gel's behavior, and can help physicians select the optimal product for a specific treatment indication. Versions of this paper have been previously published in French and in Dutch in the Belgian journal Dermatologie Actualité. Micheels P, Sarazin D, Tran C, Salomon D. Un gel d'acide hyaluronique est-il semblable à son concurrent? Derm-Actu. 2015;14:38-43. J Drugs Dermatol. 2016;15(5):600-606..", "title": "" }, { "docid": "8e34d3c0f25abc171599b76e3c4f07e8", "text": "During the past 100 years clinical studies of amnesia have linked memory impairment to damage of the hippocampus. Yet the damage in these cases has not usually been confined to the hippocampus, and the status of memory functions has often been based on incomplete neuropsychological information. Thus, the human cases have until now left some uncertainty as to whether lesions limited to the hippocampus are sufficient to cause amnesia. Here we report a case of amnesia in a patient (R.B.) who developed memory impairment following an ischemic episode. During the 5 years until his death, R.B. exhibited marked anterograde amnesia, little if any retrograde amnesia, and showed no signs of cognitive impairment other than memory. Thorough histological examination revealed a circumscribed bilateral lesion involving the entire CA1 field of the hippocampus. Minor pathology was found elsewhere in the brain (e.g., left globus pallidus, right postcentral gyrus, left internal capsule), but the only damage that could be reasonably associated with the memory defect was the lesion in the hippocampus. To our knowledge, this is the first reported case of amnesia following a lesion limited to the hippocampus in which extensive neuropsychological and neuropathological analyses have been carried out.", "title": "" }, { "docid": "ea5e08627706532504b9beb6f4dc6650", "text": "This paper highlights the role that reinforcement learning can play in the optimization of treatment policies for chronic illnesses. 
Before applying any off-the-shelf reinforcement learning methods in this setting, we must first tackle a number of challenges. We outline some of these challenges and present methods for overcoming them. First, we describe a multiple imputation approach to overcome the problem of missing data. Second, we discuss the use of function approximation in the context of a highly variable observation set. Finally, we discuss approaches to summarizing the evidence in the data for recommending a particular action and quantifying the uncertainty around the Q-function of the recommended policy. We present the results of applying these methods to real clinical trial data of patients with schizophrenia.", "title": "" }, { "docid": "3baf11f31351e92c7ff56b066434ae2c", "text": "Unlike images which are represented in regular dense grids, 3D point clouds are irregular and unordered, hence applying convolution on them can be difficult. In this paper, we extend the dynamic filter to a new convolution operation, named PointConv. PointConv can be applied on point clouds to build deep convolutional networks. We treat convolution kernels as nonlinear functions of the local coordinates of 3D points comprised of weight and density functions. With respect to a given point, the weight functions are learned with multi-layer perceptron networks and the density functions through kernel density estimation. A novel reformulation is proposed for efficiently computing the weight functions, which allowed us to dramatically scale up the network and significantly improve its performance. The learned convolution kernel can be used to compute translation-invariant and permutation-invariant convolution on any point set in the 3D space. Besides, PointConv can also be used as deconvolution operators to propagate features from a subsampled point cloud back to its original resolution. Experiments on ModelNet40, ShapeNet, and ScanNet show that deep convolutional neural networks built on PointConv are able to achieve state-ofthe-art on challenging semantic segmentation benchmarks on 3D point clouds. Besides, our experiments converting CIFAR-10 into a point cloud showed that networks built on PointConv can match the performance of convolutional networks in 2D images of a similar structure.", "title": "" }, { "docid": "95350d45a65cb6932f26be4c4d417a30", "text": "This paper presents a detailed performance comparison (including efficiency, EMC performance and component electrical stress) between boost and buck type PFC under critical conduction mode (CRM). In universal input (90–265Vac) applications, the CRM buck PFC has around 1% higher efficiency compared to its counterpart at low-line (90Vac) condition. Due to the low voltage swing of switch, buck PFC has a better CM EMI performance than boost PFC. It seems that the buck PFC is more attractive in low power applications which only need to meet the IEC61000-3-2 Class D standard based on the comparison. The experimental results from two 100-W prototypes are also presented for side by side comparison.", "title": "" }, { "docid": "ccf40417ca3858d69c4cd3fd031ea7c1", "text": "Online social networks (OSNs) have become popular platforms for people to connect and interact with each other. Among those networks, Pinterest has recently become noteworthy for its growth and promotion of visual over textual content. 
The purpose of this study is to analyze this image-based network in a gender-sensitive fashion, in order to understand (i) user motivation and usage pattern in the network, (ii) how communications and social interactions happen and (iii) how users describe themselves to others. This work is based on more than 220 million items generated by 683,273 users. We were able to find significant differences w.r.t. all mentioned aspects. We observed that, although the network does not encourage direct social communication, females make more use of lightweight interactions than males. Moreover, females invest more effort in reciprocating social links, are more active and generalist in content generation, and describe themselves using words of affection and positive emotions. Males, on the other hand, are more likely to be specialists and tend to describe themselves in an assertive way. We also observed that each gender has different interests in the network, females tend to make more use of the network’s commercial capabilities, while males are more prone to the role of curators of items that reflect their personal taste. It is important to understand gender differences in online social networks, so one can design services and applications that leverage human social interactions and provide more targeted and relevant user experiences.", "title": "" }, { "docid": "7d0bbf3a83881a97b0217b427b596b76", "text": "This paper proposes a novel tracker which is controlled by sequentially pursuing actions learned by deep reinforcement learning. In contrast to the existing trackers using deep networks, the proposed tracker is designed to achieve a light computation as well as satisfactory tracking accuracy in both location and scale. The deep network to control actions is pre-trained using various training sequences and fine-tuned during tracking for online adaptation to target and background changes. The pre-training is done by utilizing deep reinforcement learning as well as supervised learning. The use of reinforcement learning enables even partially labeled data to be successfully utilized for semi-supervised learning. Through evaluation of the OTB dataset, the proposed tracker is validated to achieve a competitive performance that is three times faster than state-of-the-art, deep network–based trackers. The fast version of the proposed method, which operates in real-time on GPU, outperforms the state-of-the-art real-time trackers.", "title": "" }, { "docid": "7916a261319dad5f257a0b8e0fa97fec", "text": "INTRODUCTION\nPreliminary research has indicated that recreational ketamine use may be associated with marked cognitive impairments and elevated psychopathological symptoms, although no study to date has determined how these are affected by differing frequencies of use or whether they are reversible on cessation of use. In this study we aimed to determine how variations in ketamine use and abstention from prior use affect neurocognitive function and psychological wellbeing.\n\n\nMETHOD\nWe assessed a total of 150 individuals: 30 frequent ketamine users, 30 infrequent ketamine users, 30 ex-ketamine users, 30 polydrug users and 30 controls who did not use illicit drugs. Cognitive tasks included spatial working memory, pattern recognition memory, the Stockings of Cambridge (a variant of the Tower of London task), simple vigilance and verbal and category fluency. Standardized questionnaires were used to assess psychological wellbeing. 
Hair analysis was used to verify group membership.\n\n\nRESULTS\nFrequent ketamine users were impaired on spatial working memory, pattern recognition memory, Stockings of Cambridge and category fluency but exhibited preserved verbal fluency and prose recall. There were no differences in the performance of the infrequent ketamine users or ex-users compared to the other groups. Frequent users showed increased delusional, dissociative and schizotypal symptoms which were also evident to a lesser extent in infrequent and ex-users. Delusional symptoms correlated positively with the amount of ketamine used currently by the frequent users.\n\n\nCONCLUSIONS\nFrequent ketamine use is associated with impairments in working memory, episodic memory and aspects of executive function as well as reduced psychological wellbeing. 'Recreational' ketamine use does not appear to be associated with distinct cognitive impairments although increased levels of delusional and dissociative symptoms were observed. As no performance decrements were observed in the ex-ketamine users, it is possible that the cognitive impairments observed in the frequent ketamine group are reversible upon cessation of ketamine use, although delusional symptoms persist.", "title": "" }, { "docid": "7624a6ca581c0096c6e5bc484a3d772e", "text": "We describe two systems for text simplification using typed dependency structures, one that performs lexical and syntactic simplification, and another that performs sentence compression optimised to satisfy global text constraints such as lexical density, the ratio of difficult words, and text length. We report a substantial evaluation that demonstrates the superiority of our systems, individually and in combination, over the state of the art, and also report a comprehension based evaluation of contemporary automatic text simplification systems with target non-native readers.", "title": "" }, { "docid": "19f96525e1e3dcc563a7b2138c8b1547", "text": "The state of the art in bidirectional search has changed significantly a very short time period; we now can answer questions about unidirectional and bidirectional search that until very recently we were unable to answer. This paper is designed to provide an accessible overview of the recent research in bidirectional search in the context of the broader efforts over the last 50 years. We give particular attention to new theoretical results and the algorithms they inspire for optimal and nearoptimal node expansions when finding a shortest path. Introduction and Overview Shortest path algorithms have a long history dating to Dijkstra’s algorithm (DA) (Dijkstra 1959). DA is the canonical example of a best-first search which prioritizes state expansions by their g-cost (distance from the start state). Historically, there were two enhancements to DA developed relatively quickly: bidirectional search and the use of heuristics. Nicholson (1966) suggested bidirectional search where the search proceeds from both the start and the goal simultaneously. In a two dimensional search space a search to radius r will visit approximately r states. A bidirectional search will perform two searches of approximately (r/2) states, a reduction of a factor of two. In exponential state spaces the reduction is from b to 2b, an exponential gain in both memory and time. This is illustrated in Figure 1, where the large circle represents a unidirectional search towards the goal, while the smaller circles represent the two parts of a bidirectional search. 
Just two years later, DA was independently enhanced with admissible heuristics (distance estimates to the goal) that resulted in the A* algorithm (Hart, Nilsson, and Raphael 1968). A* is goal directed – the search is focused towards the goal by the heuristic. This significantly reduces the search effort required to find a path to the goal. The obvious challenge was whether these two enhancements could be effectively combined into bidirectional heuristic search (Bi-HS). Pohl (1969) first addressed this challenge showing that in practice unidirectional heuristic search (Uni-HS) seemed to beat out Bi-HS. Many Bi-HS Copyright c © 2018, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. algorithms were developed over the years (see a short survey below), but no such algorithm was shown to consistently outperform Uni-HS. Barker and Korf (2015) recently hypothesized that in most cases one should either use bidirectional brute-force search (Bi-BS) or Uni-HS (e.g. A*), but that Bi-HS is never the best approach. This work spurred further research into Bi-HS, and has lead to new theoretical understanding on the nature of Bi-HS as well as new Bi-HS algorithms (e.g., MM, fMM and NBS described below) with strong theoretical guarantees. The purpose of this paper is to provide a high-level picture of this new line of work while placing it in the larger context of previous work on bidirectional search. While there are still many questions yet to answer, we have, for the first time, the full suite of analytic tools necessary to determine whether bidirectional search will be useful on a given problem instance. This is coupled with a Bi-HS algorithm that is guaranteed to expand no more than twice the minimum number of the necessary state expansions in practice. With these tools we can illustrate use-cases for bidirectional search and point to areas of future research. Terminology and Background We define a shortest-path problem as a n-tuple (start, goal, expF , expB , hF , hB), where the goal is to find the least-cost path between start and goal in a graph G. G is not provided a priori, but is provided implicitly through the expF and expB functions that can expand and return the forward (backwards) successors of any state. Bidirectional search algorithms interleave two separate searches, a search forward from start and a search backward from goal. We use fF , gF and hF to indicate f -, g-, and h-costs in the forward search and fB , gB and hB similarly in the backward search. Likewise, OpenF and OpenB store states generated in the forward and backward directions, respectively. Finally, gminF , gminB , fminF and fminB denote the minimal gand f -values in OpenF and OpenB respectively. d(x, y) denotes the shortest distance between x and y. Front-to-end algorithms use two heuristic functions. The forward heuristic, hF , is forward admissible iff hF (u) ≤ d(u, goal) for all u in G and is forward consistent iff hF (u) ≤ d(u, u′) + hF (u′) for all u and u′ in G. The backward heuristic, hB , is backward admissible iff hB(v) ≤", "title": "" }, { "docid": "9c38fcfcbfeaf0072e723bd7e1e7d17d", "text": "BACKGROUND\nAllicin (diallylthiosulfinate) is the major volatile- and antimicrobial substance produced by garlic cells upon wounding. 
We tested the hypothesis that allicin affects membrane function and investigated 1) betanine pigment leakage from beetroot (Beta vulgaris) tissue, 2) the semipermeability of the vacuolar membrane of Rhoeo discolor cells, 3) the electrophysiology of plasmalemma and tonoplast of Chara corallina and 4) electrical conductivity of artificial lipid bilayers.\n\n\nMETHODS\nGarlic juice and chemically synthesized allicin were used and betanine loss into the medium was monitored spectrophotometrically. Rhoeo cells were studied microscopically and Chara- and artificial membranes were patch clamped.\n\n\nRESULTS\nBeet cell membranes were approximately 200-fold more sensitive to allicin on a mol-for-mol basis than to dimethyl sulfoxide (DMSO) and approximately 400-fold more sensitive to allicin than to ethanol. Allicin-treated Rhoeo discolor cells lost the ability to plasmolyse in an osmoticum, confirming that their membranes had lost semipermeability after allicin treatment. Furthermore, allicin and garlic juice diluted in artificial pond water caused an immediate strong depolarization, and a decrease in membrane resistance at the plasmalemma of Chara, and caused pore formation in the tonoplast and artificial lipid bilayers.\n\n\nCONCLUSIONS\nAllicin increases the permeability of membranes.\n\n\nGENERAL SIGNIFICANCE\nSince garlic is a common foodstuff the physiological effects of its constituents are important. Allicin's ability to permeabilize cell membranes may contribute to its antimicrobial activity independently of its activity as a thiol reagent.", "title": "" }, { "docid": "f21850cde63b844e95db5b9916db1c30", "text": "Foreign Exchange (Forex) market is a complex and challenging task for prediction due to uncertainty movement of exchange rate. However, these movements over timeframe also known as historical Forex data that offered a generic repeated trend patterns. This paper uses the features extracted from trend patterns to model and predict the next day trend. Hidden Markov Models (HMMs) is applied to learn the historical trend patterns, and use to predict the next day movement trends. We use the 2011 Forex historical data of Australian Dollar (AUS) and European Union Dollar (EUD) against the United State Dollar (USD) for modeling, and the 2012 and 2013 Forex historical data for validating the proposed model. The experimental results show outperforms prediction result for both years.", "title": "" }, { "docid": "b36e9a2f1143fa242c4d372cb0ba38b3", "text": "Invariance to nuisance transformations is one of the desirable properties of effective representations. We consider transformations that form a group and propose an approach based on kernel methods to derive local group invariant representations. Locality is achieved by defining a suitable probability distribution over the group which in turn induces distributions in the input feature space. We learn a decision function over these distributions by appealing to the powerful framework of kernel methods and generate local invariant random feature maps via kernel approximations. We show uniform convergence bounds for kernel approximation and provide generalization bounds for learning with these features. We evaluate our method on three real datasets, including Rotated MNIST and CIFAR-10, and observe that it outperforms competing kernel based approaches. 
The proposed method also outperforms deep CNN on RotatedMNIST and performs comparably to the recently proposed group-equivariant CNN.", "title": "" }, { "docid": "72fde59972907a8092e5c091c4efc20b", "text": "A new technique for facial expression recognition is proposed, which uses the two-dimensional (2D) discrete cosine transform (DCT) over the entire face image as a feature detector and a constructive one-hidden-layer feedforward neural network as a facial expression classifier. An input-side pruning technique, proposed previously by the authors, is also incorporated into the constructive learning process to reduce the network size without sacrificing the performance of the resulting network. The proposed technique is applied to a database consisting of images of 60 men, each having five facial expression images (neutral, smile, anger, sadness, and surprise). Images of 40 men are used for network training, and the remaining images of 20 men are used for generalization and testing. Confusion matrices calculated in both network training and generalization for four facial expressions (smile, anger, sadness, and surprise) are used to evaluate the performance of the trained network. It is demonstrated that the best recognition rates are 100% and 93.75% (without rejection), for the training and generalizing images, respectively. Furthermore, the input-side weights of the constructed network are reduced by approximately 30% using our pruning method. In comparison with the fixed structure back propagation-based recognition methods in the literature, the proposed technique constructs one-hidden-layer feedforward neural network with fewer number of hidden units and weights, while simultaneously provide improved generalization and recognition performance capabilities.", "title": "" }, { "docid": "65385cdaac98022605efd2fd82bb211b", "text": "As electric vehicles (EVs) take a greater share in the personal automobile market, their penetration may bring higher peak demand at the distribution level. This may cause potential transformer overloads, feeder congestions, and undue circuit faults. This paper focuses on the impact of charging EVs on a residential distribution circuit. Different EV penetration levels, EV types, and charging profiles are considered. In order to minimize the impact of charging EVs on a distribution circuit, a demand response strategy is proposed in the context of a smart distribution network. In the proposed DR strategy, consumers will have their own choices to determine which load to control and when. Consumer comfort indices are introduced to measure the impact of demand response on consumers' lifestyle. The proposed indices can provide electric utilities a better estimation of the customer acceptance of a DR program, and the capability of a distribution circuit to accommodate EV penetration.", "title": "" }, { "docid": "abb76c1a8619887d09f07288fe3a50a3", "text": "This paper proposes the Enhanced Shrinking and Expanding Algorithm (ESEA) with a new categorization method. The ESEA overcomes anomalies in the original Shrinking and Expanding Algorithm (SEA) which fails to locate singular points (SPs) in many cases. Experimental results show that the accuracy rate of the ESEA reaches 94.7%, a 32.5% increase from the SEA. In the proposed fingerprint categorization method, each fingerprint will be assigned to a specific subclass. 
The search for a specific fingerprint can therefore be performed only on specific subclasses containing a small portion of a large fingerprint database, which will save enormous computational time.", "title": "" }, { "docid": "f1ae820d7e067dabfda5efc1229762d8", "text": "Data from 574 participants were used to assess perceptions of message, site, and sponsor credibility across four genres of websites; to explore the extent and effects of verifying web-based information; and to measure the relative influence of sponsor familiarity and site attributes on perceived credibility. The results show that perceptions of credibility differed, such that news organization websites were rated highest and personal websites lowest, in terms of message, sponsor, and overall site credibility, with e-commerce and special interest sites rated between these, for the most part. The results also indicated that credibility assessments appear to be primarily due to website attributes (e.g. design features, depth of content, site complexity) rather than to familiarity with website sponsors. Finally, there was a negative relationship between self-reported and observed information verification behavior and a positive relationship between self-reported verification and internet/web experience. The findings are used to inform the theoretical development of perceived web credibility.", "title": "" }, { "docid": "ede2ac0db923cf825853486f92ed19cf", "text": "Personalized recommendation has become increasingly pervasive nowadays. Users receive recommendations on products, movies, point-of-interests and other online services. Traditional collaborative filtering techniques have demonstrated effectiveness in a wide range of recommendation tasks, but they are unable to capture complex relationships between users and items. There is a surge of interest in applying deep learning to recommender systems due to its nonlinear modeling capacity and recent success in other domains such as computer vision and speech recognition. However, prior work does not incorporate contextual information, which is usually largely available in many recommendation tasks. In this paper, we propose a deep learning based model for contextual recommendation. Specifically, the model consists of a denoising autoencoder neural network architecture augmented with a context-driven attention mechanism, referred to as Attentive Contextual Denoising Autoencoder (ACDA). The attention mechanism is utilized to encode the contextual attributes into the hidden representation of the user's preference, which associates personalized context with each user's preference to provide recommendation targeted to that specific user. 
Experiments conducted on multiple real-world datasets from Meetup and Movielens on event and movie recommendations demonstrate the effectiveness of the proposed model over the state-of-the-art recommenders.", "title": "" }, { "docid": "4636e3ade7c3bdc73ca29f9e74ec870c", "text": "For many organizations, Information Technology (IT) enabled business initiatives and IT infrastructure constitute major investments that, if not managed properly, may impair rather than enhance the organization's competitive position. Especially since the advent of Sarbanes–Oxley (SOX), both management and IT professionals are concerned with design, implementation, and assessment of IT governance strategies to ensure that technology truly serves the needs of the business. Via an in-depth study within one organisation, this research explores the factors influencing IT governance structures, processes, and outcome metrics. Interview responses to open-ended questions indicated that more effective IT governance performance outcomes are associated with a shared understanding of business and IT objectives; active involvement of IT steering committees; a balance of business and IT representatives in IT decisions; and comprehensive and well-communicated IT strategies and policies. IT governance also plays a prominent role in fostering project success and delivering business value. © 2007 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "54914127480f75f5faa03dabcd5fc8f9", "text": "The Linked Hypernyms Dataset (LHD) provides entities described by Dutch, English and German Wikipedia articles with types in the DBpedia namespace. The types are extracted from the first sentences of Wikipedia articles using Hearst pattern matching over part-of-speech annotated text and disambiguated to DBpedia concepts. The dataset covers 1.3 million RDF type triples from English Wikipedia, out of which 1 million RDF type triples were found not to overlap with DBpedia, and 0.4 million with YAGO2s. There are about 770 thousand German and 650 thousand Dutch Wikipedia entities assigned a novel type, which exceeds the number of entities in the localized DBpedia for the respective language. RDF type triples from the German dataset have been incorporated to the German DBpedia. Quality assessment was performed altogether based on 16.500 human ratings and annotations. For the English dataset, the average accuracy is 0.86, for German 0.77 and for Dutch 0.88. The accuracy of raw plain text hypernyms exceeds 0.90 for all languages. The LHD release described and evaluated in this article targets DBpedia 3.8, LHD version for the DBpedia 3.9 containing approximately 4.5 million RDF type triples is also available.", "title": "" } ]
scidocsrr
8c6b6386f8b2cbfc311c13829270f216
Investigating enterprise systems adoption: uncertainty avoidance, intrinsic motivation, and the technology acceptance model
[ { "docid": "4506bc1be6e7b42abc34d79dc426688a", "text": "The growing interest in Structured Equation Modeling (SEM) techniques and recognition of their importance in IS research suggests the need to compare and contrast different types of SEM techniques so that research designs can be selected appropriately. After assessing the extent to which these techniques are currently being used in IS research, the article presents a running example which analyzes the same dataset via three very different statistical techniques. It then compares two classes of SEM: covariance-based SEM and partial-least-squaresbased SEM. Finally, the article discusses linear regression models and offers guidelines as to when SEM techniques and when regression techniques should be used. The article concludes with heuristics and rule of thumb thresholds to guide practice, and a discussion of the extent to which practice is in accord with these guidelines.", "title": "" }, { "docid": "48dd3e8e071e7dd580ea42b528ee9427", "text": "Information systems (IS) implementation is costly and has a relatively low success rate. Since the seventies, IS research has contributed to a better understanding of this process and its outcomes. The early efforts concentrated on the identification of factors that facilitated IS use. This produced a long list of items that proved to be of little practical value. It became obvious that, for practical reasons, the factors had to be grouped into a model in a way that would facilitate analysis of IS use. In 1985, Fred Davis suggested the technology acceptance model (TAM). It examines the mediating role of perceived ease of use and perceived usefulness in their relation between systems characteristics (external variables) and the probability of system use (an indicator of system success). More recently, Davis proposed a new version of his model: TAM2. It includes subjective norms, and was tested with longitudinal research designs. Overall the two explain about 40% of system’s use. Analysis of empirical research using TAM shows that results are not totally consistent or clear. This suggests that significant factors are not included in the models. We conclude that TAM is a useful model, but has to be integrated into a broader one which would include variables related to both human and social change processes, and to the adoption of the innovation model. # 2002 Elsevier Science B.V. All rights reserved.", "title": "" } ]
[ { "docid": "3c014a3e17f6d200a132e31b51ad7fad", "text": "This paper studies a fault tolerant control strategy for a four wheel skid steering mobile robot (SSMR). Through this work the fault diagnosis procedure is accomplished using structural analysis technique while fault accommodation is based on a Recursive Least Squares (RLS) approximation. The goal is to detect faults as early as possible and recalculate command inputs in order to achieve fault tolerance, which means that despites the faults occurrences the system is able to recover its original task with the same or degraded performance. Fault tolerance can be considered that it is constituted by two basic tasks, fault diagnosis and control redesign. In our research using the diagnosis approach presented in our previous work we addressed mainly to the second task proposing a framework for fault tolerant control, which allows retaining acceptable performance under systems faults. In order to prove the efficacy of the proposed method, an experimental procedure was carried out using a Pioneer 3-AT mobile robot.", "title": "" }, { "docid": "3b09ca926dc51289d96935ec69aa70a8", "text": "It has been argued that clinical applications of advanced technology may hold promise for addressing impairments associated with autism spectrum disorders. This pilot feasibility study evaluated the application of a novel adaptive robot-mediated system capable of both administering and automatically adjusting joint attention prompts to a small group of preschool children with autism spectrum disorders (n = 6) and a control group (n = 6). Children in both groups spent more time looking at the humanoid robot and were able to achieve a high level of accuracy across trials. However, across groups, children required higher levels of prompting to successfully orient within robot-administered trials. The results highlight both the potential benefits of closed-loop adaptive robotic systems as well as current limitations of existing humanoid-robotic platforms.", "title": "" }, { "docid": "93b87e8dde0de0c1b198f6a073858d80", "text": "The current project is an initial attempt at validating the Virtual Reality Cognitive Performance Assessment Test (VRCPAT), a virtual environment-based measure of learning and memory. To examine convergent and discriminant validity, a multitrait-multimethod matrix was used in which we hypothesized that the VRCPAT's total learning and memory scores would correlate with other neuropsychological measures involving learning and memory but not with measures involving potential confounds (i.e., executive functions; attention; processing speed; and verbal fluency). Using a sequential hierarchical strategy, each stage of test development did not proceed until specified criteria were met. The 15-minute VRCPAT battery and a 1.5-hour in-person neuropsychological assessment were conducted with a sample of 30 healthy adults, between the ages of 21 and 36, that included equivalent distributions of men and women from ethnically diverse populations. Results supported both convergent and discriminant validity. That is, findings suggest that the VRCPAT measures a capacity that is (a) consistent with that assessed by traditional paper-and-pencil measures involving learning and memory and (b) inconsistent with that assessed by traditional paper-and-pencil measures assessing neurocognitive domains traditionally assumed to be other than learning and memory. 
We conclude that the VRCPAT is a valid test that provides a unique opportunity to reliably and efficiently study memory function within an ecologically valid environment.", "title": "" }, { "docid": "5871d4c7eff6523e129e467ccc92ab36", "text": "The exquisite mechanical functionality and versatility of the human hand emerges from complex neuro-musculo-skeletal interactions that are not completely understood. I have found it useful to work within a theoretical/experimental paradigm that outlines the fundamental neuro-musculo-skeletal components and their interactions. In this integrative paradigm, the laws of mechanics, the specifications of the manipulation task, and the sensorimotor signals define the interactions among hand anatomy, the nervous system, and manipulation function. Thus, our collaborative research activities emphasize a firm grounding in the mechanics of finger function, insistence on anatomical detail, and meticulous characterization of muscle activity. This overview of our work on precision pinch (i.e., the ability to produce and control fingertip forces) presents some of our findings around three Research Themes: Mechanics-based quantification of manipulation ability; Anatomically realistic musculoskeletal finger models; and Neural control of finger muscles. I conclude that (i) driving the fingers to some limit of sensorimotor performance is instrumental to elucidating motor control strategies; (ii) that the cross-over of tendons from flexors to extensors in the extensor mechanism is needed to produce force in every direction, and (iii) the anatomical routing of multiarticular muscles makes co-contraction unavoidable for many tasks. Moreover, creating realistic and clinically useful finger models still requires developing new computational means to simulate the viscoelastic tendinous networks of the extensor mechanism, and the muscle-bone-ligament interactions in complex articulations. Building upon this neuromuscular biomechanics paradigm is of immense clinical relevance: it will be instrumental to the development of clinical treatments to preserve and restore manual ability in people suffering from neurological and orthopedic conditions. This understanding will also advance the design and control of robotic hands whose performance lags far behind that of their biological counterparts.", "title": "" }, { "docid": "bae76f4857e39619f975f3db687d6223", "text": "Athletes in any sports can greatly benefit from feedback systems for improving the quality of their training. In this paper, we present a golf swing training system which incorporates wearable motion sensors to obtain inertial information and provide feedback on the quality of movements. The sensors are placed on a golf club and athlete’s body at positions which capture the unique movements of a golf swing. We introduce a quantitative model which takes into consideration signal processing techniques on the collected data and quantifies the correctness of the performed actions. We evaluate the effectiveness of our framework on data obtained from four subjects and discuss ongoing research.", "title": "" }, { "docid": "41df967b371c9e649a551706c87025a0", "text": "Quantum computers could be used to solve certain problems exponentially faster than classical computers, but are challenging to build because of their increased susceptibility to errors. However, it is possible to detect and correct errors without destroying coherence, by using quantum error correcting codes. 
The simplest of these are three-quantum-bit (three-qubit) codes, which map a one-qubit state to an entangled three-qubit state; they can correct any single phase-flip or bit-flip error on one of the three qubits, depending on the code used. Here we demonstrate such phase- and bit-flip error correcting codes in a superconducting circuit. We encode a quantum state, induce errors on the qubits and decode the error syndrome—a quantum state indicating which error has occurred—by reversing the encoding process. This syndrome is then used as the input to a three-qubit gate that corrects the primary qubit if it was flipped. As the code can recover from a single error on any qubit, the fidelity of this process should decrease only quadratically with error probability. We implement the correcting three-qubit gate (known as a conditional-conditional NOT, or Toffoli, gate) in 63 nanoseconds, using an interaction with the third excited state of a single qubit. We find 85 ± 1 per cent fidelity to the expected classical action of this gate, and 78 ± 1 per cent fidelity to the ideal quantum process matrix. Using this gate, we perform a single pass of both quantum bit- and phase-flip error correction and demonstrate the predicted first-order insensitivity to errors. Concatenation of these two codes in a nine-qubit device would correct arbitrary single-qubit errors. In combination with recent advances in superconducting qubit coherence times, this could lead to scalable quantum technology.", "title": "" }, { "docid": "bb460a219e87f2732a47ad7cc5329069", "text": "This paper studies the complexity of some approximate solutions of linear programming problems with real coefficients.", "title": "" }, { "docid": "13630f611d3390b91b29ded67d4c81b1", "text": "With better natural language semantic representations, computers can do more applications more efficiently as a result of better understanding of natural text. However, no single semantic representation at this time fulfills all requirements needed for a satisfactory representation. Logic-based representations like first-order logic capture many of the linguistic phenomena using logical constructs, and they come with standardized inference mechanisms, but standard first-order logic fails to capture the “graded” aspect of meaning in languages. Distributional models use contextual similarity to predict the “graded” semantic similarity of words and phrases but they do not adequately capture logical structure. In addition, there are a few recent attempts to combine both representations either on the logic side (still, not a graded representation), or in the distribution side(not full logic). We propose using probabilistic logic to represent natural language semantics combining the expressivity and the automated inference of logic, and the gradedness of distributional representations. We evaluate this semantic representation on two tasks, Recognizing Textual Entailment (RTE) and Semantic Textual Similarity (STS). Doing RTE and STS better is an indication of a better semantic understanding. Our system has three main components, 1. Parsing and Task Representation, 2. Knowledge Base Construction, and 3. Inference. The input natural sentences of the RTE/STS task are mapped to logical form using Boxer which is a rule based system built on top of a CCG parser, then they are used to formulate the RTE/STS problem in probabilistic logic. 
Then, a knowledge base is represented as weighted inference rules collected from different sources like WordNet and on-the-fly lexical rules from distributional semantics. An advantage of using probabilistic logic is that more rules can be added from more resources easily by mapping them to logical rules and weighting them appropriately. The last component is the inference, where we solve the probabilistic logic inference problem using an appropriate probabilistic logic tool like Markov Logic Network (MLN), or Probabilistic Soft Logic (PSL). We show how to solve the inference problems in MLNs efficiently for RTE using a modified closed-world assumption and a new inference algorithm, and how to adapt MLNs and PSL for STS by relaxing conjunctions. Experiments show that our semantic representation can handle RTE and STS reasonably well. For the future work, our short-term goals are 1. better RTE task representation and finite domain handling, 2. adding more inference rules, precompiled and on-the-fly, 3. generalizing the modified closed-world assumption, 4. enhancing our inference algorithm for MLNs, and 5. adding a weight learning step to better adapt the weights. On the longer-term, we would like to apply our semantic representation to the question answering task, support generalized quantifiers, contextualize WordNet rules we use, apply our semantic representation to languages other than English, and implement a probabilistic logic Inference Inspector that can visualize the proof structure.", "title": "" }, { "docid": "bb1554d174df80e7db20e943b4a69249", "text": "Any static, global analysis of the expression and data relationships in a program requires a knowledge of the control flow of the program. Since one of the primary reasons for doing such a global analysis in a compiler is to produce optimized programs, control flow analysis has been embedded in many compilers and has been described in several papers. An early paper by Prosser [5] described the use of Boolean matrices (or, more particularly, connectivity matrices) in flow analysis. The use of “dominance” relationships in flow analysis was first introduced by Prosser and much expanded by Lowry and Medlock [6]. References [6,8,9] describe compilers which use various forms of control flow analysis for optimization. 
Some recent developments in the area are reported in [4] and in [7].\n The underlying motivation in all the different types of control flow analysis is the need to codify the flow relationships in the program. The codification may be in connectivity matrices, in predecessor-successor tables, in dominance lists, etc. Whatever the form, the purpose is to facilitate determining what the flow relationships are; in other words to facilitate answering such questions as: is this an inner loop?, if an expression is removed from the loop where can it be correctly and profitably placed?, which variable definitions can affect this use?\n In this paper the basic control flow relationships are expressed in a directed graph. Various graph constructs are then found and shown to codify interesting global relationships.", "title": "" }, { "docid": "0b0e389556e7c132690d7f2a706664d1", "text": "E-government challenges are well researched in literature and well known by governments. However, being aware of the challenges of e-government implementation is not sufficient, as challenges may interrelate and impact each other. Therefore, a systematic analysis of the challenges and their interrelationships contributes to providing a better understanding of how to tackle the challenges and how to develop sustainable solutions. This paper aims to investigate existing challenges of e-government and their interdependencies in Tanzania. The collection of e-government challenges in Tanzania is implemented through interviews, desk research and observations of actors in their job. In total, 32 challenges are identified. The subsequent PESTEL analysis studied interrelationships of challenges and identified 34 interrelationships. The analysis of the interrelationships informs policy decision makers of issues to focus on along the planning of successfully implementing the existing e-government strategy in Tanzania. The study also identified future research needs in evaluating the findings through quantitative analysis.", "title": "" }, { "docid": "1ce3a65fe2f5c102449c579031408aa3", "text": "Mobile or SMS spam is a real and growing problem primarily due to the availability of very cheap bulk pre-pay SMS packages and the fact that SMS engenders higher response rates as it is a trusted and personal service. SMS spam filtering is a relatively new task which inherits many issues and solutions from email spam filtering. However it poses its own specific challenges. This paper motivates work on filtering SMS spam and reviews recent developments in SMS spam filtering. The paper also discusses the issues with data collection and availability for furthering research in this area, analyses a large corpus of SMS spam, and provides some initial benchmark results.", "title": "" }, { "docid": "3bb6bfbb139ab9b488c4106c9d6cc3bd", "text": "BACKGROUND\nRecent evidence demonstrates growth in both the quality and quantity of evidence in physical therapy. Much of this work has focused on randomized controlled trials and systematic reviews.\n\n\nOBJECTIVE\nThe purpose of this study was to conduct a comprehensive bibliometric assessment of Physical Therapy (PTJ) over the past 30 years to examine trends for all types of studies.\n\n\nDESIGN\nThis was a bibliometric analysis.\n\n\nMETHODS\nAll manuscripts published in PTJ from 1980 to 2009 were reviewed. Research reports, topical reviews (including perspectives and nonsystematic reviews), and case reports were included. 
Articles were coded based on type, participant characteristics, physical therapy focus, research design, purpose of article, clinical condition, and intervention. Coding was performed by 2 independent reviewers, and author, institution, and citation information was obtained using bibliometric software.\n\n\nRESULTS\nOf the 4,385 publications identified, 2,519 were included in this analysis. Of these, 67.1% were research reports, 23.0% were topical reviews, and 9.9% were case reports. Percentage increases over the past 30 years were observed for research reports, inclusion of \"symptomatic\" participants (defined as humans with a current symptomatic condition), systematic reviews, qualitative studies, prospective studies, and articles focused on prognosis, diagnosis, or metric topics. Percentage decreases were observed for topical reviews, inclusion of only \"asymptomatic\" participants (defined as humans without a current symptomatic condition), education articles, nonsystematic reviews, and articles focused on anatomy/physiology.\n\n\nLIMITATIONS\nQuality assessment of articles was not performed.\n\n\nCONCLUSIONS\nThese trends provide an indirect indication of the evolution of the physical therapy profession through the publication record in PTJ. Collectively, the data indicated an increased emphasis on publishing articles consistent with evidence-based practice and clinically based research. Bibliometric analyses indicated the most frequent citations were metric studies and references in PTJ were from journals from a variety of disciplines.", "title": "" }, { "docid": "b83641785927e3788479d67af9804fb7", "text": "In recent years, an increasing popularity of deep learning model for intelligent condition monitoring and diagnosis as well as prognostics used for mechanical systems and structures has been observed. In the previous studies, however, a major assumption accepted by default, is that the training and testing data are taking from same feature distribution. Unfortunately, this assumption is mostly invalid in real application, resulting in a certain lack of applicability for the traditional diagnosis approaches. Inspired by the idea of transfer learning that leverages the knowledge learnt from rich labeled data in source domain to facilitate diagnosing a new but similar target task, a new intelligent fault diagnosis framework, i.e., deep transfer network (DTN), which generalizes deep learning model to domain adaptation scenario, is proposed in this paper. By extending the marginal distribution adaptation (MDA) to joint distribution adaptation (JDA), the proposed framework can exploit the discrimination structures associated with the labeled data in source domain to adapt the conditional distribution of unlabeled target data, and thus guarantee a more accurate distribution matching. Extensive empirical evaluations on three fault datasets validate the applicability and practicability of DTN, while achieving many state-of-the-art transfer results in terms of diverse operating conditions, fault severities and fault types.", "title": "" }, { "docid": "036ac7fc6886f1f7d1734be18a11951f", "text": "Often the challenge associated with tasks like fraud and spam detection is the lack of all likely patterns needed to train suitable supervised learning models. This problem accentuates when the fraudulent patterns are not only scarce, they also change over time. Change in fraudulent pattern is because fraudsters continue to innovate novel ways to circumvent measures put in place to prevent fraud. 
Limited data and continuously changing patterns make learning significantly difficult. We hypothesize that good behavior does not change with time and data points representing good behavior have consistent spatial signature under different groupings. Based on this hypothesis we are proposing an approach that detects outliers in large data sets by assigning a consistency score to each data point using an ensemble of clustering methods. Our main contribution is proposing a novel method that can detect outliers in large datasets and is robust to changing patterns. We also argue that area under the ROC curve, although a commonly used metric to evaluate outlier detection methods, is not the right metric. Since outlier detection problems have a skewed distribution of classes, precision-recall curves are better suited because precision compares false positives to true positives (outliers) rather than true negatives (inliers) and therefore is not affected by the problem of class imbalance. We show empirically that area under the precision-recall curve is better than ROC as an evaluation metric. The proposed approach is tested on the modified version of the Landsat satellite dataset, the modified version of the ann-thyroid dataset and a large real world credit card fraud detection dataset available through Kaggle where we show significant improvement over the baseline methods.", "title": "" }, { "docid": "d2f44693ea525c3fd99cb687b383022a", "text": "According to the shortcomings of the existing ultrasonic wind velocity measurement device, for instance, complexity of circuit and difficulty of signal processing, a new ultrasonic wind velocity measurement is put forward based on phase discrimination and a new sensor configuration as well. Thus, a system model is built. Firstly, an equilateral triangle should be constituted by an ultrasonic emission sensor and two ultrasonic receiving sensors. Then by high-precision phase discrimination circuit, the lag between the ultrasonic which is received by two ultrasonic receiving sensors during the traveling time in the upwind and downwind is converted to the phase difference. After that, the wind velocity is measured. Besides, a mathematical model is established among the wind velocity, the ultrasonic velocity, and the structural parameters with ambient temperature. The factors which influence the precision of the wind velocity measurement are analyzed and the solutions are given as well. The experimental results show that in view of the phase discrimination technology, the system has a good numerical stability and the resolution is one order magnitude better than that of the cup anemometer.", "title": "" }, { "docid": "8b512d57c7c96c82855927e2f222ec58", "text": "The current Internet of Things (IoT) has made it very convenient to obtain information about a product from a single data node. However, in many industrial applications, information about a single product can be distributed in multiple different data nodes, and aggregating the information from these nodes has become a common task. In this paper, we provide a distributed service-oriented architecture for this task. In this architecture, each manufacturer provides service for their own products, and data nodes keep the information collected by themselves. Semantic technologies are adopted to handle problems of heterogeneity and serve as the foundation to support different applications. Finally, as an example, we illustrate the use of this architecture to solve the problem of product tracing. 
© 2016 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "1ca64eac6aaa34e114f5fb7d20b986b4", "text": "Circumstances that led to the development of the Theory: The SCT has its origins in the discipline of psychology, with its early foundation being laid by behavioral and social psychologists. The SLT evolved under the umbrella of behaviorism, which is a cluster of psychological theories intended to explain why people and animals behave the way that they do. Behaviorism, introduced by John Watson in 1913, took an extremely mechanistic approach to understanding human behavior. According to Watson, behavior could be explained in terms of observable acts that could be described by stimulus-response sequences (Crosbie-Brunett and Lewis, 1993; Thomas, 1990). Also central to behaviorist study was the notion that contiguity between stimulus and response determined the likelihood that learning would occur.", "title": "" }, { "docid": "fc0470776583df8b25114abc8709b045", "text": "Certified Registered Nurse Anesthetists (CRNAs) have been providing anesthesia care in the United States (US) for nearly 150 years. Historically, anesthesia care for surgical patients was mainly provided by trained nurses under the supervision of surgeons until the establishment of anesthesiology as a medical specialty in the US. Currently, all 50 US states utilize CRNAs to perform various kinds of anesthesia care, either under the medical supervision of anesthesiologists in most states, or independently without medical supervision in 16 states; the latter has become an on-going source of conflict between anesthesiologists and CRNAs. Understanding the history and current conditions of anesthesia practice in the US is crucial for countries in which the shortage of anesthesia care providers has become a national issue.", "title": "" }, { "docid": "38f386546b5f866d45ff243599bd8305", "text": "During the last two decades, Structural Equation Modeling (SEM) has evolved from a statistical technique for insiders to an established valuable tool for a broad scientific public. This class of analyses has much to offer, but at what price? This paper provides an overview on SEM, its underlying ideas, potential applications and current software. Furthermore, it discusses avoidable pitfalls as well as built-in drawbacks in order to lend support to researchers in deciding whether or not SEM should be integrated into their research tools. Commented findings of an internet survey give a “State of the Union Address” on SEM users and usage. Which kinds of models are preferred? Which software is favoured in current psychological research? In order to assist the reader on his first steps, a SEM first-aid kit is included. Typical problems and possible solutions are addressed, helping the reader to get the support he needs. Hence, the paper may assist the novice on the first steps and self-critically reminds the advanced reader of the limitations of Structural Equation Modeling", "title": "" }, { "docid": "52fe696242f399d830d0a675bd766128", "text": "Humans are adept at inferring the mental states underlying other agents' actions, such as goals, beliefs, desires, emotions and other thoughts. We propose a computational framework based on Bayesian inverse planning for modeling human action understanding. The framework represents an intuitive theory of intentional agents' behavior based on the principle of rationality: the expectation that agents will plan approximately rationally to achieve their goals, given their beliefs about the world. 
The mental states that caused an agent's behavior are inferred by inverting this model of rational planning using Bayesian inference, integrating the likelihood of the observed actions with the prior over mental states. This approach formalizes in precise probabilistic terms the essence of previous qualitative approaches to action understanding based on an \"intentional stance\" [Dennett, D. C. (1987). The intentional stance. Cambridge, MA: MIT Press] or a \"teleological stance\" [Gergely, G., Nádasdy, Z., Csibra, G., & Biró, S. (1995). Taking the intentional stance at 12 months of age. Cognition, 56, 165-193]. In three psychophysical experiments using animated stimuli of agents moving in simple mazes, we assess how well different inverse planning models based on different goal priors can predict human goal inferences. The results provide quantitative evidence for an approximately rational inference mechanism in human goal inference within our simplified stimulus paradigm, and for the flexible nature of goal representations that human observers can adopt. We discuss the implications of our experimental results for human action understanding in real-world contexts, and suggest how our framework might be extended to capture other kinds of mental state inferences, such as inferences about beliefs, or inferring whether an entity is an intentional agent.", "title": "" } ]
scidocsrr
6de948ed765e46952ef7c6974203626c
Convolution neural network based syntactic and semantic aware paraphrase identification
[ { "docid": "84c95e15ddff06200624822cc12fa51f", "text": "A growing body of research has recently been conducted on semantic textual similarity using a variety of neural network models. While recent research focuses on word-based representation for phrases, sentences and even paragraphs, this study considers an alternative approach based on character n-grams. We generate embeddings for character n-grams using a continuous-bag-of-n-grams neural network model. Three different sentence representations based on n-gram embeddings are considered. Results are reported for experiments with bigram, trigram and 4-gram embeddings on the STS Core dataset for SemEval-2016 Task 1.", "title": "" } ]
[ { "docid": "cb947a7b78158a804582d8a7036f9116", "text": "Downloading the book in this website lists can give you more advantages. It will show you the best book collections and completed collections. So many books can be found in this website. So, this is not only this significant aspects of client centered therapy. However, this book is referred to read because it is an inspiring book to give you more chance to get experiences and also thoughts. This is simple, read the soft file of the book and you get it.", "title": "" }, { "docid": "5dac4a5d6adcb75742344268bb717e11", "text": "System logs are widely used in various tasks of software system management. It is crucial to avoid logging too little or too much. To achieve so, developers need to make informed decisions on where to log and what to log in their logging practices during development. However, there exists no work on studying such logging practices in industry or helping developers make informed decisions. To fill this significant gap, in this paper, we systematically study the logging practices of developers in industry, with focus on where developers log. We obtain six valuable findings by conducting source code analysis on two large industrial systems (2.5M and 10.4M LOC, respectively) at Microsoft. We further validate these findings via a questionnaire survey with 54 experienced developers in Microsoft. In addition, our study demonstrates the high accuracy of up to 90% F-Score in predicting where to log.", "title": "" }, { "docid": "4be9ae4bc6fb01e78d550bedf199d0b0", "text": "Protein timing is a popular dietary strategy designed to optimize the adaptive response to exercise. The strategy involves consuming protein in and around a training session in an effort to facilitate muscular repair and remodeling, and thereby enhance post-exercise strength- and hypertrophy-related adaptations. Despite the apparent biological plausibility of the strategy, however, the effectiveness of protein timing in chronic training studies has been decidedly mixed. The purpose of this paper therefore was to conduct a multi-level meta-regression of randomized controlled trials to determine whether protein timing is a viable strategy for enhancing post-exercise muscular adaptations. The strength analysis comprised 478 subjects and 96 ESs, nested within 41 treatment or control groups and 20 studies. The hypertrophy analysis comprised 525 subjects and 132 ESs, nested with 47 treatment or control groups and 23 studies. A simple pooled analysis of protein timing without controlling for covariates showed a small to moderate effect on muscle hypertrophy with no significant effect found on muscle strength. In the full meta-regression model controlling for all covariates, however, no significant differences were found between treatment and control for strength or hypertrophy. The reduced model was not significantly different from the full model for either strength or hypertrophy. With respect to hypertrophy, total protein intake was the strongest predictor of ES magnitude. These results refute the commonly held belief that the timing of protein intake in and around a training session is critical to muscular adaptations and indicate that consuming adequate protein in combination with resistance exercise is the key factor for maximizing muscle protein accretion.", "title": "" }, { "docid": "bc8c769b625017e2f8522c71dcfe0660", "text": "Quantitative models have proved valuable in predicting consumer behavior in the offline world. 
These same techniques can be adapted to predict online actions. The use of diffusion models provides a firm foundation to implement and forecast viral marketing strategies. Choice models can predict purchases at online stores and shopbots. Hierarchical Bayesian models provide a framework to implement versioning and price segmentation strategies. Bayesian updating is a natural tool for profiling users with clickstream data. I illustrate these four modeling techniques and discuss their potential for solving Internet marketing problems.", "title": "" }, { "docid": "b5967a8dc6a8349b2f5c1d3070369d3c", "text": "Hereditary xerocytosis is thought to be a rare genetic condition characterized by red blood cell (RBC) dehydration with mild hemolysis. RBC dehydration is linked to reduced Plasmodium infection in vitro; however, the role of RBC dehydration in protection against malaria in vivo is unknown. Most cases of hereditary xerocytosis are associated with gain-of-function mutations in PIEZO1, a mechanically activated ion channel. We engineered a mouse model of hereditary xerocytosis and show that Plasmodium infection fails to cause experimental cerebral malaria in these mice due to the action of Piezo1 in RBCs and in T cells. Remarkably, we identified a novel human gain-of-function PIEZO1 allele, E756del, present in a third of the African population. RBCs from individuals carrying this allele are dehydrated and display reduced Plasmodium infection in vitro. The existence of a gain-of-function PIEZO1 at such high frequencies is surprising and suggests an association with malaria resistance.", "title": "" }, { "docid": "6168a7860ffb49b28e2055e30314b120", "text": "A forensic investigation of digital evidence is commonly employed as a post-event response to a serious information security incident. In fact, there are many circumstances where an organisation may benefit from an ability to gather and preserve digital evidence before an incident occurs. Forensic readiness is defined as the ability of an organisation to maximise its potential to use digital evidence whilst minimising the costs of an investigation. The costs and benefits of such an approach are outlined. Preparation to use digital evidence may involve enhanced system and staff monitoring, technical, physical and procedural means to secure data to evidential standards of admissibility, processes and procedures to ensure that staff recognise the importance and legal sensitivities of evidence, and appropriate legal advice and interfacing with law enforcement. This paper proposes a ten step process for an organisation to implement forensic readiness.", "title": "" }, { "docid": "8d07f52f154f81ce9dedd7c5d7e3182d", "text": "We present a 3D face reconstruction system that takes as input either one single view or several different views. Given a facial image, we first classify the facial pose into one of five predefined poses, then detect two anchor points that are then used to detect a set of predefined facial landmarks. Based on these initial steps, for a single view we apply a warping process using a generic 3D face model to build a 3D face. For multiple views, we apply sparse bundle adjustment to reconstruct 3D landmarks which are used to deform the generic 3D face model. 
Experimental results on the Color FERET and CMU multi-PIE databases confirm our framework is effective in creating realistic 3D face models that can be used in many computer vision applications, such as 3D face recognition at a distance.", "title": "" }, { "docid": "d25a34b3208ee28f9cdcddb9adf46eb4", "text": "1 Umeå University, Department of Computing Science, SE-901 87 Umeå, Sweden, {jubo,thomasj,marie}@cs.umu.se Abstract The transition to object-oriented programming is more than just a matter of programming language. Traditional syllabi fail to teach students the “big picture” and students have difficulties taking advantage of object-oriented concepts. In this paper we present a holistic approach to a CS1 course in Java favouring general object-oriented concepts over the syntactical details of the language. We present goals for designing such a course and a case study showing interesting results.", "title": "" }, { "docid": "d0c85b824d7d3491f019f47951d1badd", "text": "A nine-year-old female Rottweiler with a history of repeated gastrointestinal ulcerations and three previous surgical interventions related to gastrointestinal ulceration presented with symptoms of anorexia and intermittent vomiting. Benign gastric outflow obstruction was diagnosed in the proximal duodenal area. The initial surgical plan was to perform a pylorectomy with gastroduodenostomy (Billroth I procedure), but owing to substantial scar tissue and adhesions in the area a palliative gastrojejunostomy was performed. This procedure provided a bypass for the gastric contents into the proximal jejunum via the new stoma, yet still allowed bile and pancreatic secretions to flow normally via the patent duodenum. The gastrojejunostomy technique was successful in the surgical management of this case, which involved proximal duodenal stricture in the absence of neoplasia. Regular telephonic followup over the next 12 months confirmed that the patient was doing well.", "title": "" }, { "docid": "49fa06dc2a6ac105a2a4429eefde5efa", "text": "Now, we come to offer you the right catalogues of book to open. social media marketing in tourism and hospitality is one of the literary work in this world in suitable to be reading material. That's not only this book gives reference, but also it will show you the amazing benefits of reading a book. Developing your countless minds is needed; moreover you are kind of people with great curiosity. So, the book is very appropriate for you.", "title": "" }, { "docid": "f9b2e1ad25456f1d6e9e54c922e491fd", "text": "The paper describes the steps of the 3D modeling of the Church of Saint Anthony Abbot in San Daniele del Friuli (I), from the laser scanning and photogrammetric integrated surveying to the final VRML/X3D photorealistic model. This church keeps the most beautiful and harmonious cycle of Renaissance frescoes of the region, painted by Pellegrino da San Daniele: the virtual model is intended also as an instrument to help visitors or studious to better understand the narration meaning hidden in the fresco episodes. For the inner and outer surveying of the church, the Riegl Z390I system integrated with a Nikon D200 photogrammetric camera was employed: 18 different point clouds for about 33 millions of points and 163 digital images were automatically collected. Data processing carried out by RiSCAN PRO® software (Riegl) allowed the scan registration, the 3D surface reconstruction and the image texturing with satisfactory results. 
Particular attention has been given to the 3D surface of the interior of the church, before in its construction by partial Delaunay triangulations and later in its smoothing and decimation for an efficient management of a 3D model with “few” (hundreds of thousands!) triangles but anyway preserving a high geometric detail. New images have been later acquired with the metric camera without laser scanning system, to substitute the original ones having illumination troubles: these new images have been externally oriented by natural points on the surface and then textured onto it. The 3D model and the image textures have been imported in the VRML/X3D space, where six thematic tours will be available. The model is structured in different LoD (Levels of Detail) for the model geometry and textures resolution, and each frescoes scene and figure is linked by means of an Anchor with the corresponding card of the web Regional Information System of the Cultural Heritage.", "title": "" }, { "docid": "1566fa0dfb11e6960e97f4f153d1b8de", "text": "This article studies the problem of latent community topic analysis in text-associated graphs. With the development of social media, a lot of user-generated content is available with user networks. Along with rich information in networks, user graphs can be extended with text information associated with nodes. Topic modeling is a classic problem in text mining and it is interesting to discover the latent topics in text-associated graphs. Different from traditional topic modeling methods considering links, we incorporate community discovery into topic analysis in text-associated graphs to guarantee the topical coherence in the communities so that users in the same community are closely linked to each other and share common latent topics. We handle topic modeling and community discovery in the same framework. In our model we separate the concepts of community and topic, so one community can correspond to multiple topics and multiple communities can share the same topic. We compare different methods and perform extensive experiments on two real datasets. The results confirm our hypothesis that topics could help understand community structure, while community structure could help model topics.", "title": "" }, { "docid": "3ccdf8fcb373e008751d1fe2628ec1b7", "text": "Purpose – The purpose of this paper is to better understand the influence of total quality management (TQM) practices on incremental and radical innovation, examining the role of diverse cultural change as a mediator, particularly in firms where ever-increasing competitive pressure demands a combination of quality and innovation. Design/methodology/approach – From previous research on the influence of TQM practices on innovation, the paper proposes a model which is tested through a survey carried out on a sample of 72 Spanish firms that have been drastically hit by competition from Asian companies, achieving a 51.42 percent valid return rate. Findings – None of the sets of TQM practices directly affects radical innovation, while all of them have a significant and positive relationship with incremental innovation. However, when the paper introduces cultural change as a mediating factor, the model’s goodness of fit improves substantially, and all the relations are significant. Research limitations/implications – The results reveal the power of cultural change to connect the diversity of the TQM practices and incremental and radical innovation. 
Further research is needed for a more comprehensive understanding of the role of cultural change in these relationships and to test the model in a longitudinal study. Practical implications – Managers can use the potential inherent in TQM to stimulate a paradoxical cultural context that favours innovation. This is especially relevant for enhancing radical innovation. Social implications – Given the extent to which TQM has been applied over the last 20 years, the social impact of this study is relevant, particularly in the current environment of economic crisis which calls for an increase in efficiency and innovation, adaptation and change. Originality/value – The paper introduces a multidimensional analysis of TQM and a broad perspective of innovation. The paper also develops an original definition of cultural change made up of apparently contradicting values, including exploitation and exploration, and introduces it as a mediating variable in the TQM-innovation model.", "title": "" }, { "docid": "fd8ac9c61b2146a27465e96b4f0eb5f6", "text": "In this paper performance of LQR and ANFIS control for a Double Inverted Pendulum system is compared. The double inverted pendulum system is highly unstable and nonlinear. Mathematical model is presented by linearizing the system about its vertical position. The analysis of the system is performed for its stability, controllability and observability. Furthermore, the LQR controller and ANFIS controller based on the state variable fusion is proposed for the control of the double inverted pendulum system and simulation results show that ANFIS controller has better tracking performance and disturbance rejecting performance as compared to LQR controller.", "title": "" }, { "docid": "c3bd3031eeac1c223078094a8d7a2eb0", "text": "Ambient-assisted living (AAL) is, nowadays, an important research and development area, foreseen as an important instrument to face the demographic aging. The acceptance of the AAL paradigm is closely related to the quality of the available systems, namely in terms of intelligent functions for the user interaction. In that context, usability and accessibility are crucial issues to consider. This paper presents a systematic literature review of AAL technologies, products and services with the objective of establishing the current position regarding user interaction and how are end users involved in the AAL development and evaluation processes. For this purpose, a systematic review of the literature on AAL was undertaken. A total of 1,048 articles were analyzed, 111 of which were mainly related to user interaction and 132 of which described practical AAL systems applied in a specified context and with a well-defined aim. Those articles classified as user interaction and systems were further characterized in terms of objectives, target users, users’ involvement, usability and accessibility issues, settings to be applied, technologies used and development stages. The results show the need to improve the integration and interoperability of the existing technologies and to promote user-centric developments with a strong involvement of end users, namely in what concerns usability and accessibility issues.", "title": "" }, { "docid": "3dfee4e741b5610571dbc2734c427350", "text": "Anomaly detection in crowd scene is very important because of more concern with people safety in public place. This paper presents an approach to automatically detect abnormal behavior in crowd scene. 
For this purpose, instead of tracking every person, KLT corners are extracted as feature points to represent moving objects and tracked by optical flow technique to generate motion vectors, which are used to describe motion. We divide whole frame into small blocks, and motion pattern in each block is encoded by the distribution of motion vectors in it. Similar motion patterns are clustered into pattern model in an unsupervised way, and we classify motion pattern into normal or abnormal group according to the deviation between motion pattern and trained model. The results on abnormal events detection in real video demonstrate the effectiveness of the approach.", "title": "" }, { "docid": "334cbe19e968cb424719c7efbee9fd20", "text": "We examine the relationship between scholarly practice and participatory technologies and explore how such technologies invite and reflect the emergence of a new form of scholarship that we call Networked Participatory Scholarship: scholars’ participation in online social networks to share, reflect upon, critique, improve, validate, and otherwise develop their scholarship. We discuss emergent techno-cultural pressures that may influence higher education scholars to reconsider some of the foundational principles upon which scholarship has been established due to the limitations of a pre-digital world, and delineate how scholarship itself is changing with the emergence of certain tools, social behaviors, and cultural expectations associated with participatory technologies. 2011 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "74adf22dff08c0d914197d71fabe4938", "text": "Modeling contact in multibody simulation is a difficult problem frequently characterized by numerically brittle algorithms, long running times, and inaccurate (with respect to theory) models. We present a comprehensive evaluation of four methods for contact modeling on seven benchmark scenarios in order to quantify the performance of these methods with respect to robustness and speed. We also assess the accuracy of these methods where possible. We conclude the paper with a prescriptive description in order to guide the user of multibody simulation.", "title": "" }, { "docid": "706812fd28b79b752feb1f392ea5a2da", "text": "Authenticating fingerphoto images captured using a smartphone camera, provide a good alternate solution in place of traditional pin or pattern based approaches. There are multiple challenges associated with fingerphoto authentication such as background variations, environmental illumination, estimating finger position, and camera resolution. In this research, we propose a novel ScatNet feature based fingerphoto matching approach. Effective fingerphoto segmentation and enhancement are performed to aid the matching process and to attenuate the effect of capture variations. Further, we propose and create a publicly available smartphone fingerphoto database having three different subsets addressing the challenges of environmental illumination and background, along with their corresponding live scan fingerprints. Experimental results show improved performance across multiple challenges present in the database.", "title": "" } ]
scidocsrr
f4fa5e4ee27a20315d153a7f823c2ed0
LABOUR TURNOVER : CAUSES , CONSEQUENCES AND PREVENTION Oladele
[ { "docid": "c9972414881db682c219d69d59efa34a", "text": "“Employee turnover” as a term is widely used in business circles. Although several studies have been conducted on this topic, most of the researchers focus on the causes of employee turnover. This research looked at extent of influence of various factors on employee turnover in urban and semi urban banks. The research was aimed at achieving the following objectives: identify the key factors of employee turnover; determine the extent to which the identified factors are influencing employees’ turnover. The study is based on the responses of the employees of leading banks. A self-developed questionnaire, measured on a Likert Scale was used to collect data from respondents. Quantitative research design was used and this design was chosen because its findings are generaliseable and data objective. The reliability of the data collected is done by split half method.. The collected data were being analyzed using a program called Statistical Package for Social Science (SPSS ver.16.0 For Windows). The data analysis is carried out by calculating mean, standard deviation and linear correlation. The difference between means of variable was estimated by using t-test. The following factors have significantly influenced employee turnover in banking sector: Work Environment, Job Stress, Compensation (Salary), Employee relationship with management, Career Growth.", "title": "" } ]
[ { "docid": "e769f52b6e10ea1cf218deb8c95f4803", "text": "To facilitate the task of reading and searching information, it became necessary to find a way to reduce the size of documents without affecting the content. The solution is in Automatic text summarization system, it allows, from an input text to produce another smaller and more condensed without losing relevant data and meaning conveyed by the original text. The research works carried out on this area have experienced lately strong progress especially in English language. However, researches in Arabic text summarization are very few and are still in their beginning. In this paper we expose a literature review of recent techniques and works on automatic text summarization field research, and then we focus our discussion on some works concerning automatic text summarization in some languages. We will discuss also some of the main problems that affect the quality of automatic text summarization systems. © 2015 AESS Publications. All Rights Reserved.", "title": "" }, { "docid": "a7ab755978c9309513ac79dbd6b09763", "text": "In this paper, we propose a denoising method motivated by our previous analysis of the performance bounds for image denoising. Insights from that study are used here to derive a high-performance practical denoising algorithm. We propose a patch-based Wiener filter that exploits patch redundancy for image denoising. Our framework uses both geometrically and photometrically similar patches to estimate the different filter parameters. We describe how these parameters can be accurately estimated directly from the input noisy image. Our denoising approach, designed for near-optimal performance (in the mean-squared error sense), has a sound statistical foundation that is analyzed in detail. The performance of our approach is experimentally verified on a variety of images and noise levels. The results presented here demonstrate that our proposed method is on par or exceeding the current state of the art, both visually and quantitatively.", "title": "" }, { "docid": "24880289ca2b6c31810d28c8363473b3", "text": "Deep reinforcement learning (RL) has achieved several high profile successes in difficult decision-making problems. However, these algorithms typically require a huge amount of data before they reach reasonable performance. In fact, their performance during learning can be extremely poor. This may be acceptable for a simulator, but it severely limits the applicability of deep RL to many real-world tasks, where the agent must learn in the real environment. In this paper we study a setting where the agent may access data from previous control of the system. We present an algorithm, Deep Q-learning from Demonstrations (DQfD), that leverages small sets of demonstration data to massively accelerate the learning process even from relatively small amounts of demonstration data and is able to automatically assess the necessary ratio of demonstration data while learning thanks to a prioritized replay mechanism. DQfD works by combining temporal difference updates with supervised classification of the demonstrator’s actions. We show that DQfD has better initial performance than Prioritized Dueling Double Deep Q-Networks (PDD DQN) as it starts with better scores on the first million steps on 41 of 42 games and on average it takes PDD DQN 83 million steps to catch up to DQfD’s performance. DQfD learns to out-perform the best demonstration given in 14 of 42 games. 
In addition, DQfD leverages human demonstrations to achieve state-of-the-art results for 11 games. Finally, we show that DQfD performs better than three related algorithms for incorporating demonstration data into DQN.", "title": "" }, { "docid": "615d2f03b2ff975242e90103e98d70d3", "text": "The insurance industries consist of more than thousand companies in worldwide. And collect more than one trillions of dollars premiums in each year. When a person or entity make false insurance claims in order to obtain compensation or benefits to which they are not entitled is known as an insurance fraud. The total cost of an insurance fraud is estimated to be more than forty billions of dollars. So detection of an insurance fraud is a challenging problem for the insurance industry. The traditional approach for fraud detection is based on developing heuristics around fraud indicator. The auto/vehicle insurance fraud is the most prominent type of insurance fraud, which can be done by fake accident claim. In this paper, focusing on detecting the auto/vehicle fraud by using machine learning technique. Also, the performance will be compared by calculation of confusion matrix. This can help to calculate accuracy, precision, and recall.", "title": "" }, { "docid": "56674d44df277e40d8aef20d8eb7549f", "text": "The rapid proliferation of smartphones over the last few years has come hand in hand with an impressive growth in the number and sophistication of malicious apps targetting smartphone users. The availability of reuse-oriented development methodologies and automated malware production tools makes exceedingly easy to produce new specimens. As a result, market operators and malware analysts are increasingly overwhelmed by the amount of newly discovered samples that must be analyzed. This situation has stimulated research in intelligent instruments to automate parts of the malware analysis process. In this paper, we introduce Dendroid, a system based on text mining and information retrieval techniques for this task. Our approach is motivated by a statistical analysis of the code structures found in a dataset of Android OS malware families, which reveals some parallelisms with classical problems in those domains. We then adapt the standard Vector Space Model and reformulate the modelling process followed in text mining applications. This enables us to measure similarity between malware samples, which is then used to automatically classify them into families. We also investigate the application of hierarchical clustering over the feature vectors obtained for each malware family. The resulting dendograms resemble the so-called phylogenetic trees for biological species, allowing us to conjecture about evolutionary relationships among families. Our experimental results suggest that the approach is remarkably accurate and deals efficiently with large databases of malware instances.", "title": "" }, { "docid": "c3ad915ac57bf56c4adc47acee816b54", "text": "How does the brain “produce” conscious subjective experience, an awareness of something? This question has been regarded as perhaps the most challenging one facing science. Penfield et al. [9] had produced maps of where responses to electrical stimulation of cerebral cortex could be obtained in human neurosurgical patients. Mapping of cerebral activations in various subjective paradigms has been greatly extended more recently by utilizing PET scan and fMRI techniques. 
But there were virtually no studies of what the appropriate neurons do in order to elicit a conscious experience. The opportunity for me to attempt such studies arose when my friend and neurosurgeon colleague, Bertram Feinstein, invited me to utilize the opportunity presented by access to stimulating and recording electrodes placed for therapeutic purposes intracranially in awake and responsive patients. With the availability of an excellent facility and team of co-workers, I decided to study neuronal activity requirements for eliciting a simple conscious somatosensory experience, and compare that to activity requirements for unconscious detection of sensory signals. We discovered that a surprising duration of appropriate neuronal activations, up to about 500 msec, was required in order to elicit a conscious sensory experience [5]. This was true not only when the initiating stimulus was in any of the cerebral somatosensory pathways; several lines of evidence indicated that even a single stimulus pulse to the skin required similar durations of activities at the cortical level. That discovery led to further studies of such a delay factor for awareness generally, and to profound inferences for the nature of conscious subjective experience. It formed the basis of that highlight in my work [1,3]. For example, a neuronal requirement of about 500 msec to produce awareness meant that we do not experience our sensory world immediately, in real time. But that would contradict our intuitive feeling of the experience in real time. We solved this paradox with a hypothesis for “backward referral” of subjective experience to the time of the first cortical response, the primary evoked potential. This was tested and confirmed experimentally [8], a thrilling result. We could now add subjective referral in time to the already known subjective referral in space. Subjective referrals have no known neural basis and appear to be purely mental phenomena! Another experimental study supported my “time-on” theory for eliciting conscious sensations as opposed to unconscious detection [7]. The time-factor appeared also in an endogenous experience, the conscious intention or will to produce a purely voluntary act [4,6]. In this, we found that cerebral activity initiates this volitional process at least 350 msec before the conscious wish (W) to act appears. However, W appears about 200 msec before the muscles are activated. That retained the possibility that the conscious will could control the outcome of the volitional process; it could veto it and block the performance of the act. These discoveries have profound implications for the nature of free will, for individual responsibility and guilt. Discovery of these time factors led to unexpected ways of viewing conscious experience and unconscious mental functions. Experience of the sensory world is delayed. It raised the possibility that all conscious mental functions are initiated unconsciously and become conscious only if neuronal activities persist for a sufficiently long time. Conscious experiences must be discontinuous if there is a delay for each; the “stream of consciousness” must be modified. Quick actions or responses, whether in reaction times, sports activities, etc., would all be initially unconscious. Unconscious mental operations, as in creative thinking, artistic impulses, production of speech, performing in music, etc., can all proceed rapidly, since only brief neural actions are sufficient. 
Rapid unconscious events would allow faster processing in thinking, etc. The delay for awareness provides a physiological opportunity for modulatory influences to affect the content of an experience that finally appears, as in Freudian repression of certain sensory images or thoughts [2,3]. The discovery of the neural time factor (except in conscious will) could not have been made without intracranial access to the neural pathways. They provided an experimentally based entry into how new hypotheses, of how the brain deals with conscious experience, could be directly tested. That was in contrast to the many philosophical approaches which were speculative and mostly untestable. Evidence based views could now be accepted with some confidence.", "title": "" }, { "docid": "6174220696199251e774489b6fc0001f", "text": "This paper introduces a collaborative learning game called Futura: The Sustainable Futures Game, which is implemented on a custom multi-touch digital tabletop platform. The goal of the game is to work with other players to support a growing population as time passes while minimizing negative impact on the environment. The design-oriented research goal of the project is to explore the novel design space of collaborative, multi-touch tabletop games for learning. Our focus is on identifying and understanding key design factors of importance in creating opportunities for learning. We use four theoretical perspectives as lenses through which we conceptualize our design intentions and inform our analysis. These perspectives are: experiential learning, constructivist learning, collaborative learning, and game theory. In this paper we discuss design features that enable collaborative learning, present the results from two observational studies, and compare our findings to other guidelines in order to contribute to the growing body of empirically derived design guidelines for tangible, embodied and embedded interaction.", "title": "" }, { "docid": "a059b4908b2ffde33fcedfad999e9f6e", "text": "The use of a hull-climbing robot is proposed to assist hull surveyors in their inspection tasks, reducing cost and risk to personnel. A novel multisegmented hull-climbing robot with magnetic wheels is introduced where multiple two-wheeled modular segments are adjoined by flexible linkages. Compared to traditional rigid-body tracked magnetic robots that tend to detach easily in the presence of surface discontinuities, the segmented design adapts to such discontinuities with improved adhesion to the ferrous surface. Coordinated mobility is achieved with the use of a motion-control algorithm that estimates robot pose through position sensors located in each segment and linkage in order to optimally command each of the drive motors of the system. Self-powered segments and an onboard radio allow for wireless transmission of video and control data between the robot and its operator control unit. The modular-design approach of the system is highly suited for upgrading or adding segments as needed. For example, enhancing the system with a segment that supports an ultrasonic measurement device used to measure hull-thickness of corroded sites can help minimize the number of areas that a surveyor must personally visit for further inspection and repair. Future development efforts may lead to the design of autonomy segments that accept high-level commands from the operator and automatically execute wide-area inspections. 
It is also foreseeable that with several multi-segmented robots, a coordinated inspection task can take place in parallel, significantly reducing inspection time and cost. *aaron.burmeister@navy.mil The focus of this paper is on the development efforts of the prototype system that has taken place since 2012. Specifically, the tradeoffs of the magnetic-wheel and linkage designs are discussed and the motion-control algorithm presented. Overall system-performance results obtained from various tests and demonstrations are also reported.", "title": "" }, { "docid": "43228a3436f23d786ad7faa7776f1e1b", "text": "Antineutrophil cytoplasmic antibody (ANCA)-associated vasculitides (AAV) include Wegener granulomatosis, microscopic polyangiitis, Churg–Strauss syndrome and renal-limited vasculitis. This Review highlights the progress that has been made in our understanding of AAV pathogenesis and discusses new developments in the treatment of these diseases. Evidence from clinical studies, and both in vitro and in vivo experiments, supports a pathogenic role for ANCAs in the development of AAV; evidence is stronger for myeloperoxidase-ANCAs than for proteinase-3-ANCAs. Neutrophils, complement and effector T cells are also involved in AAV pathogenesis. With respect to treatment of AAV, glucocorticoids, cyclophosphamide and other conventional therapies are commonly used to induce remission in generalized disease. Pulse intravenous cyclophosphamide is equivalent in efficacy to oral cyclophosphamide but seems to be associated with less adverse effects. Nevertheless, alternatives to cyclophosphamide therapy have been investigated, such as the use of methotrexate as a less-toxic alternative to cyclophosphamide to induce remission in non-organ-threatening or non-life-threatening AAV. Furthermore, rituximab is equally as effective as cyclophosphamide for induction of remission in AAV and might become the standard of therapy in the near future. Controlled trials in which specific immune effector cells and molecules are being therapeutically targeted have been initiated or are currently being planned.", "title": "" }, { "docid": "1db450f3e28907d6940c87d828fc1566", "text": "The task of colorizing black and white images has previously been explored for natural images. In this paper we look at the task of colorization on a different domain: webtoons. To our knowledge this type of dataset hasn't been used before. Webtoons are usually produced in color thus they make a good dataset for analyzing different colorization models. Comics like webtoons also present some additional challenges over natural images, such as occlusion by speech bubbles and text. First we look at some of the previously introduced models' performance on this task and suggest modifications to address their problems. We propose a new model composed of two networks; one network generates sparse color information and a second network uses this generated color information as input to apply color to the whole image. These two networks are trained end-to-end. Our proposed model solves some of the problems observed with other architectures, resulting in better colorizations.", "title": "" }, { "docid": "571e2d2fcb55f16513a425b874102f69", "text": "Distributed word representations have a rising interest in NLP community. Most of existing models assume only one vector for each individual word, which ignores polysemy and thus degrades their effectiveness for downstream tasks. 
To address this problem, some recent work adopts multiprototype models to learn multiple embeddings per word type. In this paper, we distinguish the different senses of each word by their latent topics. We present a general architecture to learn the word and topic embeddings efficiently, which is an extension to the Skip-Gram model and can model the interaction between words and topics simultaneously. The experiments on the word similarity and text classification tasks show our model outperforms state-of-the-art methods.", "title": "" }, { "docid": "625f1f11e627c570e26da9f41f89a28b", "text": "In this paper, we propose an approach to realize substrate integrated waveguide (SIW)-based leaky-wave antennas (LWAs) supporting continuous beam scanning from backward to forward above the cutoff frequency. First, through phase delay analysis, it was found that SIWs with straight transverse slots support backward and forward radiation of the -1-order mode with an open-stopband (OSB) in between. Subsequently, by introducing additional longitudinal slots as parallel components, the OSB can be suppressed, leading to continuous beam scanning at least from -40° through broadside to 35°. The proposed method only requires a planar structure and obtains less dispersive beam scanning compared with a composite right/left-handed (CRLH) LWA. Both simulations and measurements verify the intended beam scanning operation while verifying the underlying theory.", "title": "" }, { "docid": "5bf330cdbaf7df4f1f585c7510a34f1f", "text": "The availability of affordable and portable depth sensors has made scanning objects and people simpler than ever. However, dealing with occlusions and missing parts is still a significant challenge. The problem of reconstructing a (possibly non-rigidly moving) 3D object from a single or multiple partial scans has received increasing attention in recent years. In this work, we propose a novel learning-based method for the completion of partial shapes. Unlike the majority of existing approaches, our method focuses on objects that can undergo non-rigid deformations. The core of our method is a variational autoencoder with graph convolutional operations that learns a latent space for complete realistic shapes. At inference, we optimize to find the representation in this latent space that best fits the generated shape to the known partial input. The completed shape exhibits a realistic appearance on the unknown part. We show promising results towards the completion of synthetic and real scans of human body and face meshes exhibiting different styles of articulation and partiality.", "title": "" }, { "docid": "a74081f7108e62fadb48446255dd246b", "text": "Existing fuzzy neural networks (FNNs) are mostly developed under a shallow network configuration having lower generalization power than those of deep structures. This paper proposes a novel self-organizing deep fuzzy neural network, namely deep evolving fuzzy neural networks (DEVFNN). Fuzzy rules can be automatically extracted from data streams or removed if they play little role during their lifespan. The structure of the network can be deepened on demand by stacking additional layers using a drift detection method which not only detects the covariate drift, variations of input space, but also accurately identifies the real drift, dynamic changes of both feature space and target space. 
DEVFNN is developed under the stacked generalization principle via the feature augmentation concept where a recently developed algorithm, namely Generic Classifier (gClass), drives the hidden layer. It is equipped by an automatic feature selection method which controls activation and deactivation of input attributes to induce varying subsets of input features. A deep network simplification procedure is put forward using the concept of hidden layer merging to prevent uncontrollable growth of input space dimension due to the nature of feature augmentation approach in building a deep network structure. DEVFNN works in the sample-wise fashion and is compatible for data stream applications. The efficacy of DEVFNN has been thoroughly evaluated using six datasets with non-stationary properties under the prequential test-then-train protocol. It has been compared with four state-ofthe art data stream methods and its shallow counterpart where DEVFNN demonstrates improvement of classification accuracy. Moreover, it is also shown that the concept drift detection method is an effective tool to control the depth of network structure while the hidden layer merging scenario is capable of simplifying the network complexity of a deep network with negligible compromise of generalization performance.", "title": "" }, { "docid": "bd4fd4d383a691106aab5d775381c388", "text": "This paper describes a model-based pothole detection algorithm that exploits a multi-phase dynamic model. The responses of hitting potholes are empirically broken down into three phases governed by three simpler dynamic system sub-models. Each sub-model is based on a rigid-ring tire and quarter-car suspension model. The model is validated by comparing simulation results over various scenarios with FTire, a commercial simulation software for tire-road interaction. Based on the developed model, a pothole detection algorithm with Unscented Kalman Filter (UKF) and Bayesian estimation is developed and demonstrated.", "title": "" }, { "docid": "34e1566235f94a265564cbe5d0bf7cc1", "text": "Circuit techniques that overcome practical noise, reliability, and EMI limitations are reported. An auxiliary loop with ramping circuits suppresses pop-and-click noise to 1 mV for an amplifier with 4 V-achievable output voltage. Switching edge rate control enables the system to meet the EN55022 Class-B standard with a 15 dB margin. An enhanced scheme detects short-circuit conditions without relying on overlimit current events.", "title": "" }, { "docid": "ae18e923e22687f66303c7ff07689f38", "text": "Recognizing fine-grained sub-categories such as birds and dogs is extremely challenging due to the highly localized and subtle differences in some specific parts. Most previous works rely on object / part level annotations to build part-based representation, which is demanding in practical applications. This paper proposes an automatic fine-grained recognition approach which is free of any object / part annotation at both training and testing stages. Our method explores a unified framework based on two steps of deep filter response picking. The first picking step is to find distinctive filters which respond to specific patterns significantly and consistently, and learn a set of part detectors via iteratively alternating between new positive sample mining and part model retraining. The second picking step is to pool deep filter responses via spatially weighted combination of Fisher Vectors. 
We conditionally pick deep filter responses to encode them into the final representation, which considers the importance of filter responses themselves. Integrating all these techniques produces a much more powerful framework, and experiments conducted on CUB-200-2011 and Stanford Dogs demonstrate the superiority of our proposed algorithm over the existing methods.", "title": "" }, { "docid": "1590742097219610170bd62eb3799590", "text": "In this paper, we develop a vision-based system that employs a combined RGB and depth descriptor to classify hand gestures. The method is studied for a human-machine interface application in the car. Two interconnected modules are employed: one that detects a hand in the region of interaction and performs user classification, and another that performs gesture recognition. The feasibility of the system is demonstrated using a challenging RGBD hand gesture data set collected under settings of common illumination variation and occlusion.", "title": "" }, { "docid": "516bbc36588afeeba0c3045f38efadb0", "text": "full text) and the cognitively different indexer interpretations of the", "title": "" }, { "docid": "502a948fbf73036a4a1546cdd4a04833", "text": "The literature review is an established research genre in many academic disciplines, including the IS discipline. Although many scholars agree that systematic literature reviews should be rigorous, few instructional texts for compiling a solid literature review, at least with regard to the IS discipline, exist. In response to this shortage, in this tutorial, I provide practical guidance for both students and researchers in the IS community who want to methodologically conduct qualitative literature reviews. The tutorial differs from other instructional texts in two regards. First, in contrast to most textbooks, I cover not only searching and synthesizing the literature but also the challenging tasks of framing the literature review, interpreting research findings, and proposing research paths. Second, I draw on other texts that provide guidelines for writing literature reviews in the IS discipline but use many examples of published literature reviews. I use an integrated example of a literature review, which guides the reader through the overall process of compiling a literature review.", "title": "" } ]
scidocsrr
eff97c74bbdd35c6f0192c8966d71214
Predictive analytics on IoT
[ { "docid": "8d7e778331feccc94a730b6cf21a2063", "text": "Data mining is a process of inferring knowledge from such huge data. Data Mining has three major components Clustering or Classification, Association Rules and Sequence Analysis. By simple definition, in classification/clustering analyze a set of data and generate a set of grouping rules which can be used to classify future data. Data mining is the process is to extract information from a data set and transform it into an understandable structure. It is the computational process of discovering patterns in large data sets involving methods at the intersection of artificial intelligence, machine learning, statistics, and database systems. The actual data mining task is the automatic or semi-automatic analysis of large quantities of data to extract previously unknown interesting patterns. Data mining involves six common classes of tasks. Anomaly detection, Association rule learning, Clustering, Classification, Regression, Summarization. Classification is a major technique in data mining and widely used in various fields. Classification is a data mining (machine learning) technique used to predict group membership for data instances. In this paper, we present the basic classification techniques. Several major kinds of classification method including decision tree induction, Bayesian networks, k-nearest neighbor classifier, the goal of this study is to provide a comprehensive review of different classification techniques in data mining.", "title": "" }, { "docid": "d253029f47fe3afb6465a71e966fdbd5", "text": "With the development of the social economy, more and more appliances have been presented in a house. It comes out a problem that how to manage and control these increasing various appliances efficiently and conveniently so as to achieve more comfortable, security and healthy space at home. In this paper, a smart control system base on the technologies of internet of things has been proposed to solve the above problem. The smart home control system uses a smart central controller to set up a radio frequency 433 MHz wireless sensor and actuator network (WSAN). A series of control modules, such as switch modules, radio frequency control modules, have been developed in the WSAN to control directly all kinds of home appliances. Application servers, client computers, tablets or smart phones can communicate with the smart central controller through a wireless router via a Wi-Fi interface. Since it has WSAN as the lower control layer, a appliance can be added into or withdrawn from the control system very easily. The smart control system embraces the functions of appliance monitor, control and management, home security, energy statistics and analysis.", "title": "" } ]
[ { "docid": "4adb6497d5623ab584eb9d4b9aab41b5", "text": "Hidden Markov Models (HMMs) are applied to the problems of statistical modeling, database searching and multiple sequence alignment of protein families and protein domains. These methods are demonstrated on the globin family, the protein kinase catalytic domain, and the EF-hand calcium binding motif. In each case the parameters of an HMM are estimated from a training set of unaligned sequences. After the HMM is built, it is used to obtain a multiple alignment of all the training sequences. It is also used to search the SWISS-PROT 22 database for other sequences that are members of the given protein family, or contain the given domain. The HMM produces multiple alignments of good quality that agree closely with the alignments produced by programs that incorporate three-dimensional structural information. When employed in discrimination tests (by examining how closely the sequences in a database fit the globin, kinase and EF-hand HMMs), the HMM is able to distinguish members of these families from non-members with a high degree of accuracy. Both the HMM and PROFILESEARCH (a technique used to search for relationships between a protein sequence and multiply aligned sequences) perform better in these tests than PROSITE (a dictionary of sites and patterns in proteins). The HMM appears to have a slight advantage over PROFILESEARCH in terms of lower rates of false negatives and false positives, even though the HMM is trained using only unaligned sequences, whereas PROFILESEARCH requires aligned training sequences. Our results suggest the presence of an EF-hand calcium binding motif in a highly conserved and evolutionary preserved putative intracellular region of 155 residues in the alpha-1 subunit of L-type calcium channels which play an important role in excitation-contraction coupling. This region has been suggested to contain the functional domains that are typical or essential for all L-type calcium channels regardless of whether they couple to ryanodine receptors, conduct ions or both.", "title": "" }, { "docid": "31b5deab1e434962f0bf974834134d50", "text": "The aim of this paper is to layout deep investment techniques in financial markets using deep learning models. Financial prediction problems usually involve huge variety of data-sets with complex data interactions which makes it difficult to design an economic model. Applying deep learning models to such problems can exploit potentially non-linear patterns in data. In this paper author introduces deep learning hierarchical decision models for prediction analysis and better decision making for financial domain problem set such as pricing securities, risk factor analysis and portfolio selection. The Section 3 includes architecture as well as detail on training a financial domain deep learning neural network. It further lays out different models such asLSTM, auto-encoding, smart indexing, credit risk analysis model for solving the complex data interactions. The experiments along with their results show how these models can be useful in deep investments for financial domain problems.", "title": "" }, { "docid": "a545496b8cd0a8083830ece25d0f6634", "text": "Usually, for bin packing problems, we try to minimize the number of bins used or in the case of the dual bin packing problem, maximize the number or total size of accepted items. 
This paper presents results for the opposite problems, where we would like to maximize the number of bins used or minimize the number or total size of accepted items. We consider off-line and on-line variants of the problems. For the off-line variant, we require that there be an ordering of the bins, so that no item in a later bin fits in an earlier bin. We find the approximation ratios of two natural approximation algorithms, First-Fit-Increasing and First-Fit-Decreasing for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find the competitive ratio of various natural algorithms. We study the general versions of the problems as well as the parameterized versions where there is an upper bound of 1/k on the item sizes, for some integer k. ∗The work of Boyar, Favrholdt, Kohrt, and Larsen was supported in part by the Danish Natural Science Research Council (SNF). The work of Epstein was supported in part by the Israel Science Foundation (ISF). A preliminary version of this paper appeared in the proceedings of the Fifteenth International Symposium on Fundamentals of Computation Theory, 2005.", "title": "" }, { "docid": "585445a760077e18a3e35d6916265514", "text": "This paper offers a review of the literature on labour turnover in organizations. Initially, the importance of the subject area is established, as analyses of turnover are outlined and critiqued. This leads to a discussion of the various ways in which turnover and its consequences are measured. The potentially critical impact of turnover behaviour on organizational effectiveness is presented as justification for the need to model turnover, as a precursor to prediction and prevention. Key models from the literature of labour turnover are presented and critiqued.", "title": "" }, { "docid": "c70ff7ed949cd6d96c1bd62331649257", "text": "Bitcoin is a popular alternative to fiat money, widely used for its perceived anonymity properties. However, recent attacks on Bitcoin's peer-to-peer (P2P) network demonstrated that its gossip-based flooding protocols, which are used to ensure global network consistency, may enable user deanonymization— the linkage of a user's IP address with her pseudonym in the Bitcoin network. In 2015, the Bitcoin community responded to these attacks by changing the network's flooding mechanism to a different protocol, known as diffusion. However, no systematic justification was provided for the change, and it is unclear if diffusion actually improves the system's anonymity. In this paper, we model the Bitcoin networking stack and analyze its anonymity properties, both pre- and post-2015. In doing so, we consider new adversarial models and spreading mechanisms that have not been previously studied in the source-finding literature. We theoretically prove that Bitcoin's networking protocols (both pre- and post-2015) offer poor anonymity properties on networks with a regular-tree topology. We validate this claim in simulation on a 2015 snapshot of the real Bitcoin P2P network topology.", "title": "" }, { "docid": "de1d8d115d4f80f5976dbb52558b89fe", "text": "With the enormous growth in processor performance over the last decade, it is clear that reliability, rather than performance, is now the greatest challenge for computer systems research.
This is particularly true in the context of Internet services that require 24x7 operation and home computers with no professional administration. While operating system products have matured and become more reliable, they are still the source of a significant number of failures. Furthermore, recent studies show that device drivers are frequently responsible for operating system failures. For example, a study at Stanford University found that Linux drivers have 3 to 7 times the bug frequency as the rest of the OS [4]. An analysis of product support calls for Windows 2000 showed that device drivers accounted for 27% of crashes, compared to 2% for the kernel itself [16].", "title": "" }, { "docid": "241d7da91d5b48d415040b44b128ec33", "text": "Dieser Beitrag beschreibt eine neuartige Mobilfunktechnologie, mit der sich innovative und besonders latenzsensitive Dienste in Mobilfunknetzen realisieren lassen. Dieser Artikel geht auf die technischen Eigenschaften der sogenannten Mobile Edge Computing-Technologie ein und beschreibt deren Architektur und Integrationsmöglichkeiten. Ferner werden konkrete – sowohl angedachte als auch bereits realisierte – Beispiele und Szenarien vorgestellt, die durch den Einsatz der Mobile Edge Computing-Technologie ermöglicht werden.", "title": "" }, { "docid": "ebe4fa8652cc92d23edfd69106a72584", "text": "Cryosection brain images in Chinese Visible Human (CVH) dataset contain rich anatomical structure information of tissues because of its high resolution (e.g., 0.167 mm per pixel). Fast and accurate segmentation of these images into white matter, gray matter, and cerebrospinal fluid plays a critical role in analyzing and measuring the anatomical structures of human brain. However, most existing automated segmentation methods are designed for computed tomography or magnetic resonance imaging data, and they may not be applicable for cryosection images due to the imaging difference. In this paper, we propose a supervised learning-based CVH brain tissues segmentation method that uses stacked autoencoder (SAE) to automatically learn the deep feature representations. Specifically, our model includes two successive parts where two three-layer SAEs take image patches as input to learn the complex anatomical feature representation, and then these features are sent to Softmax classifier for inferring the labels. Experimental results validated the effectiveness of our method and showed that it outperformed four other classical brain tissue detection strategies. Furthermore, we reconstructed three-dimensional surfaces of these tissues, which show their potential in exploring the high-resolution anatomical structures of human brain.", "title": "" }, { "docid": "2e12a5f308472f3f4d19d4399dc85546", "text": "This paper presents a taxonomy of replay attacks on cryptographic protocols in terms of message origin and destination. The taxonomy is independent of any method used to analyze or prevent such attacks. It is also complete in the sense that any replay attack is composed entirely of elements classi ed by the taxonomy. The classi cation of attacks is illustrated using both new and previously known attacks on protocols. The taxonomy is also used to discuss the appropriateness of particular countermeasures and protocol analysis methods to particular kinds of replays.", "title": "" }, { "docid": "19937d689287ba81d2d01efd9ce8f2e4", "text": "We present a fast, fully parameterizable GPU implementation of Convolutional Neural Network variants. 
Our feature extractors are neither carefully designed nor pre-wired, but rather learned in a supervised way. Our deep hierarchical architectures achieve the best published results on benchmarks for object classification (NORB, CIFAR10) and handwritten digit recognition (MNIST), with error rates of 2.53%, 19.51%, 0.35%, respectively. Deep nets trained by simple back-propagation perform better than more shallow ones. Learning is surprisingly rapid. NORB is completely trained within five epochs. Test error rates on MNIST drop to 2.42%, 0.97% and 0.48% after 1, 3 and 17 epochs, respectively.", "title": "" }, { "docid": "d2e0309b503a23a9c0dd4360d0d26294", "text": "The rapid emergence of user-generated content (UGC) inspires knowledge sharing among Internet users. A good example is the well-known travel site TripAdvisor.com, which enables users to share their experiences and express their opinions on attractions, accommodations, restaurants, etc. The UGC about travel provide precious information to the users as well as staff in travel industry. In particular, how to identify reviews that are noteworthy for hotel management is critical to the success of hotels in the competitive travel industry. We have employed two hotel managers to conduct an examination on Taiwan’s hotel reviews in Tripadvisor.com and found that noteworthy reviews can be characterized by their content features, sentiments, and review qualities. Through the experiments using tripadvisor.com data, we find that all three types of features are important in identifying noteworthy hotel reviews. Specifically, content features are shown to have the most impact, followed by sentiments and review qualities. With respect to the various methods for representing content features, LDA method achieves comparable performance to TF-IDF method with higher recall and much fewer features.", "title": "" }, { "docid": "4ea07335d42a859768565c8d88cd5280", "text": "This paper brings together research from two different fields – user modelling and web ontologies – in attempt to demonstrate how recent semantic trends in web development can be combined with the modern technologies of user modelling. Over the last several years, a number of user-adaptive systems have been exploiting ontologies for the purposes of semantics representation, automatic knowledge acquisition, domain and user model visualisation and creation of interoperable and reusable architectural solutions. Before discussing these projects, we first overview the underlying user modelling and ontological technologies. As an example of the project employing ontology-based user modelling, we present an experiment design for translation of overlay student models for relative domains by means of ontology mapping.", "title": "" }, { "docid": "87fa8c6c894208e24328aa9dbb71a889", "text": "In this paper, the design and measurements of a 8-12GHz high-efficiency MMIC high power amplifier (HPA) implemented in a 0.25μm GaAS pHEMT process is described. The 3-stage amplifier has demonstrated from 37% to 54% power-added efficiency (PAE) with 12W of output power and up to 27dB of small signal gain range from 8-12GHz. In particular, over the frequency band of 9-11 GHz, the circuit achieved above 45% PAE. 
The key to this design is determining and matching the optimum source and load impedance for PAE at the first two harmonics in the output stage.", "title": "" }, { "docid": "921062a73e2b4a5ab1d994ac22b04918", "text": "This study describes a new corpus of over 60,000 hand-annotated metadiscourse acts from 106 OpenCourseWare lectures, from two different disciplines: Physics and Economics. Metadiscourse is a set of linguistic expressions that signal different functions in the discourse. This type of language is hypothesised to be helpful in finding a structure in unstructured text, such as lecture discourse. A brief summary is provided about the annotation scheme and labelling procedures, inter-annotator reliability statistics, overall distributional statistics, a description of auxiliary data that will be distributed with the corpus, and information relating to how to obtain the data. The results provide a deeper understanding of lecture structure and confirm the reliable coding of metadiscursive acts in academic lectures across different disciplines. The next stage of our research will be to build a classification model to automate the tagging process, instead of manual annotation, which takes time and effort. This is in addition to the use of these tags as indicators of the higher level structure of lecture discourse.", "title": "" }, { "docid": "be997314f9fdee31d3dd02f6a0b0bb5b", "text": "Time-series-based anomaly detection is a quite important field that has been researched over the years. Many techniques have been developed and applied successfully for certain application domains. However, there are still some challenges, such as continuous learning, tolerance to noise and generalization. This paper applies Hierarchical Temporal Memory, a novel biological neural network, to time-series-based anomaly detection. HTM is able to learn the changing pattern of the data and incorporate contextual information from the past to make accurate predictions. We have evaluated HTM on real and artificial datasets. The experiment results show that HTM can successfully discover anomalies in time-series data.", "title": "" }, { "docid": "e16d89d3a6b3d38b5823fae977087156", "text": "The payoff of a barrier option depends on whether or not a specified asset price, index, or rate reaches a specified level during the life of the option. Most models for pricing barrier options assume continuous monitoring of the barrier; under this assumption, the option can often be priced in closed form. Many (if not most) real contracts with barrier provisions specify discrete monitoring instants; there are essentially no formulas for pricing these options, and even numerical pricing is difficult. We show, however, that discrete barrier options can be priced with remarkable accuracy using continuous barrier formulas by applying a simple continuity correction to the barrier. The correction shifts the barrier away from the underlying by a factor of exp(βσ√Δt), where β ≈ 0.5826, σ is the underlying volatility, and Δt is the time between monitoring instants. The correction is justified both theoretically and experimentally.", "title": "" }, { "docid": "e2f2961ab8c527914c3d23f8aa03e4bf", "text": "Pedestrian detection based on the combination of convolutional neural network (CNN) and traditional handcrafted features (i.e., HOG+LUV) has achieved great success. In general, HOG+LUV are used to generate the candidate proposals and then CNN classifies these proposals. Despite its success, there is still room for improvement.
For example, CNN classifies these proposals by the fully connected layer features, while proposal scores and the features in the inner-layers of CNN are ignored. In this paper, we propose a unifying framework called multi-layer channel features (MCF) to overcome the drawback. It first integrates HOG+LUV with each layer of CNN into multi-layer image channels. Based on the multi-layer image channels, a multi-stage cascade AdaBoost is then learned. The weak classifiers in each stage of the multi-stage cascade are learned from the image channels of the corresponding layer. Experiments on the Caltech data set, INRIA data set, ETH data set, TUD-Brussels data set, and KITTI data set are conducted. With more abundant features, an MCF achieves the state of the art on the Caltech pedestrian data set (i.e., 10.40% miss rate). Using new and accurate annotations, an MCF achieves a 7.98% miss rate. As many non-pedestrian detection windows can be quickly rejected by the first few stages, it accelerates detection speed by 1.43 times. By eliminating the highly overlapped detection windows with lower scores after the first stage, it is 4.07 times faster, with negligible performance loss.", "title": "" }, { "docid": "52a362eab96e1a5a848e9fd2d7cb7960", "text": "Multiple time-stepping (MTS) algorithms allow efficient integration of large systems of ordinary differential equations, where a few stiff terms restrict the timestep of an otherwise non-stiff system. In this work, we discuss a flexible class of MTS techniques, based on multistep methods. Our approach contains several popular methods as special cases and it allows for the easy construction of novel and efficient higher-order MTS schemes. In addition, we demonstrate how to adapt the stability contour of the non-stiff time-integration to the physical system at hand. This allows significantly larger timesteps when compared to previously known multistep MTS approaches. As an example, we derive novel predictor-corrector (PCMTS) schemes specifically optimized for the time-integration of damped wave equations on locally refined meshes. In a set of numerical experiments, we demonstrate the performance of our scheme on discontinuous Galerkin time-domain (DGTD) simulations of Maxwell's equations.", "title": "" }, { "docid": "5b9c12c1d65ab52d1a7bb6575c6c0bb1", "text": "The purpose of image enhancement is to process an acquired image for better contrast and visibility of features of interest for visual examination as well as subsequent computer-aided analysis and diagnosis. Therefore, we have proposed an algorithm for medical image enhancement. In the study, we used top-hat transform, contrast limited histogram equalization and anisotropic diffusion filter methods. The system results are quite satisfactory for many different medical images such as lung, breast, brain and knee.", "title": "" }, { "docid": "69f6b21da3fa48f485fc612d385e7869", "text": "Recurrent neural networks (RNN) have been successfully applied for recognition of cursive handwritten documents, both in English and Arabic scripts. The ability of RNNs to model context in sequence data like speech and text makes them a suitable candidate to develop OCR systems for printed Nabataean scripts (including Nastaleeq, for which no OCR system is available to date). In this work, we have presented the results of applying RNN to printed Urdu text in Nastaleeq script.
Bidirectional Long Short Term Memory (BLSTM) architecture with Connectionist Temporal Classification (CTC) output layer was employed to recognize printed Urdu text. We evaluated BLSTM networks for two cases: one ignoring the character's shape variations and the second is considering them. The recognition error rate at character level for first case is 5.15% and for the second is 13.6%. These results were obtained on synthetically generated UPTI dataset containing artificially degraded images to reflect some real-world scanning artifacts along with clean images. Comparison with shape-matching based method is also presented.", "title": "" } ]
scidocsrr
33114dd19e2f03b637f44f2b95c9908f
Game AI as Storytelling
[ { "docid": "57d5b69473898b0ae31fcb2f7b0660af", "text": "This paper describes an approach for managing the interaction of human users with computer-controlled agents in an interactive narrative-oriented virtual environment. In these kinds of systems, the freedom of the user to perform whatever action she desires must be balanced with the preservation of the storyline used to control the system's characters. We describe a technique, narrative mediation, that exploits a plan-based model of narrative structure to manage and respond to users' actions inside a virtual world. We define two general classes of response to situations where users execute actions that interfere with story structure: accommodation and intervention. Finally, we specify an architecture that uses these definitions to monitor and automatically characterize user actions, and to compute and implement responses to unanticipated activity. The approach effectively integrates user action and system response into the unfolding narrative, providing for the balance between a user's sense of control within the story world and the user's sense of coherence of the overall narrative.", "title": "" } ]
[ { "docid": "97dfc67c63e7e162dd06d5cb2959912a", "text": "To examine the pattern of injuries in cases of fatal shark attack in South Australian waters, the authors examined the files of their institution for all cases of shark attack in which full autopsies had been performed over the past 25 years, from 1974 to 1998. Of the seven deaths attributed to shark attack during this period, full autopsies were performed in only two cases. In the remaining five cases, bodies either had not been found or were incomplete. Case 1 was a 27-year-old male surfer who had been attacked by a shark. At autopsy, the main areas of injury involved the right thigh, which displayed characteristic teeth marks, extensive soft tissue damage, and incision of the femoral artery. There were also incised wounds of the right wrist. Bony injury was minimal, and no shark teeth were recovered. Case 2 was a 26-year-old male diver who had been attacked by a shark. At autopsy, the main areas of injury involved the left thigh and lower leg, which displayed characteristic teeth marks, extensive soft tissue damage, and incised wounds of the femoral artery and vein. There was also soft tissue trauma to the left wrist, with transection of the radial artery and vein. Bony injury was minimal, and no shark teeth were recovered. In both cases, death resulted from exsanguination following a similar pattern of soft tissue and vascular damage to a leg and arm. This type of injury is in keeping with predator attack from underneath or behind, with the most severe injuries involving one leg. Less severe injuries to the arms may have occurred during the ensuing struggle. Reconstruction of the damaged limb in case 2 by sewing together skin, soft tissue, and muscle bundles not only revealed that no soft tissue was missing but also gave a clearer picture of the pattern of teeth marks, direction of the attack, and species of predator.", "title": "" }, { "docid": "b2bf48c6c443f8fb39f79d2c9c0714f3", "text": "We review drug addiction from the perspective of the hypothesis that drugs of abuse interact with distinct brain memory systems. We focus on emotional and procedural forms of memory, encompassing Pavlovian and instrumental conditioning, both for action-outcome and for stimulus-response associations. Neural structures encompassed by these systems include the amygdala, hippocampus, nucleus accumbens, and dorsal striatum. Additional influences emanate from the anterior cingulate and prefrontal cortex, which are implicated in the encoding and retrieval of drug-related memories that lead to drug craving and drug use. Finally, we consider the ancillary point that chronic abuse of many drugs may impact directly on neural memory systems via neuroadaptive and neurotoxic effects that lead to cognitive impairments in which memory dysfunction is prominent.", "title": "" }, { "docid": "2c63c39cf0e21119ecd6a471c9764fa2", "text": "CODE4 is a general-purpose knowledge management system, intended to assist with the common knowledge processing needs of anyone who desires to analyse, store, or retrieve conceptual knowledge in applications as varied as the specification, design and user documentation of computer systems; the construction of term banks, or the development of ontologies for natural language understanding. This paper provides an overview of CODE4 as follows: We first describe the general philosophy and rationale of CODE4 and relate it to other systems. 
Next, we discuss the knowledge representation, specifically designed to meet the needs of flexible, interactive knowledge management. The highly-developed user interface, which we believe to be critical for this type of system, is explained in some detail. We finally describe how CODE4 is being used in a number of applications.", "title": "" }, { "docid": "1a66727305984ae359648e4bd3e75ba2", "text": "Self-organizing models constitute valuable tools for data visualization, clustering, and data mining. Here, we focus on extensions of basic vector-based models by recursive computation in such a way that sequential and tree-structured data can be processed directly. The aim of this article is to give a unified review of important models recently proposed in literature, to investigate fundamental mathematical properties of these models, and to compare the approaches by experiments. We first review several models proposed in literature from a unifying perspective, thereby making use of an underlying general framework which also includes supervised recurrent and recursive models as special cases. We shortly discuss how the models can be related to different neuron lattices. Then, we investigate theoretical properties of the models in detail: we explicitly formalize how structures are internally stored in different context models and which similarity measures are induced by the recursive mapping onto the structures. We assess the representational capabilities of the models, and we shortly discuss the issues of topology preservation and noise tolerance. The models are compared in an experiment with time series data. Finally, we add an experiment for one context model for tree-structured data to demonstrate the capability to process complex structures.", "title": "" }, { "docid": "9c7ea7ec8da891ccfd6ea2f3a08dc9db", "text": "In the past five years there has been tremendous activity in role-based access control (RBAC) models. Consensus has been achieved on a standard core RBAC model that is in process of publication by the US National Institute of Standards and Technology (NIST). An early insight was that RBAC cannot be encompassed by a single model since RBAC concepts range from very simple to very sophisticated. Hence a family of models is more appropriate than a single model. The NIST model reflects this approach. In fact RBAC is an open-ended concept which can be extended in many different directions as new applications and systems arise. The consensus embodied in the NIST model is a substantial achievement. All the same it just a starting point. There are important aspects of RBAC models, such as administration of RBAC, on which consensus remains to be reached. Recent RBAC models have studied newer concepts such as delegation and personalization, which are not captured in the NIST model. Applications of RBAC in workflow management systems have been investigated by several researchers. Research on RBAC systems that cross organizational boundaries has also been initiated. Thus RBAC models remain a fertile area for future research. 
In this paper we discuss some of the directions which we feel are likely to result in practically useful enhancements to the current state of art in RBAC models.", "title": "" }, { "docid": "b2c60198f29f734e000dd67cb6bdd08a", "text": "OBJECTIVE\nTo assess adolescents' perceptions about factors influencing their food choices and eating behaviors.\n\n\nDESIGN\nData were collected in focus-group discussions.\n\n\nSUBJECTS/SETTING\nThe study population included 141 adolescents in 7th and 10th grade from 2 urban schools in St Paul, Minn, who participated in 21 focus groups.\n\n\nANALYSIS\nData were analyzed using qualitative research methodology, specifically, the constant comparative method.\n\n\nRESULTS\nFactors perceived as influencing food choices included hunger and food cravings, appeal of food, time considerations of adolescents and parents, convenience of food, food availability, parental influence on eating behaviors (including the culture or religion of the family), benefits of foods (including health), situation-specific factors, mood, body image, habit, cost, media, and vegetarian beliefs. Major barriers to eating more fruits, vegetables, and dairy products and eating fewer high-fat foods included a lack of sense of urgency about personal health in relation to other concerns, and taste preferences for other foods. Suggestions for helping adolescents eat a more healthful diet include making healthful food taste and look better, limiting the availability of unhealthful options, making healthful food more available and convenient, teaching children good eating habits at an early age, and changing social norms to make it \"cool\" to eat healthfully.\n\n\nAPPLICATIONS/CONCLUSIONS\nThe findings suggest that if programs to improve adolescent nutrition are to be effective, they need to address a broad range of factors, in particular environmental factors (e.g., the increased availability and promotion of appealing, convenient foods within homes schools, and restaurants).", "title": "" }, { "docid": "6e4dcb451292cc38cb72300a24135c1b", "text": "This survey gives state-of-the-art of genetic algorithm (GA) based clustering techniques. Clustering is a fundamental and widely applied method in understanding and exploring a data set. Interest in clustering has increased recently due to the emergence of several new areas of applications including data mining, bioinformatics, web use data analysis, image analysis etc. To enhance the performance of clustering algorithms, Genetic Algorithms (GAs) is applied to the clustering algorithm. GAs are the best-known evolutionary techniques. The capability of GAs is applied to evolve the proper number of clusters and to provide appropriate clustering. This paper present some existing GA based clustering algorithms and their application to different problems and domains.", "title": "" }, { "docid": "80baa392afe96b7a83db79f5dc928c1a", "text": "Standard PWM current source inverters (CSIs) usually operate at fixed modulation index. The proposed modified current source inverter (MCSI) can operate with most pulse width modulation (PWM) techniques and with a variable mod­ ulation index, since the dc link inductor current freewheels on itself and not through the CSI. The use of variable modulation index control results in faster response times with no penalty on input power factor as compared to other variable modulation index schemes. This paper confirms this by investigating the input characteristics of the MCSI as seen from the ac mains. 
The quality of the input ac line currents is examined, and a design procedure for the input filters is given. Power factor and efficiency are discussed. Results are compared to those of other current source inverter topologies. Experimental results obtained from a 5 kVA converter confirm the theoretical considerations.", "title": "" }, { "docid": "60e56a59ecbdee87005407ed6a117240", "text": "The visionary Steve Jobs said, “A lot of times, people don't know what they want until you show it to them.” A powerful recommender system not only shows people similar items, but also helps them discover what they might like, and items that complement what they already purchased. In this paper, we attempt to instill a sense of “intention” and “style” into our recommender system, i.e., we aim to recommend items that are visually complementary with those already consumed. By identifying items that are visually coherent with a query item/image, our method facilitates exploration of the long-tail items, whose existence users may even be unaware of. This task was formulated only recently by Julian et al. [1], with the input being millions of item pairs that are frequently viewed/bought together, entailing noisy style coherence. In the same work, the authors proposed a Mahalanobis-based transform to discriminate whether a given pair shares the same style or not. Despite its success, we experimentally found that it is only able to recommend items on the margin of different clusters, which leads to limited coverage of the items to be recommended. Another limitation is that it totally ignores the existence of taxonomy information that is ubiquitous in many datasets, like the Amazon data the authors experimented with. In this report, we propose two novel methods that make use of the hierarchical category metadata to overcome the limitations identified above. The main contributions are listed as follows.", "title": "" }, { "docid": "f9571dc9a91dd8c2c6495814c44c88c0", "text": "Automatic number plate recognition is the task of extracting vehicle registration plates and labeling them with their underlying identity numbers. It uses optical character recognition on images to read symbols present on the number plates. Generally, a number plate recognition system includes plate localization, segmentation, character extraction and labeling. This research paper describes a machine learning based automated Nepali number plate recognition model. Various image processing algorithms are implemented to detect the number plate and to extract individual characters from it. The recognition system then uses Support Vector Machine (SVM) based learning and prediction on calculated Histograms of Oriented Gradients (HOG) features from each character. The system is evaluated on a self-created Nepali number plate dataset. Evaluation accuracy on the number plate character dataset is obtained as: 6.79% average system error rate, 87.59% average precision, 98.66% average recall and 92.79% average f-score. The accuracy of the complete number plate labeling experiment is obtained as 75.0%. The accuracy of automatic number plate recognition is greatly influenced by the segmentation accuracy of the individual characters along with the size, resolution, pose, and illumination of the given image.
Keywords—Nepali License Plate Recognition, Number Plate Detection, Feature Extraction, Histograms of Oriented Gradients, Optical Character Recognition, Support Vector Machines, Computer Vision, Machine Learning", "title": "" }, { "docid": "4b9df4116960cd3e3300d87e4f97e1e9", "text": "Large data collections required for the training of neural networks often contain sensitive information such as the medical histories of patients, and the privacy of the training data must be preserved. In this paper, we introduce a dropout technique that provides an elegant Bayesian interpretation to dropout, and show that the intrinsic noise added, with the primary goal of regularization, can be exploited to obtain a degree of differential privacy. The iterative nature of training neural networks presents a challenge for privacy-preserving estimation since multiple iterations increase the amount of noise added. We overcome this by using a relaxed notion of differential privacy, called concentrated differential privacy, which provides tighter estimates on the overall privacy loss. We demonstrate the accuracy of our privacy-preserving dropout algorithm on benchmark datasets.", "title": "" }, { "docid": "a398f3f5b670a9d2c9ae8ad84a4a3cb8", "text": "This project deals with online simultaneous localization and mapping (SLAM) problem without taking any assistance from Global Positioning System (GPS) and Inertial Measurement Unit (IMU). The main aim of this project is to perform online odometry and mapping in real time using a 2-axis lidar mounted on a robot. This involves use of two algorithms, the first of which runs at a higher frequency and uses the collected data to estimate velocity of the lidar which is fed to the second algorithm, a scan registration and mapping algorithm, to perform accurate matching of point cloud data.", "title": "" }, { "docid": "4fa9db557f53fa3099862af87337cfa9", "text": "With the rapid development of E-commerce, recent years have witnessed the booming of online advertising industry, which raises extensive concerns of both academic and business circles. Among all the issues, the task of Click-through rates (CTR) prediction plays a central role, as it may influence the ranking and pricing of online ads. To deal with this task, the Factorization Machines (FM) model is designed for better revealing proper combinations of basic features. However, the sparsity of ads transaction data, i.e., a large proportion of zero elements, may severely disturb the performance of FM models. To address this problem, in this paper, we propose a novel Sparse Factorization Machines (SFM) model, in which the Laplace distribution is introduced instead of traditional Gaussian distribution to model the parameters, as Laplace distribution could better fit the sparse data with higher ratio of zero elements. Along this line, it will be beneficial to select the most important features or conjunctions with the proposed SFM model. Furthermore, we develop a distributed implementation of our SFM model on Spark platform to support the prediction task on mass dataset in practice. 
Comprehensive experiments on two large-scale real-world datasets clearly validate both the effectiveness and efficiency of our SFM model compared with several state-of-the-art baselines, which also proves our assumption that the Laplace distribution could be more suitable to describe the online ads transaction data.", "title": "" }, { "docid": "3823975ea2bcda029c3c3cda2b0472be", "text": "Hand motion capture with an RGB-D sensor has recently gained a lot of research attention; however, even the most recent approaches focus on the case of a single isolated hand. We focus instead on hands that interact with other hands or with a rigid or articulated object. Our framework successfully captures motion in such scenarios by combining a generative model with discriminatively trained salient points, collision detection and physics simulation to achieve a low tracking error with physically plausible poses. All components are unified in a single objective function that can be optimized with standard optimization techniques. We initially assume a priori knowledge of the object's shape and skeleton. In the case of unknown object shape, there are existing 3d reconstruction methods that capitalize on distinctive geometric or texture features. These methods, though, fail for textureless and highly symmetric objects like household articles, mechanical parts or toys. We show that extracting 3d hand motion for in-hand scanning effectively facilitates the reconstruction of such objects and we fuse the rich additional information of hands into a 3d reconstruction pipeline. Finally, although shape reconstruction is enough for rigid objects, there is a lack of tools that build rigged models of articulated objects that deform realistically using RGB-D data. We propose a method that creates a fully rigged model consisting of a watertight mesh, embedded skeleton and skinning weights by employing a combination of deformable mesh tracking, motion segmentation based on spectral clustering and skeletonization based on mean curvature flow.", "title": "" }, { "docid": "8f0ac7417daf0c995263274738dcbb13", "text": "Technology platform strategies offer a novel way to orchestrate a rich portfolio of contributions made by the many independent actors who form an ecosystem of heterogeneous complementors around a stable platform core. This form of organising has been successfully used in the smartphone, gaming, commercial software, and other industrial sectors. While technology ecosystems require stability and homogeneity to leverage common investments in standard components, they also need variability and heterogeneity to meet evolving market demand. Although the required balance between stability and evolvability in the ecosystem has been addressed conceptually in the literature, we have less understanding of its underlying mechanics or appropriate governance.
Through an extensive case study of a business software ecosystem consisting of a major multinational manufacturer of enterprise resource planning (ERP) software at the core, and a heterogeneous system of independent implementation partners and solution developers on the periphery, our research identifies three salient tensions that characterize the ecosystem: standard-variety; control-autonomy; and collective-individual. We then highlight the specific ecosystem governance mechanisms designed to simultaneously manage desirable and undesirable variance across each tension. Paradoxical tensions may manifest as dualisms, where actors are faced with contradictory and disabling „either/or‟ decisions. Alternatively, they may manifest as dualities, where tensions are framed as complementary and mutually-enabling. We identify conditions where latent, mutually enabling tensions become manifest as salient, disabling tensions. By identifying conditions in which complementary logics are overshadowed by contradictory logics, our study further contributes to the understanding of the dynamics of technology ecosystems, as well as the effective design of technology ecosystem governance that can explicitly embrace paradoxical tensions towards generative outcomes.", "title": "" }, { "docid": "af4d583cf45d13c09e59a927905a7794", "text": "Background and aims: Addiction to internet and mobile phone may be affecting all aspect of student’s life. Knowledge about prevalence and related factors of internet and mobile phone addiction is necessary for planning for prevention and treatment. This study was conducted to evaluate the prevalence of internet and mobile phone addiction among Iranian students. Methods: This cross sectional study conducted from Jun to April 2015 in Rasht Iran. With using stratified sampling method, 581 high school students from two region of Rasht in North of Iran were recruited as the subjects for this study. Data were collected with using demographics questionnaire, Cell phone Overuse Scale (COS), and the Internet Addiction Test (IAT). Analysis was performed using Statistical Package for Social Sciences (SPSS) 17 21 version. Results: Of the 581 students, who participate in present study, 53.5% were female and the rest were male. The mean age of students was 16.28±1.01 years. The mean score of IAT was 42.03±18.22. Of the 581 students, 312 (53.7%), 218 (37.5%) and 51 (8.8%) showed normal, mild and moderate level of internet addiction. The mean score of COS was 55.10±19.86.Of the 581 students, 27(6/4%), 451(6/77) and 103 (7/17) showed low, moderate and high level of mobile phone addiction. Conclusion: according to finding of present study, rate of mobile phone and internet addiction were high among Iranian students. Health care authorities should pay more attention to these problems.", "title": "" }, { "docid": "732da8eb4c41d6bf70ded5866fadd334", "text": "Ferroelectric field effect transistors (FeFETs) based on ferroelectric hafnium oxide (HfO2) thin films show high potential for future embedded nonvolatile memory applications. However, HfO2 films besides their recently discovered ferroelectric behavior are also prone to undesired charge trapping effects. Therefore, the scope of this paper is to verify the possibility of the charge trapping during standard operation of the HfO2-based FeFET memories. The kinetics of the charge trapping and its interplay with the ferroelectric polarization switching are analyzed in detail using the single-pulse ID-VG technique. 
Furthermore, the impact of charge trapping on important memory characteristics such as retention and endurance is investigated.", "title": "" } ]
scidocsrr
d40928bca658edcf58f6c830d6b07728
PatchMatch Based Joint View Selection and Depthmap Estimation
[ { "docid": "dd1b20766f2b8099b914c780fb8cc03c", "text": "Many computer vision algorithms limit their performance by ignoring the underlying 3D geometric structure in the image. We show that we can estimate the coarse geometric properties of a scene by learning appearance-based models of geometric classes, even in cluttered natural scenes. Geometric classes describe the 3D orientation of an image region with respect to the camera. We provide a multiple-hypothesis framework for robustly estimating scene structure from a single image and obtaining confidences for each geometric label. These confidences can then be used to improve the performance of many other applications. We provide a thorough quantitative evaluation of our algorithm on a set of outdoor images and demonstrate its usefulness in two applications: object detection and automatic single-view reconstruction.", "title": "" }, { "docid": "e4c5df9b038e69c5ddd919e4284c07b0", "text": "Many computer vision tasks can be formulated as labeling problems. The desired solution is often a spatially smooth labeling where label transitions are aligned with color edges of the input image. We show that such solutions can be efficiently achieved by smoothing the label costs with a very fast edge-preserving filter. In this paper, we propose a generic and simple framework comprising three steps: 1) constructing a cost volume, 2) fast cost volume filtering, and 3) Winner-Takes-All label selection. Our main contribution is to show that with such a simple framework state-of-the-art results can be achieved for several computer vision applications. In particular, we achieve 1) disparity maps in real time whose quality exceeds those of all other fast (local) approaches on the Middlebury stereo benchmark, and 2) optical flow fields which contain very fine structures as well as large displacements. To demonstrate robustness, the few parameters of our framework are set to nearly identical values for both applications. Also, competitive results for interactive image segmentation are presented. With this work, we hope to inspire other researchers to leverage this framework to other application areas.", "title": "" }, { "docid": "25786c5516b559fc4a566e72485fdcc6", "text": "We propose an algorithm to improve the quality of depth-maps used for Multi-View Stereo (MVS). Many existing MVS techniques make use of a two stage approach which estimates depth-maps from neighbouring images and then merges them to extract a final surface. Often the depth-maps used for the merging stage will contain outliers due to errors in the matching process. Traditional systems exploit redundancy in the image sequence (the surface is seen in many views), in order to make the final surface estimate robust to these outliers. In the case of sparse data sets there is often insufficient redundancy and thus performance degrades as the number of images decreases. In order to improve performance in these circumstances it is necessary to remove the outliers from the depth-maps. We identify the two main sources of outliers in a top performing algorithm: (1) spurious matches due to repeated texture and (2) matching failure due to occlusion, distortion and lack of texture. We propose two contributions to tackle these failure modes. Firstly, we store multiple depth hypotheses and use a spatial consistently constraint to extract the true depth. Secondly, we allow the algorithm to return an unknown state when the a true depth estimate cannot be found. 
By combining these in a discrete-label MRF optimisation we are able to obtain high-accuracy depth-maps with few outliers. We evaluate our algorithm in a multi-view stereo framework and find that it delivers state-of-the-art performance, competitive with the leading techniques, in particular on the standard evaluation sparse data sets.", "title": "" } ]
[ { "docid": "d407b75f7ee6c3f0d504bddf39c2648e", "text": "This article presents a recent and inclusive review of the use of token economies in various environments (schools, home, etc.). Digital and manual searches were carried using the following databases: Google Scholar, Psych Info (EBSCO), and The Web of Knowledge. The search terms included: token economy, token systems, token reinforcement, behavior modification, classroom management, operant conditioning, animal behavior, token literature reviews, and token economy concerns. The criteria for inclusion were studies that implemented token economies in settings where academics were assessed. Token economies have been extensively implemented and evaluated in the past. Few articles in the peerreviewed literature were found being published recently. While token economy reviews have occurred historically (Kazdin, 1972, 1977, 1982), there has been no recent overview of the research. During the previous several years, token economies in relation to certain disorders have been analyzed and reviewed; however, a recent review of token economies as a field of study has not been carried out. The purpose of this literature review was to produce a recent review and evaluation on the research of token economies across settings.", "title": "" }, { "docid": "4c3e4da0a2423a184911dfed7f4e7234", "text": "Pseudo-relevance feedback (PRF) has been proven to be an effective query expansion strategy to improve retrieval performance. Several PRF methods have so far been proposed for many retrieval models. Recent theoretical studies of PRF methods show that most of the PRF methods do not satisfy all necessary constraints. Among all, the log-logistic model has been shown to be an effective method that satisfies most of the PRF constraints. In this paper, we first introduce two new PRF constraints. We further analyze the log-logistic feedback model and show that it does not satisfy these two constraints as well as the previously proposed \"relevance effect\" constraint. We then modify the log-logistic formulation to satisfy all these constraints. Experiments on three TREC newswire and web collections demonstrate that the proposed modification significantly outperforms the original log-logistic model, in all collections.", "title": "" }, { "docid": "8905bd760b0c72fbfe4fbabd778ff408", "text": "Boredom and low levels of task engagement while driving can pose road safety risks, e.g., inattention during low traffic, routine trips, or semi-automated driving. Digital technology interventions that increase task engagement, e.g., through performance feedback, increased challenge, and incentives (often referred to as ‘gamification’), could therefore offer safety benefits. To explore the impact of such interventions, we conducted experiments in a highfidelity driving simulator with thirty-two participants. In two counterbalanced conditions (control and intervention), we compared driving behaviour, physiological arousal, and subjective experience. Results indicate that the gamified boredom intervention reduced unsafe coping mechanisms such as speeding while promoting anticipatory driving. We can further infer that the intervention not only increased one’s attention and arousal during the intermittent gamification challenges, but that these intermittent stimuli may also help sustain one’s attention and arousal in between challenges and throughout a drive. At the same time, the gamified condition led to slower hazard reactions and short off-road glances. 
Our contributions deepen our understanding of driver boredom and pave the way for engaging interventions for safety critical tasks.", "title": "" }, { "docid": "4acf71599c803b4d98fc1b64ccd3ec90", "text": "The rise of augmented reality (AR) technology presents e-retailers with new opportunities. According to previous research, it is a technology that can positively affect engagement, brand recall and purchase confidence. Mobile-enabled augmented reality differs from regular mobile phone use as the technology virtually overlays images or information to the real environment. As the use of a touch screen device (i.e. smartphone vs. laptop) has previously been found to positively affect feelings of perceived ownership, the current study examines whether the possibility to virtually manipulate a product on a mobile AR application would have an even stronger effect. This is examined for products with either material properties (i.e. products that require the examination of sensory information) or geometric properties (i.e. products that can be examined via written and/or visual information). The findings reveal that AR does indeed result in higher levels of perceived ownership, particularly in case of material products.", "title": "" }, { "docid": "c3920da94fae0820c8ef6fb204a6c1d4", "text": "Many commercial video players rely on bitrate adaptation logic to adapt the bitrate in response to changing network conditions. Past measurement studies have identified issues with today's commercial players with respect to three key metrics---efficiency, fairness, and stability---when multiple bitrate-adaptive players share a bottleneck link. Unfortunately, our current understanding of why these effects occur and how they can be mitigated is quite limited.\n In this paper, we present a principled understanding of bitrate adaptation and analyze several commercial players through the lens of an abstract player model. Through this framework, we identify the root causes of several undesirable interactions that arise as a consequence of overlaying the video bitrate adaptation over HTTP. Building on these insights, we develop a suite of techniques that can systematically guide the tradeoffs between stability, fairness and efficiency and thus lead to a general framework for robust video adaptation. We pick one concrete instance from this design space and show that it significantly outperforms today's commercial players on all three key metrics across a range of experimental scenarios.", "title": "" }, { "docid": "d076cb1cf48cf0a9e7eb5fee749ed10e", "text": "Cats have protractible claws to fold their tips to keep them sharp. They protract claws while hunting and pawing on slippery surfaces. Protracted claws by tendons and muscles of toes can help cats anchoring themselves steady while their locomotion trends to slip and releasing the hold while they retract claws intentionally. This research proposes a kind of modularized self-adaptive toe mechanism inspired by cat claws to improve the extremities' contact performance for legged robot. The mechanism is constructed with four-bar linkage actuated by contact reaction force and retracted by applied spring tension. A feasible mechanical design based on several essential parameters is introduced and an integrated Sole-Toe prototype is built for experimental evaluation. 
Mechanical self-adaption and actual contact performance on specific surface have been evaluated respectively on a biped walking platform and a bench-top mechanical testing.", "title": "" }, { "docid": "f7e19e14c90490e1323e47860d21ec4d", "text": "There is great potential for genome sequencing to enhance patient care through improved diagnostic sensitivity and more precise therapeutic targeting. To maximize this potential, genomics strategies that have been developed for genetic discovery — including DNA-sequencing technologies and analysis algorithms — need to be adapted to fit clinical needs. This will require the optimization of alignment algorithms, attention to quality-coverage metrics, tailored solutions for paralogous or low-complexity areas of the genome, and the adoption of consensus standards for variant calling and interpretation. Global sharing of this more accurate genotypic and phenotypic data will accelerate the determination of causality for novel genes or variants. Thus, a deeper understanding of disease will be realized that will allow its targeting with much greater therapeutic precision.", "title": "" }, { "docid": "6fd1d745512130fa62672f5a1ad5e1c2", "text": "Bitcoin, the first peer-to-peer electronic cash system, opened the door to permissionless, private, and trustless transactions. Attempts to repurpose Bitcoin’s underlying blockchain technology have run up against fundamental limitations to privacy, faithful execution, and transaction finality. We introduce Strong Federations: publicly verifiable, Byzantinerobust transaction networks that facilitate movement of any asset between disparate markets, without requiring third-party trust. Strong Federations enable commercial privacy, with support for transactions where asset types and amounts are opaque, while remaining publicly verifiable. As in Bitcoin, execution fidelity is cryptographically enforced; however, Strong Federations significantly lower capital requirements for market participants by reducing transaction latency and improving interoperability. To show how this innovative solution can be applied today, we describe Liquid: the first implementation of Strong Federations deployed in a Financial Market.", "title": "" }, { "docid": "32b4b275dc355dff2e3e168fe6355772", "text": "The management of coupon promotions is an important issue for marketing managers since it still is the major promotion medium. However, the distribution of coupons does not go without problems. Although manufacturers and retailers are investing heavily in the attempt to convince as many customers as possible, overall coupon redemption rate is low. This study improves the strategy of retailers and manufacturers concerning their target selection since both parties often end up in a battle for customers. Two separate models are built: one model makes predictions concerning redemption behavior of coupons that are distributed by the retailer while another model does the same for coupons handed out by manufacturers. By means of the feature-selection technique ‘Relief-F’ the dimensionality of the models is reduced, since it searches for the variables that are relevant for predicting the outcome. In this way, redundant variables are not used in the model-building process. The model is evaluated on real-life data provided by a retailer in FMCG. The contributions of this study for retailers as well as manufacturers are threefold. First, the possibility to classify customers concerning their coupon usage is shown. 
In addition, it is demonstrated that retailers and manufacturers can stay clear of each other in their marketing campaigns. Finally, the feature-selection technique ‘Relief-F’ proves to facilitate and optimize the performance of the models.", "title": "" }, { "docid": "86d196a616e4ae0d28fb6d7099508c49", "text": "As applications are becoming increasingly dynamic, the notion that a schema can be created in advance for an application and remain relatively stable is becoming increasingly unrealistic. This has pushed application developers away from traditional relational database systems and away from the SQL interface, despite their many well-established benefits. Instead, developers often prefer self-describing data models such as JSON, and NoSQL systems designed specifically for their relaxed semantics.\n In this paper, we discuss the design of a system that enables developers to continue to represent their data using self-describing formats without moving away from SQL and traditional relational database systems. Our system stores arbitrary documents of key-value pairs inside physical and virtual columns of a traditional relational database system, and adds a layer above the database system that automatically provides a dynamic relational view to the user against which fully standard SQL queries can be issued. We demonstrate that our design can achieve an order of magnitude improvement in performance over alternative solutions, including existing relational database JSON extensions, MongoDB, and shredding systems that store flattened key-value data inside a relational database.", "title": "" }, { "docid": "22f49f2d6e3021516d93d9a96c408dbb", "text": "This paper presents Flower menu, a new type of Marking menu that does not only support straight, but also curved gestures for any of the 8 usual orientations. Flower menus make it possible to put many commands at each menu level and thus to create as large a hierarchy as needed for common applications. Indeed our informal analysis of menu breadth in popular applications shows that a quarter of them have more than 16 items. Flower menus can easily contain 20 items and even more (theoretical maximum of 56 items). Flower menus also support within groups as well as hierarchical groups. They can thus favor breadth organization (within groups) or depth organization (hierarchical groups): as a result, the designers can lay out items in a very flexible way in order to reveal meaningful item groupings. We also investigate the learning performance of the expert mode of Flower menus. A user experiment is presented that compares linear menus (baseline condition), Flower menus and Polygon menus, a variant of Marking menus that supports a breadth of 16 items. Our experiment shows that Flower menus are more efficient than both Polygon and Linear menus for memorizing command activation in expert mode.", "title": "" }, { "docid": "74421de5dedd1f06e94e3ad215a49043", "text": "Input is a significant problem for wearable systems, particularly for head mounted virtual and augmented reality displays. Existing input techniques either lack expressive power or may not be socially acceptable. As an alternative, thumb-to-finger touches present a promising input mechanism that is subtle yet capable of complex interactions. We present DigiTouch, a reconfigurable glove-based input device that enables thumb-to-finger touch interaction by sensing continuous touch position and pressure. 
Our novel sensing technique improves the reliability of continuous touch tracking and estimating pressure on resistive fabric interfaces. We demonstrate DigiTouch’s utility by enabling a set of easily reachable and reconfigurable widgets such as buttons and sliders. Since DigiTouch senses continuous touch position, widget layouts can be customized according to user preferences and application needs. As an example of a real-world application of this reconfigurable input device, we examine a split-QWERTY keyboard layout mapped to the user’s fingers. We evaluate DigiTouch for text entry using a multi-session study. With our continuous sensing method, users reliably learned to type and achieved a mean typing speed of 16.0 words per minute at the end of ten 20-minute sessions, an improvement over similar wearable touch systems.", "title": "" }, { "docid": "c446ce16a62f832a167101293fe8b58d", "text": "Unforeseen events such as node failures and resource contention can have a severe impact on the performance of data processing frameworks, such as Hadoop, especially in cloud environments where such incidents are common. SLA compliance in the presence of such events requires the ability to quickly and dynamically resize infrastructure resources. Unfortunately, the distributed and stateful nature of data processing frameworks makes it challenging to accurately scale the system at run-time. In this paper, we present the design and implementation of a model-driven autoscaling solution for Hadoop clusters. We first develop novel gray-box performance models for Hadoop workloads that specifically relate job execution times to resource allocation and workload parameters. We then employ these models to dynamically determine the resources required to successfully complete the Hadoop jobs as per the user-specified SLA under various scenarios including node failures and multi-job executions. Our experimental results on three different Hadoop cloud clusters and across different workloads demonstrate the efficacy of our models and highlight their autoscaling capabilities.", "title": "" }, { "docid": "5594fc8fec483698265abfe41b3776c9", "text": "This paper is an abridgement and update of numerous IEEE papers dealing with Squirrel Cage Induction Motor failure analysis. They are the result of a taxonomic study and research conducted by the author during a 40 year career in the motor industry. As the Petrochemical Industry is revolving to reliability based maintenance, increased attention should be given to preventing repeated failures. The Root Cause Failure methodology presented in this paper will assist in this transition. The scope of the product includes Squirrel Cage Induction Motors up to 3000 hp, however, much of this methodology has application to larger sizes and types.", "title": "" }, { "docid": "3f60b8ddecab25537c24fd972b4958ff", "text": "Biological sequence comparison is a key step in inferring the relatedness of various organisms and the functional similarity of their components. Thanks to the Next Generation Sequencing efforts, an abundance of sequence data is now available to be processed for a range of bioinformatics applications. Embedding a biological sequence – over a nucleotide or amino acid alphabet – in a lower dimensional vector space makes the data more amenable for use by current machine learning tools, provided the quality of embedding is high and it captures the most meaningful information of the original sequences. 
Motivated by recent advances in the text document embedding literature, we present a new method, called seq2vec, to represent a complete biological sequence in an Euclidean space. The new representation has the potential to capture the contextual information of the original sequence necessary for sequence comparison tasks. We test our embeddings with protein sequence classification and retrieval tasks and demonstrate encouraging outcomes.", "title": "" }, { "docid": "96abf2baa684ec3e214d4eacd8ca9c23", "text": "Face attribute estimation has many potential applications in video surveillance, face retrieval, and social media. While a number of methods have been proposed for face attribute estimation, most of them did not explicitly consider the attribute correlation and heterogeneity (e.g., ordinal versus nominal and holistic versus local) during feature representation learning. In this paper, we present a Deep Multi-Task Learning (DMTL) approach to jointly estimate multiple heterogeneous attributes from a single face image. In DMTL, we tackle attribute correlation and heterogeneity with convolutional neural networks (CNNs) consisting of shared feature learning for all the attributes, and category-specific feature learning for heterogeneous attributes. We also introduce an unconstrained face database (LFW+), an extension of public-domain LFW, with heterogeneous demographic attributes (age, gender, and race) obtained via crowdsourcing. Experimental results on benchmarks with multiple face attributes (MORPH II, LFW+, CelebA, LFWA, and FotW) show that the proposed approach has superior performance compared to state of the art. Finally, evaluations on a public-domain face database (LAP) with a single attribute show that the proposed approach has excellent generalization ability.", "title": "" }, { "docid": "d10ec03d91d58dd678c995ec1877c710", "text": "Major depressive disorders, long considered to be of neurochemical origin, have recently been associated with impairments in signaling pathways that regulate neuroplasticity and cell survival. Agents designed to directly target molecules in these pathways may hold promise as new therapeutics for depression.", "title": "" }, { "docid": "beedcc735e6e0c2e58ede1dc042e9979", "text": "Paleolimnological studies which included analyses of diatoms, fossil pigments and physico-chemical characteristics of bottom sediments have been used to describe the limnological history of Racze Lake. The influx of terrigenous material into the lake have been determined on the basis of stratigraphy of elements associated with mineral content. The successively eroded soils as well as process of chemical erosion caused increase leaching of metals Mg, Fe, Al into the lake basin. However the concentration of these metals finally deposited in bottom sediments was also effected by the oxygen regime at the sediment-water interface. Both ratios, chlorophyll derivatives to total carotenoids (CD:TC) and Fe:Mn indicated hypolimnetic oxygen depletion in the middle part of the profile. The development of blue-green algal population, estimated by the ratio epiphasic to hypophasic carotenoids (EC:HC) was correlated with periods of redox conditions in the lake. The pH changes ranged from 6.5 to 7.7. 
The most important factors affecting pH changes were inflow of mineral matter from the watershed and structural changes in the littoral biocenosis.", "title": "" }, { "docid": "9409922d01a00695745939b47e6446a0", "text": "The Suricata intrusion-detection system for computer-network monitoring has been advanced as an open-source improvement on the popular Snort system that has been available for over a decade. Suricata includes multi-threading to improve processing speed beyond Snort. Previous work comparing the two products has not used a real-world setting. We did this and evaluated the speed, memory requirements, and accuracy of the detection engines in three kinds of experiments: (1) on the full traffic of our school as observed on its \"backbone\" in real time, (2) on a supercomputer with packets recorded from the backbone, and (3) in response to malicious packets sent by a red-teaming product. We used the same set of rules for both products with a few small exceptions where capabilities were missing. We conclude that Suricata can handle larger volumes of traffic than Snort with similar accuracy, and that its performance scaled roughly linearly with the number of processors up to 48. We observed no significant speed or accuracy advantage of Suricata over Snort in its current state, but it is still being developed. Our methodology should be useful for comparing other intrusion-detection products.", "title": "" }, { "docid": "01d93a621bb6d52ca37650d4a79c43f3", "text": "Recommender systems are a classical example of machine learning applications; however, they have not yet been used extensively in health informatics and medical scenarios. We argue that this is due to the specifics of benchmarking criteria in medical scenarios and the multitude of drastically differing end-user groups and the enormous context complexity of the medical domain. Here both risk perceptions towards data security and privacy as well as trust in safe technical systems play a central and specific role, particularly in the clinical context. These aspects dominate acceptance of such systems. By using a Doctor-in-the-Loop approach, some of these difficulties could be mitigated by combining human expertise with computer efficiency. We provide a three-part research framework to access health recommender systems, suggesting to incorporate domain understanding, evaluation and specific methodology into the development process.", "title": "" } ]
scidocsrr
28002333d094b7608eb087d4ad48feee
A clustering fuzzy approach for image segmentation
[ { "docid": "2c8e7bfcd41924d0fe8f66166d366751", "text": "Many image segmentation techniques are available in the literature. Some of these techniques use only the gray level histogram, some use spatial details while others use fuzzy set theoretic approaches. Most of these techniques are not suitable for noisy environments. Some works have been done using the Markov Random Field (MRF) model, which is robust to noise but is computationally involved. Neural network architectures, which help to get the output in real time because of their parallel processing ability, have also been used for segmentation, and they work fine even when the noise level is very high. The literature on color image segmentation is not as rich as it is for gray tone images. This paper critically reviews and summarizes some of these techniques. Attempts have been made to cover both fuzzy and non-fuzzy techniques including color image segmentation and neural network based approaches. Adequate attention is paid to segmentation of range images and magnetic resonance images. It also addresses the issue of quantitative evaluation of segmentation results. Keywords: Image segmentation, Fuzzy sets, Markov Random Field, Thresholding, Edge detection, Clustering, Relaxation", "title": "" } ]
[ { "docid": "c8379f1382a191985cf55773d0cd02c9", "text": "Utilizing Big Data scenarios that are generated from increasing digitization and data availability is a core topic in IS research. There are prospective advantages in generating business value from those scenarios through improved decision support and new business models. In order to harvest those potential advantages Big Data capabilities are required, including not only technological aspects of data management and analysis but also strategic and organisational aspects. To assess these capabilities, one can use capability assessment models. Employing a qualitative meta-analysis on existing capability assessment models, it can be revealed that the existing approaches greatly differ in their fundamental structure due to heterogeneous model elements. The heterogeneous elements are therefore synthesized and transformed into consistent assessment dimensions to fulfil the requirements of exhaustive and mutually exclusive aspects of a capability assessment model. As part of a broader research project to develop a consistent and harmonized Big Data Capability Assessment Model (BDCAM) a new design for a capability matrix is proposed including not only capability dimensions but also Big Data life cycle tasks in order to measure specific weaknesses along the process of data-driven value creation.", "title": "" }, { "docid": "cc8159c1bf2494d0c0df88343d7366b1", "text": "Sharp electrically conductive structures integrated into micro-transfer-print compatible components provide an approach to forming electrically interconnected systems during the assembly procedure. Silicon micromachining techniques are used to fabricate print-compatible components with integrated, electrically conductive, pressure-concentrating structures. The geometry of the structures allow them to penetrate a polymer receiving layer during the elastomer stamp printing operation, and reflow of the polymer following the transfer completes the electrical interconnection when capillary action forces the gold-coated pressure-concentrator into a metal landing site. Experimental results and finite element simulations support a discussion of the mechanics of the interconnection.", "title": "" }, { "docid": "67989a9fe9d56e27eb42ca867a919a7d", "text": "Data remanence is the residual physical representation of data that has been erased or overwritten. In non-volatile programmable devices, such as UV EPROM, EEPROM or Flash, bits are stored as charge in the floating gate of a transistor. After each erase operation, some of this charge remains. Security protection in microcontrollers and smartcards with EEPROM/Flash memories is based on the assumption that information from the memory disappears completely after erasing. While microcontroller manufacturers successfully hardened already their designs against a range of attacks, they still have a common problem with data remanence in floating-gate transistors. Even after an erase operation, the transistor does not return fully to its initial state, thereby allowing the attacker to distinguish between previously programmed and not programmed transistors, and thus restore information from erased memory. The research in this direction is summarised here and it is shown how much information can be extracted from some microcontrollers after their memory has been ‘erased’.", "title": "" }, { "docid": "062fe0d16a46b261848b9bfe8a47bd2f", "text": "RSA is a very popular public key cryptosystem. 
This algorithm is known to be secure, but this fact relies on the difficulty of factoring large numbers. Because of the popularity of the algorithm, much research has gone into this problem of factoring a large number. The size of the number that we are able to factor increases exponentially year by year. This fact is partly due to advancements in computing hardware, but it is largely due to advancements in factoring algorithms. The General Number Field Sieve is an example of just such an advanced factoring algorithm. This is currently the best known method for factoring large numbers. This paper is a presentation of the General Number Field Sieve. It begins with a discussion of the algorithm in general and covers the theory that is responsible for its success. Because often the best way to learn an algorithm is by applying it, an extensive numerical example is included as well. I. INTRODUCTION The General Number Field Sieve is an algorithm for factoring very large numbers. Factoring is very important in the field of cryptography, specifically in the RSA cryptosystem. The Rivest, Shamir, Adleman (RSA) cryptosystem is a scheme for encrypting and decrypting messages, and its security relies on the fact that factoring large composite numbers is a very hard, computationally intensive task. The RSA algorithm works in the following way: • Choose two large primes p and q. Set n = pq. • Choose a random e satisfying 1 ≤ e < n. • Set d = e^(-1) (mod (p − 1)(q − 1)). • A message m is encrypted to c ≡ m^e (mod n). Note that only e and n were needed to compute c; e and n are known as the public key and are public information.", "title": "" }, { "docid": "c9df206d8c0bc671f3109c1c7b12b149", "text": "The Internet of Things (IoT) is a unified network of physical objects that can change their own parameters or those of the environment, gather information and transmit it to other devices. It is emerging as the third wave in the development of the internet. This technology will give immediate access to information about the physical world and the objects in it, leading to innovative services and increases in efficiency and productivity. The IoT is enabled by the latest developments in smart sensors, communication technologies, and Internet protocols. This article contains a description of Internet of Things (IoT) networks. Much attention is given to prospects for the future use of IoT and its development. Some problems of IoT development are noted. The article also gives valuable information on building (constructing) IoT systems based on PLC technology.", "title": "" }, { "docid": "2af781cb0bf034acbd5e6a563349b7b7", "text": "This paper presents a new information visualization framework that supports the analytical reasoning process. It consists of three views - a data view, a knowledge view and a navigation view. The data view offers interactive information visualization tools. The knowledge view enables the analyst to record analysis artifacts such as findings, hypotheses and so on. The navigation view provides an overview of the exploration process by capturing the visualization states automatically. An analysis artifact recorded in the knowledge view can be linked to a visualization state in the navigation view. The analyst can revisit a visualization state from both the navigation and knowledge views to review the analysis and reuse it to look for alternate views. The whole analysis process can be saved along with the synthesized information. 
We present a user study and discuss the perceived usefulness of a prototype based on this framework that we have developed.", "title": "" }, { "docid": "16dc05092756ca157476b6aeb7705915", "text": "Model checkers and other nite-state veriication tools allow developers to detect certain kinds of errors automatically. Nevertheless, the transition of this technology from research to practice has been slow. While there are a number of potential causes for reluctance to adopt such formal methods, we believe that a primary cause is that practitioners are unfamiliar with specii-cation processes, notations, and strategies. In a recent paper, we proposed a pattern-based approach to the presentation, codiication and reuse of property specii-cations for nite-state veriication. Since then, we have carried out a survey of available speciications, collecting over 500 examples of property speciications. We found that most are instances of our proposed patterns. Furthermore, we have updated our pattern system to accommodate new patterns and variations of existing patterns encountered in this survey. This paper reports the results of the survey and the current status of our pattern system.", "title": "" }, { "docid": "73d08848f16d5c881cf3224fee561eb6", "text": "This paper presents the design of the KIT Dual Arm System, which consists of two high-performance, humanoid robot arms. Based on human arm kinematics, each arm has 8 degrees of freedom (DOF) including a clavicle joint of the inner shoulder. In comparison to classical 7 DOF robot arms, the incorporation of the clavicle joint results in a larger workspace and an increased dexterity in bimanual tasks. The arm structure is based on an exoskeleton design approach: Highly modular and highly integrated sensor-actuator-control units in each joint are linked by a hollow structure, which allows a stiff construction at low weight. Combined with its length of 1 m and a maximum payload of 11 kg at stretched configuration, the performance of the KIT Arm is comparable to state-of-the art industrial robot arms. Thereby, it combines the strengths of humanoid and industrial robot arms.", "title": "" }, { "docid": "dd0a7e506c11eef00f7bbd2f6c4c18aa", "text": "Word sense induction (WSI) seeks to automatically discover the senses of a word in a corpus via unsupervised methods. We propose a sense-topic model for WSI, which treats sense and topic as two separate latent variables to be inferred jointly. Topics are informed by the entire document, while senses are informed by the local context surrounding the ambiguous word. We also discuss unsupervised ways of enriching the original corpus in order to improve model performance, including using neural word embeddings and external corpora to expand the context of each data instance. We demonstrate significant improvements over the previous state-of-the-art, achieving the best results reported to date on the SemEval-2013 WSI task.", "title": "" }, { "docid": "0f4750f3998766e8f2a506a2d432f3bf", "text": "Presently sustainability of fashion in the worldwide is the major considerable issue. The much talked concern is for the favor of fashion’s sustainability around the world. Many organizations and fashion conscious personalities have come forward to uphold the further extension of the campaign of good environment for tomorrows. On the other hand, fashion for the morality or ethical issues is one of the key concepts for the humanity and sustainability point of view. 
Main objectives of this study to justify the sustainability concern of fashion companies and their policy. In this paper concerned brands are focused on the basis of their present activities related fashion from the manufacturing to the marketing process. Most of the cases celebrities are in the forwarded stages for the upheld of the fashion sustainability. For the conservation of the environment, sustainability of the fashion is the utmost need in the present fastest growing world. Nowadays, fashion is considered the vital issue for the ecological aspect with morality concern. The research is based on the rigorously study with the reading materials. The data have been gathered from various sources, mainly academic literature, research article, conference article, PhD thesis, under graduate & post graduate dissertation and a qualitative research method approach has been adopted for this research. For the convenience of the reader and future researchers, Analysis and Findings have done in the same time.", "title": "" }, { "docid": "3b8f2694d8b6f7177efa8716d72b9129", "text": "Behara, B and Jacobson, BH. Acute effects of deep tissue foam rolling and dynamic stretching on muscular strength, power, and flexibility in Division I linemen. J Strength Cond Res 31(4): 888-892, 2017-A recent strategy to increase sports performance is a self-massage technique called myofascial release using foam rollers. Myofascial restrictions are believed to be brought on by injuries, muscle imbalances, overrecruitment, and/or inflammation, all of which can decrease sports performance. The purpose of this study was to compare the acute effects of a single-bout of lower extremity self-myofascial release using a custom deep tissue roller (DTR) and a dynamic stretch protocol. Subjects consisted of NCAA Division 1 offensive linemen (n = 14) at a Midwestern university. All players were briefed on the objectives of the study and subsequently signed an approved IRB consent document. A randomized crossover design was used to assess each dependent variable (vertical jump [VJ] power and velocity, knee isometric torque, and hip range of motion was assessed before and after: [a] no treatment, [b] deep tissue foam rolling, and [c] dynamic stretching). Results of repeated-measures analysis of variance yielded no pretest to posttest significant differences (p > 0.05) among the groups for VJ peak power (p = 0.45), VJ average power (p = 0.16), VJ peak velocity (p = 0.25), VJ average velocity (p = 0.23), peak knee extension torque (p = 0.63), average knee extension torque (p = 0.11), peak knee flexion torque (p = 0.63), or average knee flexion torque (p = 0.22). However, hip flexibility was statistically significant when tested after both dynamic stretching and foam rolling (p = 0.0001). Although no changes in strength or power was evident, increased flexibility after DTR may be used interchangeably with traditional stretching exercises.", "title": "" }, { "docid": "5f8d0469c05308d8e6c56fcf9e3b804c", "text": "A sensorless control method for a high-speed brushless DC motor based on the line-to-line back electromotive force (back EMF) is proposed in this paper. In order to obtain the commutation signals, the line-to-line voltages are obtained by the low-pass filters. However, due to the low-pass filters, wide speed range, and other factors, the actual commutation signals are significantly delayed by more than 90 electrical degrees which limits the acceleration of the motor. 
A novel sensorless commutation algorithm based on the hysteresis transition between “90-α” and “150-α” is introduced to handle the severe commutation retarding and guarantee the motor works in a large speed range. In order to compensate the remaining existing commutation errors, a novel closed-loop compensation algorithm based on the integration of the virtual neutral voltage is proposed. The integration difference between the adjacent 60 electrical degrees interval before and after the commutation point is utilized as the feedback of the PI regulator to compensate the errors automatically. Several experiment results confirm the feasibility and effectiveness of the proposed method.", "title": "" }, { "docid": "7eed84f959268599e1b724b0752f6aa5", "text": "Using the information systems lifecycle as a unifying framework, we review online communities research and propose a sequence for incorporating success conditions during initiation and development to increase their chances of becoming a successful community, one in which members participate actively and develop lasting relationships. Online communities evolve following distinctive lifecycle stages and recommendations for success are more or less relevant depending on the developmental stage of the online community. In addition, the goal of the online community under study determines the components to include in the development of a successful online community. Online community builders and researchers will benefit from this review of the conditions that help online communities succeed.", "title": "" }, { "docid": "17813a603f0c56c95c96f5b2e0229026", "text": "Geographic ranges are estimated for brachiopod and bivalve species during the late Middle (mid-Givetian) to the middle Late (terminal Frasnian) Devonian to investigate range changes during the time leading up to and including the Late Devonian biodiversity crisis. Species ranges were predicted using GARP (Genetic Algorithm using Rule-set Prediction), a modeling program developed to predict fundamental niches of modern species. This method was applied to fossil species to examine changing ranges during a critical period of Earth’s history. Comparisons of GARP species distribution predictions with historical understanding of species occurrences indicate that GARP models predict accurately the presence of common species in some depositional settings. In addition, comparison of GARP distribution predictions with species-range reconstructions from geographic information systems (GIS) analysis suggests that GARP modeling has the potential to predict species ranges more completely and tailor ranges more specifically to environmental parameters than GIS methods alone. Thus, GARP modeling is a potentially useful tool for predicting fossil species ranges and can be used to address a wide array of palaeontological problems. The use of GARP models allows a statistical examination of the relationship of geographic range size with species survival during the Late Devonian. Large geographic range was statistically associated with species survivorship across the crisis interval for species examined in the linguiformis Zone but not for species modeled in the preceding Lower varcus or punctata zones. 
The enhanced survival benefit of having a large geographic range, therefore, appears to be restricted to the biodiversity crisis interval.", "title": "" }, { "docid": "54af3c39dba9aafd5b638d284fd04345", "text": "In this paper, Principal Component Analysis (PCA), Most Discriminant Features (MDF), and Regularized-Direct Linear Discriminant Analysis (RD-LDA) - based feature extraction approaches are tested and compared in an experimental personal recognition system. The system is multimodal and bases on features extracted from nine regions of an image of the palmar surface of the hand. For testing purposes 10 gray-scale images of right hand of 184 people were acquired. The experiments have shown that the best results are obtained with the RD-LDA - based features extraction approach (100% correctness for 920 identification tests and EER = 0.01% for 64170 verification tests).", "title": "" }, { "docid": "90d1d78d3d624d3cb1ecc07e8acaefd4", "text": "Wheat straw is an abundant agricultural residue with low commercial value. An attractive alternative is utilization of wheat straw for bioethanol production. However, production costs based on the current technology are still too high, preventing commercialization of the process. In recent years, progress has been made in developing more effective pretreatment and hydrolysis processes leading to higher yield of sugars. The focus of this paper is to review the most recent advances in pretreatment, hydrolysis and fermentation of wheat straw. Based on the type of pretreatment method applied, a sugar yield of 74-99.6% of maximum theoretical was achieved after enzymatic hydrolysis of wheat straw. Various bacteria, yeasts and fungi have been investigated with the ethanol yield ranging from 65% to 99% of theoretical value. So far, the best results with respect to ethanol yield, final ethanol concentration and productivity were obtained with the native non-adapted Saccharomyses cerevisiae. Some recombinant bacteria and yeasts have shown promising results and are being considered for commercial scale-up. Wheat straw biorefinery could be the near-term solution for clean, efficient and economically-feasible production of bioethanol as well as high value-added products.", "title": "" }, { "docid": "caf0b3a9385dffe3663c4847c1637cec", "text": "In this paper we present a novel method for plausible real-time rendering of indirect illumination effects for diffuse and non-diffuse surfaces. The scene geometry causing indirect illumination is captured by an extended shadow map, as proposed in previous work, and secondary light sources are distributed on directly lit surfaces. One novelty is the rendering of these secondary lights' contribution by splatting in a deferred shading process, which decouples rendering time from scene complexity. An importance sampling strategy, implemented entirely on the GPU, allows efficient selection of secondary light sources. Adapting the light's splat shape to surface glossiness also allows efficient rendering of caustics. Unlike previous approaches the approximated indirect lighting does barely exhibit coarse artifacts - even under unfavorable viewing and lighting conditions. 
We describe an implementation on contemporary graphics hardware, show a comparison to previous approaches, and present adaptation to and results in game-typical applications.", "title": "" }, { "docid": "9718921e6546abd13e8f08698ba10423", "text": "LawStats provides quantitative insights into court decisions from the Bundesgerichtshof – Federal Court of Justice (BGH), the Federal Court of Justice in Germany. Using Watson Web Services and approaches from Sentiment Analysis (SA), we can automatically classify the revision outcome and offer statistics on judges, senates, previous instances etc. via faceted search. These statistics are accessible through a open web interface to aid law professionals. With a clear focus on interpretability, users can not only explore statistics, but can also understand, which sentences in the decision are responsible for the machine’s decision; links to the original texts provide more context. This is the first largescale application of Machine Learning (ML) based Natural Language Processing (NLP) for German in the analysis of ordinary court decisions in Germany that we are aware of. We have analyzed over 50,000 court decisions and extracted the outcomes and relevant entities. The modular architecture of the application allows continuous improvements of the ML model as more annotations become available over time. The tool can provide a critical foundation for further quantitative research in the legal domain and can be used as a proof-of-concept for similar efforts.", "title": "" }, { "docid": "f16fd498b692875c3bd95460feaf06ec", "text": "Raman and Fourier Transform Infrared (FT-IR) spectroscopy was used for assessment of structural differences of celluloses of various origins. Investigated celluloses were: bacterial celluloses cultured in presence of pectin and/or xyloglucan, as well as commercial celluloses and cellulose extracted from apple parenchyma. FT-IR spectra were used to estimate of the I(β) content, whereas Raman spectra were used to evaluate the degree of crystallinity of the cellulose. The crystallinity index (X(C)(RAMAN)%) varied from -25% for apple cellulose to 53% for microcrystalline commercial cellulose. Considering bacterial cellulose, addition of xyloglucan has an impact on the percentage content of cellulose I(β). However, addition of only xyloglucan or only pectins to pure bacterial cellulose both resulted in a slight decrease of crystallinity. However, culturing bacterial cellulose in the presence of mixtures of xyloglucan and pectins results in an increase of crystallinity. The results confirmed that the higher degree of crystallinity, the broader the peak around 913 cm(-1). Among all bacterial celluloses the bacterial cellulose cultured in presence of xyloglucan and pectin (BCPX) has the most similar structure to those observed in natural primary cell walls.", "title": "" }, { "docid": "fd8b0bcd163823194746426916e0e17b", "text": "Deep neural networks (DNNs) trained on large-scale datasets have recently achieved impressive improvements in face recognition. But a persistent challenge remains to develop methods capable of handling large pose variations that are relatively under-represented in training data. This paper presents a method for learning a feature representation that is invariant to pose, without requiring extensive pose coverage in training data. 
We first propose to generate non-frontal views from a single frontal face, in order to increase the diversity of training data while preserving accurate facial details that are critical for identity discrimination. Our next contribution is to seek a rich embedding that encodes identity features, as well as non-identity ones such as pose and landmark locations. Finally, we propose a new feature reconstruction metric learning to explicitly disentangle identity and pose, by demanding alignment between the feature reconstructions through various combinations of identity and pose features, which is obtained from two images of the same subject. Experiments on both controlled and in-the-wild face datasets, such as MultiPIE, 300WLP and the profile view database CFP, show that our method consistently outperforms the state-of-the-art, especially on images with large head pose variations.", "title": "" } ]
scidocsrr
32cdcaca5ac68713df12a99b8817e28e
Deep roto-translation scattering for object classification
[ { "docid": "7dc652c9b86f63c0a6b546396980783b", "text": "An affine invariant representation is constructed with a cascade of invariants, which preserves information for classification. A joint translation and rotation invariant representation of image patches is calculated with a scattering transform. It is implemented with a deep convolution network, which computes successive wavelet transforms and modulus non-linearities. Invariants to scaling, shearing and small deformations are calculated with linear operators in the scattering domain. State-of-the-art classification results are obtained over texture databases with uncontrolled viewing conditions.", "title": "" }, { "docid": "fd1e327327068a1373e35270ef257c59", "text": "We consider the problem of building high-level, class-specific feature detectors from only unlabeled data. For example, is it possible to learn a face detector using only unlabeled images? To answer this, we train a deep sparse autoencoder on a large dataset of images (the model has 1 billion connections, the dataset has 10 million 200×200 pixel images downloaded from the Internet). We train this network using model parallelism and asynchronous SGD on a cluster with 1,000 machines (16,000 cores) for three days. Contrary to what appears to be a widely-held intuition, our experimental results reveal that it is possible to train a face detector without having to label images as containing a face or not. Control experiments show that this feature detector is robust not only to translation but also to scaling and out-of-plane rotation. We also find that the same network is sensitive to other high-level concepts such as cat faces and human bodies. Starting from these learned features, we trained our network to recognize 22,000 object categories from ImageNet and achieve a leap of 70% relative improvement over the previous state-of-the-art.", "title": "" } ]
[ { "docid": "5ed8c1b7efa827d9efcd537cd831142c", "text": "The fundamental role of the software defined networks (SDNs) is to decouple the data plane from the control plane, thus providing a logically centralized visibility of the entire network to the controller. This enables the applications to innovate through network programmability. To establish a centralized visibility, a controller is required to discover a network topology of the entire SDN infrastructure. However, discovering a network topology is challenging due to: 1) the frequent migration of the virtual machines in the data centers; 2) lack of authentication mechanisms; 3) scarcity of the SDN standards; and 4) integration of security mechanisms for the topology discovery. To this end, in this paper, we present a comprehensive survey of the topology discovery and the associated security implications in SDNs. This survey provides discussions related to the possible threats relevant to each layer of the SDN architecture, highlights the role of the topology discovery in the traditional network and SDN, presents a thematic taxonomy of topology discovery in SDN, and provides insights into the potential threats to the topology discovery along with its state-of-the-art solutions in SDN. Finally, this survey also presents future challenges and research directions in the field of SDN topology discovery.", "title": "" }, { "docid": "11fe82917eb56b1188ddc46cf8b5d0e2", "text": "We show that to capture the empirical effects of uncertainty on the unemployment rate, it is crucial to study the interactions between search frictions and nominal rigidities. Our argument is guided by empirical evidence showing that an increase in uncertainty leads to a large increase in unemployment and a significant decline in inflation, suggesting that uncertainty partly operates via an aggregate demand channel. To understand the mechanism through which uncertainty generates these macroeconomic effects, we incorporate search frictions and nominal rigidities in a DSGE model. We show that an option-value channel that arises from search frictions interacts with a demand channel that arises from nominal rigidities, and such interactions magnify the effects of uncertainty to generate roughly 60 percent of the observed increase in unemployment following an uncer-", "title": "" }, { "docid": "7bd440a6c7aece364877dbb5170cfcfb", "text": "Semantic representation lies at the core of several applications in Natural Language Processing. However, most existing semantic representation techniques cannot be used effectively for the representation of individual word senses. We put forward a novel multilingual concept representation, called MUFFIN, which not only enables accurate representation of word senses in different languages, but also provides multiple advantages over existing approaches. MUFFIN represents a given concept in a unified semantic space irrespective of the language of interest, enabling cross-lingual comparison of different concepts. We evaluate our approach in two different evaluation benchmarks, semantic similarity and Word Sense Disambiguation, reporting state-of-the-art performance on several standard datasets.", "title": "" }, { "docid": "251a47eb1a5307c5eba7372ce09ea641", "text": "A new class of target link flooding attacks (LFA) can cut off the Internet connections of a target area without being detected because they employ legitimate flows to congest selected links. 
Although new mechanisms for defending against LFA have been proposed, the deployment issues limit their usages since they require modifying routers. In this paper, we propose LinkScope, a novel system that employs both the end-to-end and the hop-by-hop network measurement techniques to capture abnormal path performance degradation for detecting LFA and then correlate the performance data and traceroute data to infer the target links or areas. Although the idea is simple, we tackle a number of challenging issues, such as conducting large-scale Internet measurement through noncooperative measurement, assessing the performance on asymmetric Internet paths, and detecting LFA. We have implemented LinkScope with 7174 lines of C codes and the extensive evaluation in a testbed and the Internet show that LinkScope can quickly detect LFA with high accuracy and low false positive rate.", "title": "" }, { "docid": "5c521a43b743144ed2df29fd7adf4aa3", "text": "We address the problem of geo-registering ground-based multi-view stereo models by ground-to-aerial image matching. The main contribution is a fully automated geo-registration pipeline with a novel viewpoint-dependent matching method that handles ground to aerial viewpoint variation. We conduct large-scale experiments which consist of many popular outdoor landmarks in Rome. The proposed approach demonstrates a high success rate for the task, and dramatically outperforms state-of-the-art techniques, yielding geo-registration at pixel-level accuracy.", "title": "" }, { "docid": "62aa091313743dda4fc8211eccd78f83", "text": "We present a simple regularization technique for Recurrent Neural Networks (RNNs) with Long Short-Term Memory (LSTM) units. Dropout, the most successful technique for regularizing neural networks, does not work well with RNNs and LSTMs. In this paper, we show how to correctly apply dropout to LSTMs, and show that it substantially reduces overfitting on a variety of tasks. These tasks include language modeling, speech recognition, image caption generation, and machine translation.", "title": "" }, { "docid": "f9f26d8ff95aff0a361fcb321e57a779", "text": "A novel algorithm for the detection of underwater man-made objects in forward-looking sonar imagery is proposed. The algorithm takes advantage of the integral-image representation to quickly compute features, and progressively reduces the computational load by working on smaller portions of the image along the detection process phases. By adhering to the proposed scheme, real-time detection on sonar data onboard an autonomous vehicle is made possible. The proposed method does not require training data, as it dynamically takes into account environmental characteristics of the sensed sonar data. The proposed approach has been implemented and integrated into the software system of the Gemellina autonomous surface vehicle, and is able to run in real time. The validity of the proposed approach is demonstrated on real experiments carried out at sea with the Gemellina autonomous surface vehicle.", "title": "" }, { "docid": "345f54e3a6d00ecb734de529ed559933", "text": "Size and cost of a switched mode power supply can be reduced by increasing the switching frequency. The maximum switching frequency and the maximum input voltage range, respectively, is limited by the minimum propagated on-time pulse, which is mainly determined by the level shifter speed. 
At switching frequencies above 10 MHz, a voltage conversion with an input voltage range up to 50 V and output voltages below 5 V requires an on-time of a pulse width modulated signal of less than 5 ns. This cannot be achieved with conventional level shifters. This paper presents a level shifter circuit, which controls an NMOS power FET on a high-voltage domain up to 50 V. The level shifter was implemented as part of a DCDC converter in a 180 nm BiCMOS technology. Experimental results confirm a propagation delay of 5 ns and on-time pulses of less than 3 ns. An overlapping clamping structure with low parasitic capacitances in combination with a high-speed comparator makes the level shifter also very robust against large coupling currents during high-side transitions as fast as 20 V/ns, verified by measurements. Due to the high dv/dt, capacitive coupling currents can be two orders of magnitude larger than the actual signal current. Depending on the conversion ratio, the presented level shifter enables an increase of the switching frequency for multi-MHz converters towards 100 MHz. It supports high input voltages up to 50 V and it can be applied also to other high-speed applications.", "title": "" }, { "docid": "8767787aaa4590acda7812411135c168", "text": "Automatic annotation of images is one of the fundamental problems in computer vision applications. With the increasing amount of freely available images, it is quite possible that the training data used to learn a classifier has different distribution from the data which is used for testing. This results in degradation of the classifier performance and highlights the problem known as domain adaptation. Framework for domain adaptation typically requires a classification model which can utilize several classifiers by combining their results to get the desired accuracy. This work proposes depth-based and iterative depth-based fusion methods which are basically rank-based fusion methods and utilize rank of the predicted labels from different classifiers. Two frameworks are also proposed for domain adaptation. The first framework uses traditional machine learning algorithms, while the other works with metric learning as well as transfer learning algorithm. Motivated from ImageCLEF’s 2014 domain adaptation task, these frameworks with the proposed fusion methods are validated and verified by conducting experiments on the images from five domains having varied distributions. Bing, Caltech, ImageNet, and PASCAL are used as source domains and the target domain is SUN. Twelve object categories are chosen from these domains. The experimental results show the performance improvement not only over the baseline system, but also over the winner of the ImageCLEF’s 2014 domain adaptation challenge.", "title": "" }, { "docid": "fb23919bd638765ec07efda41e4c4cf6", "text": "OBJECTIVE\nThe distinct trajectories of patients with autism spectrum disorders (ASDs) have not been extensively studied, particularly regarding clinical manifestations beyond the neurobehavioral criteria from the Diagnostic and Statistical Manual of Mental Disorders. The objective of this study was to investigate the patterns of co-occurrence of medical comorbidities in ASDs.\n\n\nMETHODS\nInternational Classification of Diseases, Ninth Revision codes from patients aged at least 15 years and a diagnosis of ASD were obtained from electronic medical records. 
These codes were aggregated by using phenotype-wide association studies categories and processed into 1350-dimensional vectors describing the counts of the most common categories in 6-month blocks between the ages of 0 to 15. Hierarchical clustering was used to identify subgroups with distinct courses.\n\n\nRESULTS\nFour subgroups were identified. The first was characterized by seizures (n = 120, subgroup prevalence 77.5%). The second (n = 197) was characterized by multisystem disorders including gastrointestinal disorders (prevalence 24.3%) and auditory disorders and infections (prevalence 87.8%), and the third was characterized by psychiatric disorders (n = 212, prevalence 33.0%). The last group (n = 4316) could not be further resolved. The prevalence of psychiatric disorders was uncorrelated with seizure activity (P = .17), but a significant correlation existed between gastrointestinal disorders and seizures (P < .001). The correlation results were replicated by using a second sample of 496 individuals from a different geographic region.\n\n\nCONCLUSIONS\nThree distinct patterns of medical trajectories were identified by unsupervised clustering of electronic health record diagnoses. These may point to distinct etiologies with different genetic and environmental contributions. Additional clinical and molecular characterizations will be required to further delineate these subgroups.", "title": "" }, { "docid": "11cfe05879004f225aee4b3bda0ce30b", "text": "Data mining system contain large amount of private and sensitive data such as healthcare, financial and criminal records. These private and sensitive data can not be share to every one, so privacy protection of data is required in data mining system for avoiding privacy leakage of data. Data perturbation is one of the best methods for privacy preserving. We used data perturbation method for preserving privacy as well as accuracy. In this method individual data value are distorted before data mining application. In this paper we present min max normalization transformation based data perturbation. The privacy parameters are used for measurement of privacy protection and the utility measure shows the performance of data mining technique after data distortion. We performed experiment on real life dataset and the result show that min max normalization transformation based data perturbation method is effective to protect confidential information and also maintain the performance of data mining technique after data distortion.", "title": "" }, { "docid": "4d11fb2e8043e4f7cce009e0af65af86", "text": "Various hand-crafted features and metric learning methods prevail in the field of person re-identification. Compared to these methods, this paper proposes a more general way that can learn a similarity metric from image pixels directly. By using a “siamese” deep neural network, the proposed method can jointly learn the color feature, texture feature and metric in a unified framework. The network has a symmetry structure with two sub-networks which are connected by Cosine function. To deal with the big variations of person images, binomial deviance is used to evaluate the cost between similarities and labels, which is proved to be robust to outliers. Compared to existing researches, a more practical setting is studied in the experiments that is training and test on different datasets (cross dataset person re-identification). 
Both in “intra dataset” and “cross dataset” settings, the superiorities of the proposed method are illustrated on VIPeR and PRID.", "title": "" }, { "docid": "cb46b6331371cf3b790ba2b10539f70e", "text": "The problem of matching measured latitude/longitude points to roads is becoming increasingly important. This paper describes a novel, principled map matching algorithm that uses a Hidden Markov Model (HMM) to find the most likely road route represented by a time-stamped sequence of latitude/longitude pairs. The HMM elegantly accounts for measurement noise and the layout of the road network. We test our algorithm on ground truth data collected from a GPS receiver in a vehicle. Our test shows how the algorithm breaks down as the sampling rate of the GPS is reduced. We also test the effect of increasing amounts of additional measurement noise in order to assess how well our algorithm could deal with the inaccuracies of other location measurement systems, such as those based on WiFi and cell tower multilateration. We provide our GPS data and road network representation as a standard test set for other researchers to use in their map matching work.", "title": "" }, { "docid": "39ff54263fa91d9d178a143a49239f68", "text": "A series of 3-(2H-1,2,4-triazol-5-yl)-1,3-thiazolidin-4-one derivatives (7c-l) was designed and synthesized. Their structures have been elucidated based on analytical and spectral data. They were evaluated for their antibacterial and antifungal activities. Compound 7h showed the highest activity against all tested strains, except P. vulgaris, with MIC 8 μg/mL and 4 μg/mL against S. aureus and C. albicans, respectively. Furthermore, Compounds 7c, 7h, and 7j demonstrated moderate anti-mycobacterium activity. The binding mode of the synthesized thiazolidinones to bacterial MurB enzyme was also studied. Good interactions between the docked compounds to the MurB active site were observed primarily with Asn83, Arg310, Arg188 and Ser82 amino acid residues.", "title": "" }, { "docid": "88e97dc5105ef142d422bec88e897ddd", "text": "This paper reports on an experiment realized on the IBM 5Q chip which demonstrates strong evidence for the advantage of using error detection and fault-tolerant design of quantum circuits. By showing that fault-tolerant quantum computation is already within our reach, the author hopes to encourage this approach.", "title": "" }, { "docid": "baf3c2456fa0e28b39730a5803ddcc2b", "text": "Music21 is an object-oriented toolkit for analyzing, searching, and transforming music in symbolic (scorebased) forms. The modular approach of the project allows musicians and researchers to write simple scripts rapidly and reuse them in other projects. The toolkit aims to provide powerful software tools integrated with sophisticated musical knowledge to both musicians with little programming experience (especially musicologists) and to programmers with only modest music theory skills. This paper introduces the music21 system, demonstrating how to use it and the types of problems it is wellsuited toward advancing. We include numerous examples of its power and flexibility, including demonstrations of graphing data and generating annotated musical scores.", "title": "" }, { "docid": "8093219e7e2b4a7067f8d96118a5ea93", "text": "We model knowledge graphs for their completion by encoding each entity and relation into a numerical space. 
All previous work including Trans(E, H, R, and D) ignore the heterogeneity (some relations link many entity pairs and others do not) and the imbalance (the number of head entities and that of tail entities in a relation could be different) of knowledge graphs. In this paper, we propose a novel approach TranSparse to deal with the two issues. In TranSparse, transfer matrices are replaced by adaptive sparse matrices, whose sparse degrees are determined by the number of entities (or entity pairs) linked by relations. In experiments, we design structured and unstructured sparse patterns for transfer matrices and analyze their advantages and disadvantages. We evaluate our approach on triplet classification and link prediction tasks. Experimental results show that TranSparse outperforms Trans(E, H, R, and D) significantly, and achieves state-ofthe-art performance.", "title": "" }, { "docid": "73f922013679a0522c25de09c37ab323", "text": "Despite the wide adoption of agile methodologies, software development teams still struggle to meet time, budget and scope. These challenges are partially explained by practitioners' lack of motivation to apply agile techniques in practice. Some researchers have already proposed to tackle this problem with gamification, i.e. the use of game elements and game-design in non-game contexts. However, very few gamification proposals were evaluated with Scrum teams in practice. In this paper, we present a software tool based on gamification to make Scrum techniques more fun and engaging for practitioners. As a result, practitioners should increase their motivation to apply Scrum techniques in practice, which in turn might result in more and better software. This paper describes the first iteration of a larger research effort that follows the Design Science Research methodology. In this first research iteration, a prototype was developed, and a project team is currently using the prototype to evaluate the real impact of gamification on Scrum adoption in a real-world organization.", "title": "" }, { "docid": "f5c4bdf959e455193221a1fa76e1895a", "text": "This book contains a wide variety of hot topics on advanced computational intelligence methods which incorporate the concept of complex and hypercomplex number systems into the framework of artificial neural networks. In most chapters, the theoretical descriptions of the methodology and its applications to engineering problems are excellently balanced. This book suggests that a better information processing method could be brought about by selecting a more appropriate information representation scheme for specific problems, not only in artificial neural networks but also in other computational intelligence frameworks. The advantages of CVNNs and hypercomplex-valued neural networks over real-valued neural networks are confirmed in some case studies but still unclear in general. Hence, there is a need to further explore the difference between them from the viewpoint of nonlinear dynamical systems. Nevertheless, it seems that the applications of CVNNs and hypercomplex-valued neural networks are very promising.", "title": "" }, { "docid": "4520a0c8bdd2c0c55e181ec4bfe80d35", "text": "The authors present a case which brings out a unique modality of child homicide by placing the baby in a washing machine and turning it on. The murder was perpetrated by the baby’s mother, who suffered from a serious depressive disorder. A postmortem RX and then a forensic autopsy were performed, followed by histologic examinations and toxicology. 
On the basis of the results of the autopsy, as well as the histology and the negative toxicological data, the cause of death was identified as acute asphyxia. This diagnosis was rendered in light of the absence of other causes of death, as well as the presence of typical signs of asphyxia, such as epicardial and pleural petechiae and, above all, the microscopic examinations, which pointed out a massive acute pulmonary emphysema. Regarding the cause of the asphyxia, at least two mechanisms can be identified: drowning and smothering. In addition, the histology of the brain revealed some findings that can be regarded as a consequence of the barotrauma due to the centrifugal force applied by the rotating drum of the washing machine. Another remarkable aspect is that we are dealing with a mentally-ill assailant. In fact, the baby’s mother, after a psychiatric examination, was confirmed to be suffering from a mental illness—a severe depressive disorder—and so she was adjudicated not-guilty-by-reason-of-insanity. This case warrants attention because of its uniqueness and complexity and, above all, its usefulness in the understanding of the pathophysiology of this particular manner of death.", "title": "" } ]
scidocsrr
8af5509f3ed558520d7bea466b0dd5b3
RGB-D flow: Dense 3-D motion estimation using color and depth
[ { "docid": "1589e72380265787a10288c5ad906670", "text": "The goal of this work is to accurately detect and localize boundaries in natural scenes using local image measurements. We formulate features that respond to characteristic changes in brightness, color, and texture associated with natural boundaries. In order to combine the information from these features in an optimal way, we train a classifier using human labeled images as ground truth. The output of this classifier provides the posterior probability of a boundary at each image location and orientation. We present precision-recall curves showing that the resulting detector significantly outperforms existing approaches. Our two main results are 1) that cue combination can be performed adequately with a simple linear model and 2) that a proper, explicit treatment of texture is required to detect boundaries in natural images.", "title": "" } ]
[ { "docid": "9dfda21b53ade4c92ef640162f2dd8ef", "text": "Many recent works on knowledge distillation have provided ways to transfer the knowledge of a trained network for improving the learning process of a new one, but finding a good technique for knowledge distillation is still an open problem. In this paper, we provide a new perspective based on a decision boundary, which is one of the most important component of a classifier. The generalization performance of a classifier is closely related to the adequacy of its decision boundaries, so a good classifier bears good decision boundaries. Therefore, transferring the boundaries directly can be a good attempt for knowledge distillation. To realize this goal, we utilize an adversarial attack to discover samples supporting the decision boundaries. Based on this idea, to transfer more accurate information about the decision boundaries, the proposed algorithm trains a student classifier based on the adversarial samples supporting the decision boundaries. Alongside, two metrics are proposed to evaluate the similarity between decision boundaries. Experiments show that the proposed method indeed improves knowledge distillation and produces much more similar decision boundaries to the teacher classifier.", "title": "" }, { "docid": "e0d0a0f59f5a894c3674b903c5b7b14c", "text": "Automated Information Systems has played a major role in the growth, advancement, and modernization of our daily work processes. The main purpose of this paper is to develop a safe and secure web based attendance monitoring system using Biometrics and Radio Frequency Identification (RFID) Technology based on multi-tier architecture, for both computers and smartphones. The system can maintain the attendance records of both students and teachers/staff members of an institution. The system can also detect the current location of the students, faculties, and other staff members anywhere within the domain of institution campus. With the help of android application one can receive live feeds of various campus activities, keep updated with the current topics in his/her enrolled courses as well as track his/her friends on a real time basis. An automated SMS service is facilitated in the system, which sends an SMS automatically to the parents in order to notify that their ward has successfully reached the college. Parents as well as student will be notified via e-mail, if the student is lagging behind in attendance. There is a functionality of automatic attendance performance graph in the system, which gives an idea of the student's consistency in attendance throughout the semester.", "title": "" }, { "docid": "8e1b6eb4a939c493eff27cf78bab8d47", "text": "Among the various natural calamities, flood is considered one of the most catastrophic natural hazards, which has a significant impact on the socio-economic lifeline of a country. The Assessment of flood risks facilitates taking appropriate measures to reduce the consequences of flooding. The flood risk assessment requires Big data which are coming from different sources, such as sensors, social media, and organizations. However, these data sources contain various types of uncertainties because of the presence of incomplete and inaccurate information. This paper presents a Belief rule-based expert system (BRBES) which is developed in Big data platform to assess flood risk in real time. 
The system processes extremely large dataset by integrating BRBES with Apache Spark while a web-based interface has developed allowing the visualization of flood risk in real time. Since the integrated BRBES employs knowledge driven learning mechanism, it has been compared with other data-driven learning mechanisms to determine the reliability in assessing flood risk. The integrated BRBES produces reliable results in comparison to other data-driven approaches. Data for the expert system has been collected by considering different case study areas of Bangladesh to validate the system.", "title": "" }, { "docid": "c64ff373043fe7814d2acef08142e1a5", "text": "This article deals with the identification of gene regulatory networks from experimental data using a statistical machine learning approach. A stochastic model of gene interactions capable of handling missing variables is proposed. It can be described as a dynamic Bayesian network particularly well suited to tackle the stochastic nature of gene regulation and gene expression measurement. Parameters of the model are learned through a penalized likelihood maximization implemented through an extended version of EM algorithm. Our approach is tested against experimental data relative to the S.O.S. DNA Repair network of the Escherichia coli bacterium. It appears to be able to extract the main regulations between the genes involved in this network. An added missing variable is found to model the main protein of the network. Good prediction abilities on unlearned data are observed. These first results are very promising: they show the power of the learning algorithm and the ability of the model to capture gene interactions.", "title": "" }, { "docid": "ef1064ba6dcd464fd048aab9f70c4bdd", "text": "The problem of reproducing high dynamic range images on devices with restricted dynamic range has gained a lot of interest in the computer graphics community. There exist various approaches to this issue, which span several research areas including computer graphics, image processing, color science, physiology, neurology, psychology, etc. These approaches assume a thorough knowledge of both the objective and subjective attributes of an image. However, no comprehensive overview and analysis of such attributes has been published so far. In this paper, we present an overview of image quality attributes of different tone mapping methods. Furthermore, we propose a scheme of relationships between these attributes, leading to the definition of an overall image quality measure. We present results of subjective psychophysical tests that we have performed to prove the proposed relationship scheme. We also present the evaluation of existing tone mapping methods with regard to these attributes. Our effort is not just useful to get into the tone mapping field or when implementing a tone mapping operator, but it also sets the stage for well-founded quality comparisons between tone mapping operators. By providing good definitions of the different attributes, user-driven or fully automatic comparisons are made possible at all.", "title": "" }, { "docid": "8e7adfab46fa21202e7ff7311d11b51d", "text": "In this paper we describe a joint effort by the City University of New York (CUNY), University of Illinois at Urbana-Champaign (UIUC) and SRI International at participating in the mono-lingual entity linking (MLEL) and cross-lingual entity linking (CLEL) tasks for the NIST Text Analysis Conference (TAC) Knowledge Base Population (KBP2011) track. 
The MLEL system is based on a simple combination of two published systems by CUNY (Chen and Ji, 2011) and UIUC (Ratinov et al., 2011). Therefore, we mainly focus on describing our new CLEL system. In addition to a baseline system based on name translation, machine translation and MLEL, we propose two novel approaches. One is based on a cross-lingual name similarity matrix, iteratively updated based on monolingual co-occurrence, and the other uses topic modeling to enhance performance. Our best systems placed 4th in mono-lingual track and 2nd in cross-lingual track.", "title": "" }, { "docid": "134297d45c943f0751f002fa5c456940", "text": "Widespread application of real-time, Nonlinear Model Predictive Control (NMPC) algorithms to systems of large scale or with fast dynamics is challenged by the high associated computational cost, in particular in presence of long prediction horizons. In this paper, a fast NMPC strategy to reduce the on-line computational cost is proposed. A Curvature-based Measure of Nonlinearity (CMoN) of the system is exploited to reduce the required number of sensitivity computations, which largely contribute to the overall computational cost. The proposed scheme is validated by a simulation study on the chain of masses motion control problem, a toy example that can be easily extended to an arbitrary dimension. Simulations have been run with long prediction horizons and large state dimensions. Results show that sensitivity computations are significantly reduced with respect to other sensitivity updating schemes, while preserving control performance.", "title": "" }, { "docid": "49e616b9db5ba5003ae01abfb6ed3e16", "text": "BACKGROUND\nAlthough substantial evidence suggests that stressful life events predispose to the onset of episodes of depression and anxiety, the essential features of these events that are depressogenic and anxiogenic remain uncertain.\n\n\nMETHODS\nHigh contextual threat stressful life events, assessed in 98 592 person-months from 7322 male and female adult twins ascertained from a population-based registry, were blindly rated on the dimensions of humiliation, entrapment, loss, and danger and their categories. Onsets of pure major depression (MD), pure generalized anxiety syndrome (GAS) (defined as generalized anxiety disorder with a 2-week minimum duration), and mixed MD-GAS episodes were examined using logistic regression.\n\n\nRESULTS\nOnsets of pure MD and mixed MD-GAS were predicted by higher ratings of loss and humiliation. Onsets of pure GAS were predicted by higher ratings of loss and danger. High ratings of entrapment predicted only onsets of mixed episodes. The loss categories of death and respondent-initiated separation predicted pure MD but not pure GAS episodes. Events with a combination of humiliation (especially other-initiated separation) and loss were more depressogenic than pure loss events, including death. No sex differences were seen in the prediction of episodes of illness by event categories.\n\n\nCONCLUSIONS\nIn addition to loss, humiliating events that directly devalue an individual in a core role were strongly linked to risk for depressive episodes. Event dimensions and categories that predispose to pure MD vs pure GAS episodes can be distinguished with moderate specificity. 
The event dimensions that preceded mixed MD-GAS episodes were largely the sum of those that preceded pure MD and pure GAS episodes.", "title": "" }, { "docid": "dd975fded3a24052a31bb20587ff8566", "text": "This paper presents a design methodology for a high power density converter, which emphasizes weight minimization. The design methodology considers various inverter topologies and semiconductor devices with application of cold plate cooling and LCL filter. Design for a high-power inverter is evaluated with demonstration of a 50 kVA 2-level 3-phase SiC inverter operating at 60 kHz switching frequency. The prototype achieves high gravimetric power density of 6.49 kW/kg.", "title": "" }, { "docid": "fdd7237680ee739b598cd508c4a2ed38", "text": "Rectovaginal Endometriosis (RVE) is a severe form of endometriosis classified by Kirtner as stage 4 [1,2]. It is less frequent than peritoneal or ovarian endometriosis affecting 3.8% to 37% of patients with endometriosis [3,4]. RVE infiltrates the rectum, vagina, and rectovaginal septum, up to obliteration of the pouch of Douglas [4]. Endometriotic nodules exceeding 30 mm in diameter have 17.9% risk of ureteral involvement [5], while 5.3% to 12% of patients have bowel endometriosis, most commonly found in the recto-sigmoid involving 74% of those patients [3,4].", "title": "" }, { "docid": "cae661146bc0156af25d8014cb61ef0b", "text": "The two critical factors distinguishing inventory management in a multifirm supply-chain context from the more traditional centrally planned perspective are incentive conflicts and information asymmetries. We study the well-known order quantity/reorder point (Q r) model in a two-player context, using a framework inspired by observations during a case study. We show how traditional allocations of decision rights to supplier and buyer lead to inefficient outcomes, and we use principal-agent models to study the effects of information asymmetries about setup cost and backorder cost, respectively. We analyze two “opposite” models of contracting on inventory policies. First, we derive the buyer’s optimal menu of contracts when the supplier has private information about setup cost, and we show how consignment stock can help reduce the impact of this information asymmetry. Next, we study consignment and assume the supplier cannot observe the buyer’s backorder cost. We derive the supplier’s optimal menu of contracts on consigned stock level and show that in this case, the supplier effectively has to overcompensate the buyer for the cost of each stockout. Our theoretical analysis and the case study suggest that consignment stock helps reduce cycle stock by providing the supplier with an additional incentive to decrease batch size, but simultaneously gives the buyer an incentive to increase safety stock by exaggerating backorder costs. 
This framework immediately points to practical recommendations on how supply-chain incentives should be realigned to overcome existing information asymmetries.", "title": "" }, { "docid": "3e83f454f66e8aba14733205c8e19753", "text": "BACKGROUND\nNormal-weight adults gain lower-body fat via adipocyte hyperplasia and upper-body subcutaneous (UBSQ) fat via adipocyte hypertrophy.\n\n\nOBJECTIVES\nWe investigated whether regional fat loss mirrors fat gain and whether the loss of lower-body fat is attributed to decreased adipocyte number or size.\n\n\nDESIGN\nWe assessed UBSQ, lower-body, and visceral fat gains and losses in response to overfeeding and underfeeding in 23 normal-weight adults (15 men) by using dual-energy X-ray absorptiometry and abdominal computed tomography scans. Participants gained ∼5% of weight in 8 wk and lost ∼80% of gained fat in 8 wk. We measured abdominal subcutaneous and femoral adipocyte sizes and numbers after weight gain and loss.\n\n\nRESULTS\nVolunteers gained 3.1 ± 2.1 (mean ± SD) kg body fat with overfeeding and lost 2.4 ± 1.7 kg body fat with underfeeding. Although UBSQ and visceral fat gains were completely reversed after 8 wk of underfeeding, lower-body fat had not yet returned to baseline values. Abdominal and femoral adipocyte sizes, but not numbers, decreased with weight loss. Decreases in abdominal adipocyte size and UBSQ fat mass were correlated (ρ = 0.76, P = 0.001), as were decreases in femoral adipocyte size and lower-body fat (ρ = 0.49, P = 0.05).\n\n\nCONCLUSIONS\nUBSQ and visceral fat increase and decrease proportionately with a short-term weight gain and loss, whereas a gain of lower-body fat does not relate to the loss of lower-body fat. The loss of lower-body fat is attributed to a reduced fat cell size, but not number, which may result in long-term increases in fat cell numbers.", "title": "" }, { "docid": "70fd543752f17237386b3f8e99954230", "text": "Using Markov logic to integrate logical and distributional information in natural-language semantics results in complex inference problems involving long, complicated formulae. Current inference methods for Markov logic are ineffective on such problems. To address this problem, we propose a new inference algorithm based on SampleSearch that computes probabilities of complete formulae rather than ground atoms. We also introduce a modified closed-world assumption that significantly reduces the size of the ground network, thereby making inference feasible. Our approach is evaluated on the recognizing textual entailment task, and experiments demonstrate its dramatic impact on the efficiency", "title": "" }, { "docid": "071d2d56b4516dc77fb70fcefb999fa0", "text": "Boiling heat transfer occurs in many situations and can be used for thermal management in various engineered systems with high energy density, from power electronics to heat exchangers in power plants and nuclear reactors. Essentially, boiling is a complex physical process that involves interactions between heating surface, liquid, and vapor. For engineering applications, the boiling heat transfer is usually predicted by empirical correlations or semi-empirical models, which has relatively large uncertainty. In this paper, a data-driven approach based on deep feedforward neural networks is studied. The proposed networks use near wall local features to predict the boiling heat transfer. The inputs of networks include the local momentum and energy convective transport, pressure gradients, turbulent viscosity, and surface information. 
The outputs of the networks are the quantities of interest of a typical boiling system, including heat transfer components, wall superheat, and near wall void fraction. The networks are trained by the high-fidelity data processed from first principle simulation of pool boiling under varying input heat fluxes. State-of-the-art algorithms are applied to prevent the overfitting issue when training the deep networks. The trained networks are tested in interpolation cases and extrapolation cases which both demonstrate good agreement with the original high-fidelity simulation results.", "title": "" }, { "docid": "fcb526dfd8f1d24b622995d4c0ff3e6c", "text": "Scene flow is defined as the motion field in 3D space, and can be computed from a single view when using an RGBD sensor. We propose a new scene flow approach that exploits the local and piecewise rigidity of real world scenes. By modeling the motion as a field of twists, our method encourages piecewise smooth solutions of rigid body motions. We give a general formulation to solve for local and global rigid motions by jointly using intensity and depth data. In order to deal efficiently with a moving camera, we model the motion as a rigid component plus a non-rigid residual and propose an alternating solver. The evaluation demonstrates that the proposed method achieves the best results in the most commonly used scene flow benchmark. Through additional experiments we indicate the general applicability of our approach in a variety of different scenarios.", "title": "" }, { "docid": "9984fc080b1f2fe2bf4910b9091591a7", "text": "In the modern era, the vehicles are focused to be automated to give human driver relaxed driving. In the field of automobile various aspects have been considered which makes a vehicle automated. Google, the biggest network has started working on the self-driving cars since 2010 and still developing new changes to give a whole new level to the automated vehicles. In this paper we have focused on two applications of an automated car, one in which two vehicles have same destination and one knows the route, where other don't. The following vehicle will follow the target (i.e. Front) vehicle automatically. The other application is automated driving during the heavy traffic jam, hence relaxing driver from continuously pushing brake, accelerator or clutch. The idea described in this paper has been taken from the Google car, defining the one aspect here under consideration is making the destination dynamic. This can be done by a vehicle automatically following the destination of another vehicle. Since taking intelligent decisions in the traffic is also an issue for the automated vehicle so this aspect has been also under consideration in this paper.", "title": "" }, { "docid": "23d560ca3bb6f2d7d9b615b5ad3224d2", "text": "The Pebbles project is creating applications to connect multiple Personal Digital Assistants (PDAs) to a main computer such as a PC. We are currently using 3Com PalmPilots because they are popular and widespread. We created the “Remote Commander” application to allow users to take turns sending input from their PalmPilots to the PC as if they were using the PC's mouse and keyboard. “PebblesDraw” is a shared whiteboard application we built that allows all of the users to send input simultaneously while sharing the same PC display. We are investigating the use of these applications in various contexts, such as co-located meetings. 
Keywords: Personal Digital Assistants (PDAs), PalmPilot, Single Display Groupware, Pebbles, Amulet", "title": "" }, { "docid": "1b581e17dad529b3452d3fbdcb1b3dd1", "text": "Authorship attribution is the task of identifying the author of a given text. The main concern of this task is to define an appropriate characterization of documents that captures the writing style of authors. This paper proposes a new method for authorship attribution supported on the idea that a proper identification of authors must consider both stylistic and topic features of texts. This method characterizes documents by a set of word sequences that combine functional and content words. The experimental results on poem classification demonstrated that this method outperforms most current state-of-the-art approaches, and that it is appropriate to handle the attribution of short documents.", "title": "" }, { "docid": "4bce6150e9bc23716a19a0d7c02640c0", "text": "A Data Mining Framework for Constructing Features and Models for Intrusion Detection Systems", "title": "" }, { "docid": "d3156f87367e8f55c3e62d376352d727", "text": "The topic of deep-learning has recently received considerable attention in the machine learning research community, having great potential to liberate computer scientists from hand-engineering training datasets, because the method can learn the desired features automatically. This is particularly beneficial in medical research applications of machine learning, where getting good hand labelling of data is especially expensive. We propose application of a single-layer sparse-auto encoder to dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) for fully automatic classification of tissue types in a large unlabelled dataset with minimal human interference -- in a manner similar to data-mining. DCE-MRI analysis, looking at the change of the MR contrast-agent concentration over successively acquired images, is time-series analysis. We analyse the change of brightness (which is related to the contrast-agent concentration) of the DCE-MRI images over time to classify different tissue types in the images. Therefore our system is an application of an auto encoder to time-series analysis while the demonstrated result and further possible successive application areas are in computer vision. We discuss the important factors affecting performance of the system in applying the auto encoder to the time-series analysis of DCE-MRI medical image data.", "title": "" } ]
scidocsrr
7d9425bf1cffd3aaf2c9ce79a0107b2e
Fast ConvNets Using Group-Wise Brain Damage
[ { "docid": "0a14a4d38f1f05aec6e0ea5d658defcf", "text": "In this work, we investigate the use of sparsity-inducing regularizers during training of Convolution Neural Networks (CNNs). These regularizers encourage that fewer connections in the convolution and fully connected layers take non-zero values and in effect result in sparse connectivity between hidden units in the deep network. This in turn reduces the memory and runtime cost involved in deploying the learned CNNs. We show that training with such regularization can still be performed using stochastic gradient descent implying that it can be used easily in existing codebases. Experimental evaluation of our approach on MNIST, CIFAR, and ImageNet datasets shows that our regularizers can result in dramatic reductions in memory requirements. For instance, when applied on AlexNet, our method can reduce the memory consumption by a factor of four with minimal loss in accuracy.", "title": "" }, { "docid": "28c03f6fb14ed3b7d023d0983cb1e12b", "text": "The focus of this paper is speeding up the application of convolutional neural networks. While delivering impressive results across a range of computer vision and machine learning tasks, these networks are computationally demanding, limiting their deployability. Convolutional layers generally consume the bulk of the processing time, and so in this work we present two simple schemes for drastically speeding up these layers. This is achieved by exploiting cross-channel or filter redundancy to construct a low rank basis of filters that are rank-1 in the spatial domain. Our methods are architecture agnostic, and can be easily applied to existing CPU and GPU convolutional frameworks for tuneable speedup performance. We demonstrate this with a real world network designed for scene text character recognition [15], showing a possible 2.5⇥ speedup with no loss in accuracy, and 4.5⇥ speedup with less than 1% drop in accuracy, still achieving state-of-the-art on standard benchmarks.", "title": "" } ]
[ { "docid": "b67c5425cf7e78d94013016e3efbda16", "text": "Determinantal point processes (DPPs) are wellsuited for modeling repulsion and have proven useful in applications where diversity is desired. While DPPs have many appealing properties, learning the parameters of a DPP is difficult, as the likelihood is non-convex and is infeasible to compute in many scenarios. Here we propose Bayesian methods for learning the DPP kernel parameters. These methods are applicable in largescale discrete and continuous DPP settings, even when the likelihood can only be bounded. We demonstrate the utility of our DPP learning methods in studying the progression of diabetic neuropathy based on the spatial distribution of nerve fibers, and in studying human perception of diversity in images.", "title": "" }, { "docid": "d6c626ff39649554ce00d1322ca74e2d", "text": "The increased availability of information technologies has enabled law enforcement agencies to compile databases with detailed information about major felonies. Machine learning techniques can utilize these databases to produce decision-aid tools to support police investigations. This paper presents a methodology for obtaining a Bayesian network (BN) model of offender behavior from a database of cleared homicides. The BN can infer the characteristics of an unknown offender from the crime scene evidence, and help narrow the list of suspects in an unsolved homicide. Our research shows that 80% of offender characteristics are predicted correctly on average in new single-victim homicides, and when confidence levels are taken into account this accuracy increases to 95.6%. 2008 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "8cf336a0d57681f55b9fadbb769996a4", "text": "Games-based learning has captured the interest of educationalists and industrialists who seek to exploit the characteristics of computer games as they are perceived by some to be a potentially effective approach for teaching and learning. Despite this interest in using games-based learning there is a dearth of empirical evidence supporting the validity of the approach covering the wider context of gaming and education. This study presents a large scale gaming survey, involving 887 students from 13 different Higher Education (HE) institutes in Scotland and the Netherlands, which examines students’ characteristics related to their gaming preferences, game playing habits, and their perceptions and thoughts on the use of games in education. It presents a comparison of three separate groups of students: a group in regular education in a Scottish university, a group in regular education in universities in the Netherlands and a distance learning group from a university in the Netherlands. This study addresses an overall research question of: Can computer games be used for educational purposes at HE level in regular and distance education in different countries? The study then addresses four sub-research questions related to the overall research question: What are the different game playing habits of the three groups? What are the different motivations for playing games across the three groups? What are the different reasons for using games in HE across the three groups? What are the different attitudes towards games across the three groups? To our knowledge this is the first in-depth cross-national survey on gaming and education. 
We found that a large number of participants believed that computer games could be used at HE level for educational purposes and that further research in the area of game playing habits, motivations for playing computer games and motivations for playing computer games in education are worthy of extensive further investigation. We also found a clear distinction between the views of students in regular education and those in distance education. Regular education students in both countries rated all motivations for playing computer games as significantly more important than distance education students. Also the results suggest that Scottish students aim to enhance their social experience with regards to competition and cooperation, while Dutch students aim to enhance their leisurely experience with regards to leisure, feeling good, preventing boredom and excitement. 2013 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "a9ad415524996446ea1204ad5ff11d89", "text": "Crime against women is increasing at an alarming rate in almost all parts of India. Women in the Indian society have been victims of humiliation, torture and exploitation. It has even existed in the past but only in the recent years the issues have been brought to the open for concern. According to the latest data released by the National Crime Records Bureau (NCRB), crime against women have increased more than doubled over the past ten years. While a number of analyses have been done in the field of crime pattern detection, none have done an extensive study on the crime against women in India. The present paper describes a behavioural analysis of crime against women in India from the year 2001 to 2014. The study evaluates the efficacy of Infomap clustering algorithm for detecting communities of states and union territories in India based on crimes. As it is a graph based clustering approach, all the states of India along with the union territories have been considered as nodes of the graph and similarity among the nodes have been measured based on different types of crimes. Each community is a group of states and / or union territories which are similar based on crime trends. Initially, the method finds the communities based on current year crime data, subsequently at the end of a year when new crime data for the next year is available, the graph is modified and new communities are formed. The process is repeated year wise that helps to predict how crime against women has significantly increased in various states of India over the past years. It also helps in rapid visualisation and identification of states which are densely affected with crimes. This approach proves to be quite effective and can also be used for analysing the global crime scenario.", "title": "" }, { "docid": "fb4f4d1762535b8afe7feec072f1534e", "text": "Recently, evaluation of a recommender system has been beyond evaluating just the algorithm. In addition to accuracy of algorithms, user-centric approaches evaluate a system’s effectiveness in presenting recommendations, explaining recommendations and gaining users’ confidence in the system. Existing research focuses on explaining recommendations that are related to user’s current task. However, explaining recommendations can prove useful even when recommendations are not directly related to user’s current task. 
Recommendations of development environment commands to software developers are an example of recommendations that are not related to the user’s current task, which is primarily focussed on programming, rather than inspecting recommendations. In this dissertation, we study three different kinds of explanations for IDE commands recommended to software developers. These explanations are inspired by the common approaches based on literature in the domain. We describe a lab-based experimental study with 24 participants where they performed programming tasks on an open source project. Our results suggest that explanations affect users’ trust of recommendations, and explanations reporting the system’s confidence in a recommendation affect their trust more. The explanation with the system’s confidence rating of the recommendations resulted in more recommendations being investigated. However, explanations did not affect the uptake of the commands. Our qualitative results suggest that recommendations, when not the user’s primary focus, should be in the context of their task to be accepted more readily.", "title": "" }, { "docid": "d108c65a8f95ab06d00f9a0035327816", "text": "The aim of this study is to evolve novel seeds for John Conway's Game of Life cellular automaton (CA) with Compositional Pattern Producing Networks (CPPNs), a variation of artificial neural networks known to evolve organic patterns when used to process visual data. CPPNs were evolved using both objective search (implemented with NeuroEvolution of Augmenting Topologies) and novelty search, which focuses on finding novel solutions rather than objectively \"fitter\" solutions. Objective search quickly evolved Game of Life solutions that converged to trivial combinations of previously known solutions. However, novelty search produced non-trivial symmetries and complex high-period oscillators such as the period-15 pentadecathlon. Regardless, neither approach evolved purely novel or undocumented seeds. Despite this failure, the complex evolved solutions demonstrate that CPPNs can serve as a powerful encoding for cellular automata seeds. As such, these results stand as the first baseline for further exploration into encoding cellular automata using CPPNs.", "title": "" }, { "docid": "49b20e8d9f1163ac5d5b5485788008de", "text": "As the industry deploys increasingly large and complex neural networks to mobile devices, more pressure is put on the memory and compute resources of those devices. Deep compression, or compression of deep neural network weight matrices, is a technique to stretch resources for such scenarios. Existing compression methods cannot effectively compress models smaller than 1-2% of their original size. We develop a new compression technique, DeepThin, building on existing research in the area of low rank factorization. We identify and break artificial constraints imposed by low rank approximations by combining rank factorization with a reshaping process that adds nonlinearity to the approximation function. We deploy DeepThin as a pluggable library integrated with TensorFlow that enables users to seamlessly compress models at different granularities. We evaluate DeepThin on two state-of-the-art acoustic models, TFKaldi and DeepSpeech, comparing it to previous compression work (Pruning, HashNet, and Rank Factorization), empirical limit study approaches, and hand-tuned models.
For TFKaldi, our DeepThin networks show better word error rates (WER) than competing methods at practically all tested compression rates, achieving an average of 60% relative improvement over rank factorization, 57% over pruning, 23% over hand-tuned same-size networks, and 6% over the computationally expensive HashedNets. For DeepSpeech, DeepThin-compressed networks achieve better test loss than all other compression methods, reaching a 28% better result than rank factorization, 27% better than pruning, 20% better than hand-tuned same-size networks, and 12% better than HashedNets. DeepThin also provides inference performance benefits in two ways: (1) by shrinking the application working sets, allowing the model to fit in a level of the cache/memory hierarchy where the original network was too large, and (2) by exploiting unique features of the technique to reuse many intermediate computations, reducing the total compute operations necessary. We evaluate the performance of DeepThin inference across three Haswell- and Broadwell-based platforms with varying cache sizes. Speedups range from 2X to 14X, depending on the compression ratio and platform cache sizes.", "title": "" }, { "docid": "5f63aa64d24dcb011db3dc2604af5e73", "text": "Communication aimed at promoting civic engagement may become problematic when citizen roles undergo historic changes. In the current era, younger generations are embracing more expressive styles of actualizing citizenship defined around peer content sharing and social media, in contrast to earlier models of dutiful citizenship based on one-way communication managed by authorities. An analysis of 90 youth Web sites operated by diverse civic and political organizations in the United States reveals uneven conceptions of citizenship and related civic skills, suggesting that many established organizations are out of step with changing civic styles.", "title": "" }, { "docid": "a6772649ef68ec2beec13d639a5a6c5b", "text": "A self-organising software architecture is one in which components automatically configure their interaction in a way that is compatible with an overall architectural specification. The objective is to minimise the degree of explicit management necessary for construction and subsequent evolution whilst preserving the architectural properties implied by its specification. This paper examines the feasibility of using architectural constraints as the basis for the specification, design and implementation of self-organising architectures for distributed systems. Although we focus on organising the structure of systems, we show how component state can influence reconfiguration via interface attributes.", "title": "" }, { "docid": "51a859f71bd2ec82188826af18204f02", "text": "This study examines the accuracy of 54 online dating photographs posted by heterosexual daters. We report data on (a) online daters' self-reported accuracy, (b) independent judges' perceptions of accuracy, and (c) inconsistencies in the profile photograph identified by trained coders. While online daters rated their photos as relatively accurate, independent judges rated approximately 1/3 of the photographs as not accurate. Female photographs were judged as less accurate than male photographs, and were more likely to be older, to be retouched or taken by a professional photographer, and to contain inconsistencies, including changes in hair style and skin quality.
The findings are discussed in terms of the tensions experienced by online daters to (a) enhance their physical attractiveness and (b) present a photograph that would not be judged deceptive in subsequent face-to-face meetings. The paper extends the theoretical concept of selective self-presentation to online photographs, and discusses issues of self-deception and social desirability bias.", "title": "" }, { "docid": "35c8c5f950123154f4445b6c6b2399c2", "text": "Online social media have democratized the broadcasting of information, encouraging users to view the world through the lens of social networks. The exploitation of this lens, termed social sensing, presents challenges for researchers at the intersection of computer science and the social sciences.", "title": "" }, { "docid": "04c0a4613ab0ec7fd77ac5216a17bd1d", "text": "Many contemporary biomedical applications such as physiological monitoring, imaging, and sequencing produce large amounts of data that require new data processing and visualization algorithms. Algorithms such as principal component analysis (PCA), singular value decomposition and random projections (RP) have been proposed for dimensionality reduction. In this paper we propose a new random projection version of the fuzzy c-means (FCM) clustering algorithm denoted as RPFCM that has a different ensemble aggregation strategy than the one previously proposed, denoted as ensemble FCM (EFCM). RPFCM is more suitable than EFCM for big data sets (large number of points, n). We evaluate our method and compare it to EFCM on synthetic and real datasets.", "title": "" }, { "docid": "3bda091d69af44f28cb3bd5893a5b8ef", "text": "The method described assumes that a word which cannot be found in a dictionary has at most one error, which might be a wrong, missing or extra letter or a single transposition. The unidentified input word is compared to the dictionary again, testing each time to see if the words match—assuming one of these errors occurred. During a test run on garbled text, correct identifications were made for over 95 percent of these error types.", "title": "" }, { "docid": "ec14996dd3ce3701db628348dfeb63f2", "text": "Eye gaze interaction can provide a convenient and natural addition to user-computer dialogues. We have previously reported on our interaction techniques using eye gaze [10]. While our techniques seemed useful in demonstration, we now investigate their strengths and weaknesses in a controlled setting. In this paper, we present two experiments that compare an interaction technique we developed for object selection based on a where a person is looking with the most commonly used selection method using a mouse. We find that our eye gaze interaction technique is faster than selection with a mouse. The results show that our algorithm, which makes use of knowledge about how the eyes behave, preserves the natural quickness of the eye. Eye gaze interaction is a reasonable addition to computer interaction and is convenient in situations where it is important to use the hands for other tasks. It is particularly beneficial for the larger screen workspaces and virtual environments of the future, and it will become increasingly practical as eye tracker technology matures.", "title": "" }, { "docid": "cb65229a1edd5fc6dc5cf6be7afc1b9e", "text": "This session studies specific challenges that Machine Learning (ML) algorithms have to tackle when faced with Big Data problems. 
These challenges can arise when any of the dimensions in a ML problem grows significantly: a) size of training set, b) size of test set or c) dimensionality. The studies included in this edition explore the extension of previous ML algorithms and practices to Big Data scenarios. Namely, specific algorithms for recurrent neural network training, ensemble learning, anomaly detection and clustering are proposed. The results obtained show that this new trend of ML problems presents both a challenge and an opportunity to obtain results which could allow ML to be integrated in many new applications in years to come.", "title": "" }, { "docid": "3b1a7539000a8ddabdaa4888b8bb1adc", "text": "This paper presents evaluations among the most usual maximum power point tracking (MPPT) techniques, doing meaningful comparisons with respect to the amount of energy extracted from the photovoltaic (PV) panel [tracking factor (TF)] in relation to the available power, PV voltage ripple, dynamic response, and use of sensors. Using MatLab/Simulink and dSPACE platforms, a digitally controlled boost dc-dc converter was implemented and connected to an Agilent Solar Array E4350B simulator in order to verify the analytical procedures. The main experimental results are presented for conventional MPPT algorithms and improved MPPT algorithms named IC based on proportional-integral (PI) and perturb and observe based on PI. Moreover, the dynamic response and the TF are also evaluated using a user-friendly interface, which is capable of online program power profiles and computes the TF. Finally, a typical daily insulation is used in order to verify the experimental results for the main PV MPPT methods.", "title": "" }, { "docid": "07c288560af7cbc7acc2ed4f87967d8f", "text": "X-ray imaging in differential interference contrast (DIC) with submicrometer optical resolution was performed by using a twin zone plate (TZP) setup generating focal spots closely spaced within the TZP spatial resolution of 160 nm. Optical path differences introduced by the sample are recorded by a CCD camera in a standard full-field imaging and by an aperture photodiode in a standard scanning transmission x-ray microscope. Applying this x-ray DIC technique, we demonstrate for both the full-field imaging and scanning x-ray microscope methods a drastic increase in image contrast (approximately 20x) for a low-absorbing specimen, similar to the Nomarski DIC method for visible-light microscopy.", "title": "" }, { "docid": "6ea59490942d4748ce85c728573bdb9a", "text": "We present an accurate, efficient, and robust pose estimation system based on infrared LEDs. They are mounted on a target object and are observed by a camera that is equipped with an infrared-pass filter. The correspondences between LEDs and image detections are first determined using a combinatorial approach and then tracked using a constant-velocity model. The pose of the target object is estimated with a P3P algorithm and optimized by minimizing the reprojection error. Since the system works in the infrared spectrum, it is robust to cluttered environments and illumination changes. In a variety of experiments, we show that our system outperforms state-of-the-art approaches. Furthermore, we successfully apply our system to stabilize a quadrotor both indoors and outdoors under challenging conditions. 
We release our implementation as open-source software.", "title": "" }, { "docid": "e2b94f12c368904b02c449c0d28f29f5", "text": "This paper introduces a concept for robot navigation based on a rotating synthetic aperture short-range radar scanner. It uses an innovative broadband holographic reconstruction algorithm, which overcomes the typical problem of residual phase errors caused by an imprecisely measured aperture position and moving targets. Thus, it is no longer necessary to know the exact trajectory of the synthetic aperture radar to get a high-resolution image, which is a major advantage over the classical holographic reconstruction algorithm. However, the developed algorithm is not only used to compute a high-resolution 360 ° 2-D image after each turn of the radar platform while the robot is moving, but also to calculate the relative residual radial velocity between the moving radar scanner system and all targets in the environment. This allows us to determine the exact velocity of the robotic system on which the radar scanner is mounted, and thus to obtain the exact radar trajectory, if there are stationary targets like walls in the environment.", "title": "" }, { "docid": "e7eb15df383c92fcd5a4edc7e27b5265", "text": "This article presents a new model for word sense disambiguation formulated in terms of evolutionary game theory, where each word to be disambiguated is represented as a node on a graph whose edges represent word relations and senses are represented as classes. The words simultaneously update their class membership preferences according to the senses that neighboring words are likely to choose. We use distributional information to weigh the influence that each word has on the decisions of the others and semantic similarity information to measure the strength of compatibility among the choices. With this information we can formulate the word sense disambiguation problem as a constraint satisfaction problem and solve it using tools derived from game theory, maintaining the textual coherence. The model is based on two ideas: Similar words should be assigned to similar classes and the meaning of a word does not depend on all the words in a text but just on some of them. The article provides an in-depth motivation of the idea of modeling the word sense disambiguation problem in terms of game theory, which is illustrated by an example. The conclusion presents an extensive analysis on the combination of similarity measures to use in the framework and a comparison with state-of-the-art systems. The results show that our model outperforms state-of-the-art algorithms and can be applied to different tasks and in different scenarios.", "title": "" } ]
scidocsrr
2c7c8734f40ed4ee51d223e14f3a851e
Renewable Energy Systems With Photovoltaic Power Generators: Operation and Modeling
[ { "docid": "470093535d4128efa9839905ab2904a5", "text": "Photovolatic systems normally use a maximum power point tracking (MPPT) technique to continuously deliver the highest possible power to the load when variations in the insolation and temperature occur. It overcomes the problem of mismatch between the solar arrays and the given load. A simple method of tracking the maximum power points (MPP’s) and forcing the system to operate close to these points is presented. The principle of energy conservation is used to derive the largeand small-signal model and transfer function. By using the proposed model, the drawbacks of the state-space-averaging method can be overcome. The TI320C25 digital signal processor (DSP) was used to implement the proposed MPPT controller, which controls the dc/dc converter in the photovoltaic system. Simulations and experimental results show excellent performance.", "title": "" }, { "docid": "14dc7c8065adad3fc3c67f5a8e35298b", "text": "This paper describes a method for maximum power point tracking (MPPT) control while searching for optimal parameters corresponding to weather conditions at that time. The conventional method has problems in that it is impossible to quickly acquire the generation power at the maximum power (MP) point in low solar radiation (irradiation) regions. It is found theoretically and experimentally that the maximum output power and the optimal current, which give this maximum, have a linear relation at a constant temperature. Furthermore, it is also shown that linearity exists between the short-circuit current and the optimal current. MPPT control rules are created based on the findings from solar arrays that can respond at high speeds to variations in irradiation. The proposed MPPT control method sets the output current track on the line that gives the relation between the MP and the optimal current so as to acquire the MP that can be generated at that time by dividing the power and current characteristics into two fields. The method is based on the generated power being a binary function of the output current. Considering the experimental fact that linearity is maintained only at low irradiation below half the maximum irradiation, the proportionality coefficient (voltage coefficient) is compensated for only in regions with more than half the rated optimal current, which correspond to the maximum irradiation. At high irradiation, the voltage coefficient needed to perform the proposed MPPT control is acquired through the hill-climbing method. The effectiveness of the proposed method is verified through experiments under various weather conditions", "title": "" } ]
[ { "docid": "c1e0b1c318f73187c75be26f66d95632", "text": "Newly emerged gallium nitride (GaN) devices feature ultrafast switching speed and low on-state resistance that potentially provide significant improvements for power converters. This paper investigates the benefits of GaN devices in an LLC resonant converter and quantitatively evaluates GaN devices' capabilities to improve converter efficiency. First, the relationship of device and converter design parameters to the device loss is established based on an analytical model of LLC resonant converter operating at the resonance. Due to the low effective output capacitance of GaN devices, the GaN-based design demonstrates about 50% device loss reduction compared with the Si-based design. Second, a new perspective on the extra transformer winding loss due to the asymmetrical primary-side and secondary-side current is proposed. The device and design parameters are tied to the winding loss based on the winding loss model in the finite element analysis (FEA) simulation. Compared with the Si-based design, the winding loss is reduced by 18% in the GaN-based design. Finally, in order to verify the GaN device benefits experimentally, 400- to 12-V, 300-W, 1-MHz GaN-based and Si-based LLC resonant converter prototypes are built and tested. One percent efficiency improvement, which is 24.8% loss reduction, is achieved in the GaN-based converter.", "title": "" }, { "docid": "b93888e6c47d6f2dab8e1b2d0fec8b14", "text": "We developed and evaluated a multimodal affect detector that combines conversational cues, gross body language, and facial features. The multimodal affect detector uses feature-level fusion to combine the sensory channels and linear discriminant analyses to discriminate between naturally occurring experiences of boredom, engagement/flow, confusion, frustration, delight, and neutral. Training and validation data for the affect detector were collected in a study where 28 learners completed a 32- min. tutorial session with AutoTutor, an intelligent tutoring system with conversational dialogue. Classification results supported a channel × judgment type interaction, where the face was the most diagnostic channel for spontaneous affect judgments (i.e., at any time in the tutorial session), while conversational cues were superior for fixed judgments (i.e., every 20 s in the session). The analyses also indicated that the accuracy of the multichannel model (face, dialogue, and posture) was statistically higher than the best single-channel model for the fixed but not spontaneous affect expressions. However, multichannel models reduced the discrepancy (i.e., variance in the precision of the different emotions) of the discriminant models for both judgment types. The results also indicated that the combination of channels yielded superadditive effects for some affective states, but additive, redundant, and inhibitory effects for others. We explore the structure of the multimodal linear discriminant models and discuss the implications of some of our major findings.", "title": "" }, { "docid": "3c4ec64eae7723da8bc2ce51ee1b0979", "text": "Conventional vending machines require users to press buttons and respond to visual cues. This makes them less accessible to some users such as the blind. NuiVend resolves this issue by integrating natural voice commands and gesture interactions into a vending machine, thereby creating many alternative, natural, and more user-friendly ways of interaction. 
In this paper, we will discuss NuiVend's use of a variety of technologies. Such as: Microsoft Kinect, various Microsoft Cognitive API services, relay and sensor boards, as well as the overall logic of the control software. Finally, we discuss potential improvements to NuiVend as well as Microsoft Language Understanding Intelligent Service (LUIS) techniques that can be applied to many other future NUI based projects.", "title": "" }, { "docid": "a0b6d95d88d3ee09ec99221d2ebaf2f3", "text": "The possibility of employing sensor nodes that wireless communicate under the ground, through concrete, or under-the-debris (disaster scenario) has been recently highlighted at the Wireless Underground Sensor Networks (WUSN) literature. Nonetheless, the best operating frequency for such systems is still an open research aspect. In this work, we address this question for mid-range distances (e.g., 15..30m) by proposing a soil path attenuation model for an underground magnetic induction (MI)-based system involving a pair of nodes. The model is empirically validated and based on simulation results it is possible to conclude that for mid-range MI systems it is strategic to adopt a dynamic frequency selection scheme where audio frequencies are chosen whenever high soil moisture levels are detected.", "title": "" }, { "docid": "aa6359fe662cdd2548eeacbaeffe48de", "text": "Agile methodologies represent a 'people' centered approach to delivering software. This paper investigates the social processes that contribute to their success. Qualitative grounded theory was used to explore socio-psychological experiences in agile teams, where agile teams were viewed as complex adaptive socio-technical systems. Advances in systems theory suggest that human agency changes the nature of a system and how it should be studied. In particular, end-goals and positive sources of motivation, such as pride, become important. Research included the questions: How do agile practices structure and mediate the experience of individuals developing software? And in particular, how do agile practices mediate the interaction between individuals and the team as a whole? Results support an understanding of how social identity and collective effort are supported by agile methods.", "title": "" }, { "docid": "720648646b401761ee53b9b4c8844849", "text": "Theorists have suggested some people find it easier to express their ‘‘true selves’’ online than in person. Among 523 participants in an online study, Shyness was positively associated with online ‘Real Me’ self location, while Conscientiousness was negatively associated with an online self. Extraversion was indirectly negatively associated with an online self, mediated by Shyness. Neuroticism was positively associated with an online self, partly mediated by Shyness. 107 online and offline friends of participants provided ratings of them. Overall, both primary participants and their observers indicated that offline relationships were closer. However, participants who located their Real Me online reported feeling closer to their online friends than did those locating their real selves offline. To test whether personality is better expressed in online or offline interactions, observers’ ratings of participants’ personalities were compared. Both online and offline observers’ ratings of Extraversion, Agreeableness and Conscientiousness correlated with participants’ self-reports. However, only offline observers’ ratings of Neuroticism correlated with participants’ own. 
Except for Neuroticism, the similarity of online and offline observers’ personality ratings to participants’ self-reports did not differ significantly. The study provides no evidence that online self-presentations are more authentic; indeed Neuroticism may be more visibly", "title": "" }, { "docid": "6abc84b079ba3e5f4117b2d9203d8a4c", "text": "Stereotypes about Millennials, born between 1979 and 1994, depict them as self-centered, unmotivated, disrespectful, and disloyal, contributing to widespread concern about how communication with Millennials will affect organizations and how they will develop relationships with other organizational members. We review these purported characteristics, as well as Millennials' more positive qualities-they work well in teams, are motivated to have an impact on their organizations, favor open and frequent communication with their supervisors, and are at ease with communication technologies. We discuss Millennials' communicated values and expectations and their potential effect on coworkers, as well as how workplace interaction may change Millennials.", "title": "" }, { "docid": "155e53e97c23498a557f848ef52da2a7", "text": "We propose a simultaneous extraction method for 12 organs from non-contrast three-dimensional abdominal CT images. The proposed method uses an abdominal cavity standardization process and atlas guided segmentation incorporating parameter estimation with the EM algorithm to deal with the large fluctuations in the feature distribution parameters between subjects. Segmentation is then performed using multiple level sets, which minimize the energy function that considers the hierarchy and exclusiveness between organs as well as uniformity of grey values in organs. To assess the performance of the proposed method, ten non-contrast 3D CT volumes were used. The accuracy of the feature distribution parameter estimation was slightly improved using the proposed EM method, resulting in better performance of the segmentation process. Nine organs out of twelve were statistically improved compared with the results without the proposed parameter estimation process. The proposed multiple level sets also boosted the performance of the segmentation by 7.2 points on average compared with the atlas guided segmentation. Nine out of twelve organs were confirmed to be statistically improved compared with the atlas guided method. The proposed method was statistically proved to have better performance in the segmentation of 3D CT volumes.", "title": "" }, { "docid": "849ffc68aa0e14c2cfdf53c9d99d5079", "text": "To encourage repeatable research, fund repeatability engineering and reward commitments to sharing research artifacts.", "title": "" }, { "docid": "9ff977a9486b2bbc22aff46c3106f9f6", "text": "Trust and security have prevented businesses from fully accepting cloud platforms. To protect clouds, providers must first secure virtualized data center resources, uphold user privacy, and preserve data integrity. The authors suggest using a trust-overlay network over multiple data centers to implement a reputation system for establishing trust between service providers and data owners. Data coloring and software watermarking techniques protect shared data objects and massively distributed software modules. 
These techniques safeguard multi-way authentications, enable single sign-on in the cloud, and tighten access control for sensitive data in both public and private clouds.", "title": "" }, { "docid": "0dd8e07502ed70b38fe6eb478115f5a8", "text": "Department of Psychology Iowa State University, Ames, IA, USA Over the last 30 years, the video game industry has grown into a multi-billion dollar business. More children and adults are spending time playing computer games, consoles games, and online games than ever before. Violence is a dominant theme in most of the popular video games. This article reviews the current literature on effects of violent video game exposure on aggression-related variables. Exposure to violent video games causes increases in aggressive behavior, cognitions, and affect. Violent video game exposure also causes increases in physiological desensitization to real-life violence and decreases in helping behavior. The current video game literature is interpreted in terms of the general aggression model (GAM). Differences between violent video game exposure and violent television are also discussed.", "title": "" }, { "docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.", "title": "" }, { "docid": "541daac0a96555f0e03c400b126d3cfe", "text": "The emergence of social neuroscience has significantly advanced our understanding of the relationship that exists between social processes and their neurobiological underpinnings. Social neuroscience research often involves the use of simple and static stimuli lacking many of the potentially important aspects of real world activities and social interactions. Whilst this research has merit, there is a growing interest in the presentation of dynamic stimuli in a manner that allows researchers to assess the integrative processes carried out by perceivers over time. Herein, we discuss the potential of virtual reality for enhancing ecological validity while maintaining experimental control in social neuroscience research. Virtual reality is a technology that allows for the creation of fully interactive, three-dimensional computerized models of social situations that can be fully controlled by the experimenter. Furthermore, the introduction of interactive virtual characters-either driven by a human or by a computer-allows the researcher to test, in a systematic and independent manner, the effects of various social cues. We first introduce key technical features and concepts related to virtual reality. Next, we discuss the potential of this technology for enhancing social neuroscience protocols, drawing on illustrative experiments from the literature.", "title": "" }, { "docid": "1878b3e7742a0ffbd3da67be23c6e366", "text": "Compensation for geometrical spreading along a raypath is one of the key steps in AVO amplitude-variation-with-offset analysis, in particular, for wide-azimuth surveys. Here, we propose an efficient methodology to correct long-spread, wide-azimuth reflection data for geometrical spreading in stratified azimuthally anisotropic media. 
The P-wave geometrical-spreading factor is expressed through the reflection traveltime described by a nonhyperbolic moveout equation that has the same form as in VTI (transversely isotropic with a vertical symmetry axis) media. The adapted VTI equation is parameterized by the normal-moveout (NMO) ellipse and the azimuthally varying anellipticity parameter . To estimate the moveout parameters, we apply a 3D nonhyperbolic semblance algorithm of Vasconcelos and Tsvankin that operates simultaneously with traces at all offsets and", "title": "" }, { "docid": "8bb465b2ec1f751b235992a79c6f7bf1", "text": "Continuum robotics has rapidly become a rich and diverse area of research, with many designs and applications demonstrated. Despite this diversity in form and purpose, there exists remarkable similarity in the fundamental simplified kinematic models that have been applied to continuum robots. However, this can easily be obscured, especially to a newcomer to the field, by the different applications, coordinate frame choices, and analytical formalisms employed. In this paper we review several modeling approaches in a common frame and notational convention, illustrating that for piecewise constant curvature, they produce identical results. This discussion elucidates what has been articulated in different ways by a number of researchers in the past several years, namely that constant-curvature kinematics can be considered as consisting of two separate submappings: one that is general and applies to all continuum robots, and another that is robot-specific. These mappings are then developed both for the single-section and for the multi-section case. Similarly, we discuss the decomposition of differential kinematics (the robot’s Jacobian) into robot-specific and robot-independent portions. The paper concludes with a perspective on several of the themes of current research that are shaping the future of continuum robotics.", "title": "" }, { "docid": "61f7693dd01e94867866963387e77fb6", "text": "This paper seeks to identify and characterize health-related topics discussed on the Chinese microblogging website, Sina Weibo. We identified nearly 1 million messages containing health-related keywords, filtered from a dataset of 93 million messages spanning five years. We applied probabilistic topic models to this dataset and identified the prominent health topics. We show that a variety of health topics are discussed in Sina Weibo, and that four flu-related topics are correlated with monthly influenza case rates in China.", "title": "" }, { "docid": "238aac56366875b1714284d3d963fe9b", "text": "We construct a general-purpose multi-input functional encryption scheme in the private-key setting. Namely, we construct a scheme where a functional key corresponding to a function f enables a user holding encryptions of $$x_1, \\ldots , x_t$$ to compute $$f(x_1, \\ldots , x_t)$$ but nothing else. This is achieved starting from any general-purpose private-key single-input scheme (without any additional assumptions) and is proven to be adaptively secure for any constant number of inputs t. Moreover, it can be extended to a super-constant number of inputs assuming that the underlying single-input scheme is sub-exponentially secure.
Instantiating our construction with existing single-input schemes, we obtain multi-input schemes that are based on a variety of assumptions (such as indistinguishability obfuscation, multilinear maps, learning with errors, and even one-way functions), offering various trade-offs between security assumptions and functionality. Previous and concurrent constructions of multi-input functional encryption schemes either rely on stronger assumptions and provided weaker security guarantees (Goldwasser et al. in Advances in cryptology—EUROCRYPT, 2014; Ananth and Jain in Advances in cryptology—CRYPTO, 2015), or relied on multilinear maps and could be proven secure only in an idealized generic model (Boneh et al. in Advances in cryptology—EUROCRYPT, 2015). In comparison, we present a general transformation that simultaneously relies on weaker assumptions and guarantees stronger security.", "title": "" }, { "docid": "2f9a5a9b31830db2708f63daa1d182ea", "text": "PURPOSE\nTo report a case of autoenucleation associated with contralateral field defect.\n\n\nDESIGN\nObservational case report.\n\n\nMETHODS\nA 36-year-old man was referred to the emergency ward with his right eye attached to a fork. His history revealed drug abuse with ecstasy.\n\n\nRESULTS\nVisual field examination revealed a temporal hemianopia on the left eye. There was no change in the visual field defect after intravenous steroid, two months after initial presentation.\n\n\nCONCLUSIONS\nContralateral visual field defect may be associated with autoenucleation. A visual field test is recommended in all cases with traumatic enucleation.", "title": "" }, { "docid": "4fea6fb309d496f9b4fd281c80a8eed7", "text": "Network alignment is the problem of matching the nodes of two graphs, maximizing the similarity of the matched nodes and the edges between them. This problem is encountered in a wide array of applications---from biological networks to social networks to ontologies---where multiple networked data sources need to be integrated. Due to the difficulty of the task, an accurate alignment can rarely be found without human assistance. Thus, it is of great practical importance to develop network alignment algorithms that can optimally leverage experts who are able to provide the correct alignment for a small number of nodes. Yet, only a handful of existing works address this active network alignment setting.\n The majority of the existing active methods focus on absolute queries (\"are nodes a and b the same or not?\"), whereas we argue that it is generally easier for a human expert to answer relative queries (\"which node in the set b1,...,bn is the most similar to node a?\"). This paper introduces two novel relative-query strategies, TopMatchings and GibbsMatchings, which can be applied on top of any network alignment method that constructs and solves a bipartite matching problem. Our methods identify the most informative nodes to query by sampling the matchings of the bipartite graph associated to the network-alignment instance.\n We compare the proposed approaches to several commonly-used query strategies and perform experiments on both synthetic and real-world datasets. Our sampling-based strategies yield the highest overall performance, outperforming all the baseline methods by more than 15 percentage points in some cases. In terms of accuracy, TopMatchings and GibbsMatchings perform comparably. 
GibbsMatchings, however, is significantly more scalable, but it also requires hyperparameter tuning for a temperature parameter.", "title": "" }, { "docid": "106696053804ae902cfccd5977a8ddc0", "text": "The expectation maximization algorithm has been classically used to find the maximum likelihood estimates of parameters in probabilistic models with unobserved data, for instance, mixture models. A key issue in such problems is the choice of the model complexity. The higher the number of components in the mixture, the higher will be the data likelihood, but also the higher will be the computational burden and data overfitting. In this work, we propose a clustering method based on the expectation maximization algorithm that adapts online the number of components of a finite Gaussian mixture model from multivariate data. Our method estimates the number of components and their means and covariances sequentially, without requiring any careful initialization. Our methodology starts from a single mixture component covering the whole data set and sequentially splits it incrementally during expectation maximization steps. The coarse-to-fine nature of the algorithm reduces the overall number of computations to achieve a solution, which makes the method particularly suited to image segmentation applications whenever computational time is an issue. We show the effectiveness of the method in a series of experiments and compare it with a state-of-the-art alternative technique both with synthetic data and real images, including experiments with images acquired from the iCub humanoid robot.", "title": "" } ]
scidocsrr
380f070f274b662587b0e38da08febf8
Individual user characteristics and information visualization: connecting the dots through eye tracking
[ { "docid": "8feb5dce809acf0efb63d322f0526fcf", "text": "Recent studies of eye movements in reading and other information processing tasks, such as music reading, typing, visual search, and scene perception, are reviewed. The major emphasis of the review is on reading as a specific example of cognitive processing. Basic topics discussed with respect to reading are (a) the characteristics of eye movements, (b) the perceptual span, (c) integration of information across saccades, (d) eye movement control, and (e) individual differences (including dyslexia). Similar topics are discussed with respect to the other tasks examined. The basic theme of the review is that eye movement data reflect moment-to-moment cognitive processes in the various tasks examined. Theoretical and practical considerations concerning the use of eye movement data are also discussed.", "title": "" } ]
[ { "docid": "a4a0ae5eca88a700002479af66c53f21", "text": "This paper studies the distinction between subordinating and coordinating discourse relations, a distinction that governs the hierarchical structure of discourse. We provide linguistic tests to clarify which discourse relations are subordinating and which are coordinating. We argue that some relations are classified as subordinating or coordinating by default, a default that can be overridden in specific contexts. The distinction between subordinating and coordinating relations thus belongs to the level of information packaging in discourse and not to the level of information content or the semantics of the relations themselves. # 2003 Published by Elsevier B.V.", "title": "" }, { "docid": "6b1e67c1768f9ec7a6ab95a9369b92d1", "text": "Autoregressive sequence models based on deep neural networks, such as RNNs, Wavenet and the Transformer attain state-of-the-art results on many tasks. However, they are difficult to parallelize and are thus slow at processing long sequences. RNNs lack parallelism both during training and decoding, while architectures like WaveNet and Transformer are much more parallelizable during training, yet still operate sequentially during decoding. We present a method to extend sequence models using discrete latent variables that makes decoding much more parallelizable. We first autoencode the target sequence into a shorter sequence of discrete latent variables, which at inference time is generated autoregressively, and finally decode the output sequence from this shorter latent sequence in parallel. To this end, we introduce a novel method for constructing a sequence of discrete latent variables and compare it with previously introduced methods. Finally, we evaluate our model end-to-end on the task of neural machine translation, where it is an order of magnitude faster at decoding than comparable autoregressive models. While lower in BLEU than purely autoregressive models, our model achieves higher scores than previously proposed non-autoregressive translation models.", "title": "" }, { "docid": "0ce9e025b0728adc245759580330e7f5", "text": "We present a unified framework for dense correspondence estimation, called Homography flow, to handle large photometric and geometric deformations in an efficient manner. Our algorithm is inspired by recent successes of the sparse to dense framework. The main intuition is that dense flows located in same plane can be represented as a single geometric transform. Tailored to dense correspondence task, the Homography flow differs from previous methods in the flow domain clustering and the trilateral interpolation. By estimating and propagating sparsely estimated transforms, dense flow field is estimated with very low computation time. The Homography flow highly improves the performance of dense correspondences, especially in flow discontinuous area. Experimental results on challenging image pairs show that our approach suppresses the state-of-the-art algorithms in both accuracy and computation time.", "title": "" }, { "docid": "9b010450862f5b3b73273028242db8ad", "text": "A number of mechanisms ensure that the intestine is protected from pathogens and also against our own intestinal microbiota. The outermost of these is the secreted mucus, which entraps bacteria and prevents their translocation into the tissue. Mucus contains many immunomodulatory molecules and is largely produced by the goblet cells. 
These cells are highly responsive to the signals they receive from the immune system and are also able to deliver antigens from the lumen to dendritic cells in the lamina propria. In this Review, we will give a basic overview of mucus, mucins and goblet cells, and explain how each of these contributes to immune regulation in the intestine.", "title": "" }, { "docid": "a500afda393ad60ddd1bb39778655172", "text": "The success and the failure of a data warehouse (DW) project are mainly related to the design phase according to most researchers in this domain. When analyzing the decision-making system requirements, many recurring problems appear and requirements modeling difficulties are detected. Also, we encounter the problem associated with the requirements expression by non-IT professionals and non-experts makers on design models. The ambiguity of the term of decision-making requirements leads to a misinterpretation of the requirements resulting from data warehouse design failure and incorrect OLAP analysis. Therefore, many studies have focused on the inclusion of vague data in information systems in general, but few studies have examined this case in data warehouses. This article describes one of the shortcomings of current approaches to data warehouse design which is the study of in the requirements inaccuracy expression and how ontologies can help us to overcome it. We present a survey on this topic showing that few works that take into account the imprecision in the study of this crucial phase in the decision-making process for the presentation of challenges and problems that arise and requires more attention by researchers to improve DW design. According to our knowledge, no rigorous study of vagueness in this area were made. Keywords— Data warehouses Design, requirements analysis, imprecision, ontology", "title": "" }, { "docid": "b325f262a6f84637c8a175c29f07db34", "text": "The aim of this article is to present a synthetic overview of the state of knowledge regarding the Celtic cultures in the northwestern Iberian Peninsula. It reviews the difficulties linked to the fact that linguists and archaeologists do not agree on this subject, and that the hegemonic view rejects the possibility that these populations can be considered Celtic. On the other hand, the examination of a range of direct sources of evidence, including literary and epigraphic texts, and the application of the method of historical anthropology to the available data, demonstrate the validity of the consideration of Celtic culture in this region, which can be described as a protohistorical society of the Late Iron Age, exhibiting a hierarchical organization based on ritually chosen chiefs whose power was based in part on economic redistribution of resources, together with a priestly elite more or less of the druidic type. However, the method applied cannot on its own answer the questions of when and how this Celtic cultural dimension of the proto-history of the northwestern Iberian Peninsula developed.", "title": "" }, { "docid": "94919a7ba43e986e3519d658bba03811", "text": "We propose a single image dehazing method that is based on a physical model and the dark channel prior principle. The selection of an atmospheric light value is directly responsible for the color authenticity and contrast of the resulting image. Our choice of atmospheric light is based on a variogram, which slowly weakens areas in the image that do not conform to the dark channel prior. 
Additionally, we propose a fast transmission estimation algorithm to shorten the processing time. Along with a subjective evaluation, the image quality was also evaluated using three indicators: MSE, PSNR, and average gradient. Our experimental results show that the proposed method can obtain accurate dehazing results and improve the operational efficiency. & 2014 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "45f895841ad08bd4473025385e57073a", "text": "Robust brain magnetic resonance (MR) segmentation algorithms are critical to analyze tissues and diagnose tumor and edema in a quantitative way. In this study, we present a new tissue segmentation algorithm that segments brain MR images into tumor, edema, white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF). The detection of the healthy tissues is performed simultaneously with the diseased tissues because examining the change caused by the spread of tumor and edema on healthy tissues is very important for treatment planning. We used T1, T2, and FLAIR MR images of 20 subjects suffering from glial tumor. We developed an algorithm for stripping the skull before the segmentation process. The segmentation is performed using self-organizing map (SOM) that is trained with unsupervised learning algorithm and fine-tuned with learning vector quantization (LVQ). Unlike other studies, we developed an algorithm for clustering the SOM instead of using an additional network. Input feature vector is constructed with the features obtained from stationary wavelet transform (SWT) coefficients. The results showed that average dice similarity indexes are 91% for WM, 87% for GM, 96% for CSF, 61% for tumor, and 77% for edema.", "title": "" }, { "docid": "6f26f4409d418fe69b1d43ec9b4f8b39", "text": "Automatic understanding of human affect using visual signals is of great importance in everyday human–machine interactions. Appraising human emotional states, behaviors and reactions displayed in real-world settings, can be accomplished using latent continuous dimensions (e.g., the circumplex model of affect). Valence (i.e., how positive or negative is an emotion) and arousal (i.e., power of the activation of the emotion) constitute popular and effective representations for affect. Nevertheless, the majority of collected datasets this far, although containing naturalistic emotional states, have been captured in highly controlled recording conditions. In this paper, we introduce the Aff-Wild benchmark for training and evaluating affect recognition algorithms. We also report on the results of the First Affect-in-the-wild Challenge (Aff-Wild Challenge) that was recently organized in conjunction with CVPR 2017 on the Aff-Wild database, and was the first ever challenge on the estimation of valence and arousal in-the-wild. Furthermore, we design and extensively train an end-to-end deep neural architecture which performs prediction of continuous emotion dimensions based on visual cues. The proposed deep learning architecture, AffWildNet, includes convolutional and recurrent neural network layers, exploiting the invariant properties of convolutional features, while also modeling temporal dynamics that arise in human behavior via the recurrent layers. The AffWildNet produced state-of-the-art results on the Aff-Wild Challenge. 
We then exploit the AffWild database for learning features, which can be used as priors for achieving best performances both for dimensional, as well as categorical emotion recognition, using the RECOLA, AFEW-VA and EmotiW 2017 datasets, compared to all other methods designed for the same goal. The database and emotion recognition models are available at http://ibug.doc.ic.ac.uk/resources/first-affect-wild-challenge .", "title": "" }, { "docid": "3d59f488d91af8b9d204032a8d4f65c8", "text": "Robotic grasp detection for novel objects is a challenging task, but for the last few years, deep learning based approaches have achieved remarkable performance improvements, up to 96.1% accuracy, with RGB-D data. In this paper, we propose fully convolutional neural network (FCNN) based methods for robotic grasp detection. Our methods also achieved state-of-the-art detection accuracy (up to 96.6%) with state-ofthe-art real-time computation time for high-resolution images (6-20ms per 360×360 image) on Cornell dataset. Due to FCNN, our proposed method can be applied to images with any size for detecting multigrasps on multiobjects. Proposed methods were evaluated using 4-axis robot arm with small parallel gripper and RGB-D camera for grasping challenging small, novel objects. With accurate vision-robot coordinate calibration through our proposed learning-based, fully automatic approach, our proposed method yielded 90% success rate.", "title": "" }, { "docid": "39fc7b710a6d8b0fdbc568b48221de5d", "text": "The framework of cognitive wireless networks is expected to endow the wireless devices with the cognition-intelligence ability with which they can efficiently learn and respond to the dynamic wireless environment. In many practical scenarios, the complexity of network dynamics makes it difficult to determine the network evolution model in advance. Thus, the wireless decision-making entities may face a black-box network control problem and the model-based network management mechanisms will be no longer applicable. In contrast, model-free learning enables the decision-making entities to adapt their behaviors based on the reinforcement from their interaction with the environment and (implicitly) build their understanding of the system from scratch through trial-and-error. Such characteristics are highly in accordance with the requirement of cognition-based intelligence for devices in cognitive wireless networks. Therefore, model-free learning has been considered as one key implementation approach to adaptive, self-organized network control in cognitive wireless networks. In this paper, we provide a comprehensive survey on the applications of the state-of-the-art model-free learning mechanisms in cognitive wireless networks. According to the system models on which those applications are based, a systematic overview of the learning algorithms in the domains of single-agent system, multiagent systems, and multiplayer games is provided. The applications of model-free learning to various problems in cognitive wireless networks are discussed with the focus on how the learning mechanisms help to provide the solutions to these problems and improve the network performance over the model-based, non-adaptive methods. 
Finally, a broad spectrum of challenges and open issues is discussed to offer a guideline for the future research directions.", "title": "" }, { "docid": "14839c18d1029270174e9f94d122edd5", "text": "Nested event structures are a common occurrence in both open domain and domain specific extraction tasks, e.g., a “crime” event can cause a “investigation” event, which can lead to an “arrest” event. However, most current approaches address event extraction with highly local models that extract each event and argument independently. We propose a simple approach for the extraction of such structures by taking the tree of event-argument relations and using it directly as the representation in a reranking dependency parser. This provides a simple framework that captures global properties of both nested and flat event structures. We explore a rich feature space that models both the events to be parsed and context from the original supporting text. Our approach obtains competitive results in the extraction of biomedical events from the BioNLP’09 shared task with a F1 score of 53.5% in development and 48.6% in testing.", "title": "" }, { "docid": "46658067ffc4fd2ecdc32fbaaa606170", "text": "Adolescent resilience research differs from risk research by focusing on the assets and resources that enable some adolescents to overcome the negative effects of risk exposure. We discuss three models of resilience-the compensatory, protective, and challenge models-and describe how resilience differs from related concepts. We describe issues and limitations related to resilience and provide an overview of recent resilience research related to adolescent substance use, violent behavior, and sexual risk behavior. We then discuss implications that resilience research has for intervention and describe some resilience-based interventions.", "title": "" }, { "docid": "4a6d48bd0f214a94f2137f424dd401eb", "text": "During the past decade, scientific research has provided new insight into the development from an acute, localised musculoskeletal disorder towards chronic widespread pain/fibromyalgia (FM). Chronic widespread pain/FM is characterised by sensitisation of central pain pathways. An in-depth review of basic and clinical research was performed to design a theoretical framework for manual therapy in these patients. It is explained that manual therapy might be able to influence the process of chronicity in three different ways. (I) In order to prevent chronicity in (sub)acute musculoskeletal disorders, it seems crucial to limit the time course of afferent stimulation of peripheral nociceptors. (II) In the case of chronic widespread pain and established sensitisation of central pain pathways, relatively minor injuries/trauma at any locations are likely to sustain the process of central sensitisation and should be treated appropriately with manual therapy accounting for the decreased sensory threshold. Inappropriate pain beliefs should be addressed and exercise interventions should account for the process of central sensitisation. (III) However, manual therapists ignoring the processes involved in the development and maintenance of chronic widespread pain/FM may cause more harm then benefit to the patient by triggering or sustaining central sensitisation.", "title": "" }, { "docid": "dc424d2dc407e504d962c557325f035e", "text": "Document image classification is an important step in Office Automation, Digital Libraries, and other document image analysis applications. 
There is great diversity in document image classifiers: they differ in the problems they solve, in the use of training data to construct class models, and in the choice of document features and classification algorithms. We survey this diverse literature using three components: the problem statement, the classifier architecture, and performance evaluation. This brings to light important issues in designing a document classifier, including the definition of document classes, the choice of document features and feature representation, and the choice of classification algorithm and learning mechanism. We emphasize techniques that classify single-page typeset document images without using OCR results. Developing a general, adaptable, high-performance classifier is challenging due to the great variety of documents, the diverse criteria used to define document classes, and the ambiguity that arises due to ill-defined or fuzzy document classes.", "title": "" }, { "docid": "a0cba009ac41ab57bdea75c1676715a6", "text": "These notes provide a brief introduction to the theory of noncooperative differential games. After the Introduction, Section 2 reviews the theory of static games. Different concepts of solution are discussed, including Pareto optima, Nash and Stackelberg equilibria, and the co-co (cooperative-competitive) solutions. Section 3 introduces the basic framework of differential games for two players. Open-loop solutions, where the controls implemented by the players depend only on time, are considered in Section 4. It is shown that Nash and Stackelberg solutions can be computed by solving a two-point boundary value problem for a system of ODEs, derived from the Pontryagin maximum principle. Section 5 deals with solutions in feedback form, where the controls are allowed to depend on time and also on the current state of the system. In this case, the search for Nash equilibrium solutions usually leads to a highly nonlinear system of HamiltonJacobi PDEs. In dimension higher than one, this system is generically not hyperbolic and the Cauchy problem is thus ill posed. Due to this instability, closed-loop solutions to differential games are mainly considered in the special case with linear dynamics and quadratic costs. In Section 6, a game in continuous time is approximated by a finite sequence of static games, by a time discretization. Depending of the type of solution adopted in each static game, one obtains different concept of solutions for the original differential game. Section 7 deals with differential games in infinite time horizon, with exponentially discounted payoffs. In this case, the search for Nash solutions in feedback form leads to a system of time-independent H-J equations. Section 8 contains a simple example of a game with infinitely many players. This is intended to convey a flavor of the newly emerging theory of mean field games. Modeling issues, and directions of current research, are briefly discussed in Section 9. Finally, the Appendix collects background material on multivalued functions, selections and fixed point theorems, optimal control theory, and hyperbolic PDEs.", "title": "" }, { "docid": "01f31507360e1a675a1a76d8a3dbf9f2", "text": "Event detection from tweets is an important task to understand the current events/topics attracting a large number of common users. However, the unique characteristics of tweets (e.g. short and noisy content, diverse and fast changing topics, and large data volume) make event detection a challenging task. 
Most existing techniques proposed for well written documents (e.g. news articles) cannot be directly adopted. In this paper, we propose a segment-based event detection system for tweets, called Twevent. Twevent first detects bursty tweet segments as event segments and then clusters the event segments into events considering both their frequency distribution and content similarity. More specifically, each tweet is split into non-overlapping segments (i.e. phrases possibly refer to named entities or semantically meaningful information units). The bursty segments are identified within a fixed time window based on their frequency patterns, and each bursty segment is described by the set of tweets containing the segment published within that time window. The similarity between a pair of bursty segments is computed using their associated tweets. After clustering bursty segments into candidate events, Wikipedia is exploited to identify the realistic events and to derive the most newsworthy segments to describe the identified events. We evaluate Twevent and compare it with the state-of-the-art method using 4.3 million tweets published by Singapore-based users in June 2010. In our experiments, Twevent outperforms the state-of-the-art method by a large margin in terms of both precision and recall. More importantly, the events detected by Twevent can be easily interpreted with little background knowledge because of the newsworthy segments. We also show that Twevent is efficient and scalable, leading to a desirable solution for event detection from tweets.", "title": "" }, { "docid": "bd9064905ba4ed166ad1e9c41eca7b34", "text": "Governments worldwide are encouraging public agencies to join e-Government initiatives in order to provide better services to their citizens and businesses; hence, methods of evaluating the readiness of individual public agencies to execute specific e-Government programs and directives are a key ingredient in the successful expansion of e-Government. To satisfy this need, a model called the eGovernment Maturity Model (eGov-MM) was developed, integrating the assessment of technological, organizational, operational, and human capital capabilities, under a multi-dimensional, holistic, and evolutionary approach. The model is strongly supported by international best practices, and provides tuning mechanisms to enable its alignment with nation-wide directives on e-Government. This article describes how the model was conceived, designed, developed, field tested by expert public officials from several government agencies, and finally applied to a selection of 30 public agencies in Chile, generating the first formal measurements, assessments, and rankings of their readiness for eGovernment. The implementation of the model also provided several recommendations to policymakers at the national and agency levels.", "title": "" }, { "docid": "975bc281e14246e29da61495e1e5dae1", "text": "We have introduced the biomechanical research on snakes and developmental research on snake-like robots that we have been working on. We could not introduce everything we developed. There were also a smaller snake-like active endoscope; a large-sized snake-like inspection robot for nuclear reactor related facility, Koryu, 1 m in height, 3.5 m in length, and 350 kg in weight; and several other snake-like robots. Development of snake-like robots is still one of our latest research topics. 
We feel that the technical difficulties in putting snake-like robots into practice have almost been overcome by past research, so we believe that such practical use of snake-like robots can be realized soon.", "title": "" }, { "docid": "8d5dca364cbe5e3825e2f267d1c41d50", "text": "This paper describes an algorithm based on constrained variance maximization for the restoration of a blurred image. Blurring is a smoothing process by definition. Accordingly, the deblurring filter shall be able to perform as a high pass filter, which increases the variance. Therefore, we formulate a variance maximization object function for the deconvolution filter. Using principal component analysis (PCA), we find the filter maximizing the object function. PCA is more than just a high pass filter; by maximizing the variances, it is able to perform the decorrelation, by which the original image is extracted from the mixture (the blurred image). Our approach was experimentally compared with the adaptive Lucy-Richardson maximum likelihood (ML) algorithm. The comparative results on both synthesized and real blurred images are included.", "title": "" } ]
scidocsrr
aad6cfd89669e7d6d2795535e0f9eb13
On the Practicality of Cryptographically Enforcing Dynamic Access Control Policies in the Cloud
[ { "docid": "a08fe0c015f5fc02b7654f3fd00fb599", "text": "Recently, there has been considerable interest in attribute based access control (ABAC) to overcome the limitations of the dominant access control models (i.e, discretionary-DAC, mandatory-MAC and role based-RBAC) while unifying their advantages. Although some proposals for ABAC have been published, and even implemented and standardized, there is no consensus on precisely what is meant by ABAC or the required features of ABAC. There is no widely accepted ABAC model as there are for DAC, MAC and RBAC. This paper takes a step towards this end by constructing an ABAC model that has “just sufficient” features to be “easily and naturally” configured to do DAC, MAC and RBAC. For this purpose we understand DAC to mean owner-controlled access control lists, MAC to mean lattice-based access control with tranquility and RBAC to mean flat and hierarchical RBAC. Our central contribution is to take a first cut at establishing formal connections between the three successful classical models and desired ABAC models.", "title": "" } ]
[ { "docid": "c91cc6de1e26d9ac9b5ba03ba67fa9b9", "text": "As in most of the renewable energy sources it is not possible to generate high voltage directly, the study of high gain dc-dc converters is an emerging area of research. This paper presents a high step-up dc-dc converter based on current-fed Cockcroft-Walton multiplier. This converter not only steps up the voltage gain but also eliminates the use of high frequency transformer which adds to cost and design complexity. N-stage Cockcroft-Walton has been utilized to increase the voltage gain in place of a transformer. This converter also provides dual input operation, interleaved mode and maximum power point tracking control (if solar panel is used as input). This converter is utilized for resistive load and a pulsed power supply and the effect is studied in high voltage application. Simulation has been performed by designing a converter of 450 W, 400 V with single source and two stage of Cockcroft-Walton multiplier and interleaved mode of operation is performed. Design parameters as well as simulation results are presented and verified in this paper.", "title": "" }, { "docid": "0f02a733a66e38e46663a27f5d42d2a2", "text": "In this paper, an effective collaborative filtering algorithm for top-N item recommendation with implicit feedback is proposed. The task of top-N item recommendation is to predict a ranking of items (movies, books, songs, or products in general) that can be of interest for a user based on earlier preferences of the user. We focus on implicit feedback where preferences are given in the form of binary events/ratings. Differently from state-of-the-art methods, the method proposed is designed to optimize the AUC directly within a margin maximization paradigm. Specifically, this turns out in a simple constrained quadratic optimization problem, one for each user. Experiments performed on several benchmarks show that our method significantly outperforms state-of-the-art matrix factorization methods in terms of AUC of the obtained predictions.", "title": "" }, { "docid": "24d0d2a384b2f9cefc6e5162cdc52c45", "text": "Food classification from images is a fine-grained classification problem. Manual curation of food images is cost, time and scalability prohibitive. On the other hand, web data is available freely but contains noise. In this paper, we address the problem of classifying food images with minimal data curation. We also tackle a key problems with food images from the web where they often have multiple cooccuring food types but are weakly labeled with a single label. We first demonstrate that by sequentially adding a few manually curated samples to a larger uncurated dataset from two web sources, the top-1 classification accuracy increases from 50.3% to 72.8%. To tackle the issue of weak labels, we augment the deep model with Weakly Supervised learning (WSL) that results in an increase in performance to 76.2%. Finally, we show some qualitative results to provide insights into the performance improvements using the proposed ideas.", "title": "" }, { "docid": "b42b9393a6f74a0891d5ba0e368d5dcd", "text": "In recent years, many research efforts have been focused on investigation of potential connection between social networking and mental health issues. Particularly important and controversial remains the association between Facebook use, self-esteem and life satisfaction. 
In our cross-sectional study, on a sample of 381 Facebook users, we tested the existence and strength of this relationship using Bergen Facebook Addiction Scale (BFAS), Facebook Intensity Scale (FBI), Rosenberg's Self-Esteem Scale (SES), and Satisfaction With Life Scale (SWLS). With k-means cluster analysis, we divided the sample into 3 groups: ordinary, intensive, and addicted Facebook users. The results of our study show that ordinary Facebook users differ statistically in self-esteem and life satisfaction from both addicted and intensive users. Facebook addiction was in relation with lower self-esteem. Facebook addiction was also negatively related to life satisfaction. These results are in accordance with the previously published findings of other authors in the fields of social networking psychology and psychiatry. © 2015 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "849b2da3776e1f6668cce50d804d2971", "text": "Post-event timeline reconstruction plays a critical role in forensic investigation and serves as a means of identifying evidence of the digital crime. We present an artificial neural networks based approach for post-event timeline reconstruction using the file system activities. A variety of digital forensic tools have been developed during the past two decades to assist computer forensic investigators undertaking digital timeline analysis, but most of the tools cannot handle large volumes of data efficiently. This paper looks at the effectiveness of employing neural network methodology for computer forensic analysis by preparing a timeline of relevant events occurring on a computing machine by tracing the previous file system activities. Our approach consists of monitoring the file system manipulations, capturing file system snapshots at discrete intervals of time to characterise the use of different software applications, and then using this captured data to train a neural network to recognise execution patterns of the application programs. The trained version of the network may then be used to generate a post-event timeline of a seized hard disk to verify the execution of different applications at different time intervals to assist in the identification of available evidence. © 2008 Published by Elsevier Ltd.", "title": "" }, { "docid": "a549abeda438ce7ce001854aadb63d81", "text": "Cyberbullying is a phenomenon which negatively affects the individuals, the victims suffer from various mental issues, ranging from depression, loneliness, anxiety to low self-esteem. In parallel with the pervasive use of social media, cyberbullying is becoming more and more prevalent. Traditional mechanisms to fight against cyberbullying include the use of standards and guidelines, human moderators, and blacklists based on the profane words. However, these mechanisms fall short in social media and cannot scale well. Therefore, it is necessary to develop a principled learning framework to automatically detect cyberbullying behaviors. However, it is a challenging task due to short, noisy and unstructured content information and intentional obfuscation of the abusive words or phrases by social media users. Motivated by sociological and psychological findings on bullying behaviors and the correlation with emotions, we propose to leverage sentiment information to detect cyberbullying behaviors in social media by proposing a sentiment informed cyberbullying detection framework. 
Experimental results on two real-world, publicly available social media datasets show the superiority of the proposed framework. Further studies validate the effectiveness of leveraging sentiment information for cyberbullying detection.", "title": "" }, { "docid": "febc387da7c4ee2c576393d54a0c142e", "text": "Sensors measure physical quantities of the environment for sensing and actuation systems, and are widely used in many commercial embedded systems such as smart devices, drones, and medical devices because they offer convenience and accuracy. As many sensing and actuation systems depend entirely on data from sensors, these systems are naturally vulnerable to sensor spoofing attacks that use fabricated physical stimuli. As a result, the systems become entirely insecure and unsafe. In this paper, we propose a new type of sensor spoofing attack based on saturation. A sensor shows a linear characteristic between its input physical stimuli and output sensor values in a typical operating region. However, if the input exceeds the upper bound of the operating region, the output is saturated and does not change as much as the corresponding changes of the input. Using saturation, our attack can make a sensor ignore legitimate inputs. To demonstrate our sensor spoofing attack, we target two medical infusion pumps equipped with infrared (IR) drop sensors to control precisely the amount of medicine injected into a patient’s body. Our experiments based on analyses of the drop sensors show that the output of them could be manipulated by saturating the sensors using an additional IR source. In addition, by analyzing the infusion pumps’ firmware, we figure out the vulnerability in the mechanism handling the output of the drop sensors, and implement a sensor spoofing attack that can bypass the alarm systems of the targets. As a result, we show that both over-infusion and under-infusion are possible: our spoofing attack can inject up to 3.33 times the intended amount of fluid or 0.65 times of it for a 10 minute period.", "title": "" }, { "docid": "e7646a79b25b2968c3c5b668d0216aa6", "text": "In this paper, an image retrieval methodology suited for search in large collections of heterogeneous images is presented. The proposed approach employs a fully unsupervised segmentation algorithm to divide images into regions. Low-level features describing the color, position, size and shape of the resulting regions are extracted and are automatically mapped to appropriate intermediate-level descriptors forming a simple vocabulary termed object ontology. The object ontology is used to allow the qualitative definition of the high-level concepts the user queries for (semantic objects, each represented by a keyword) in a human-centered fashion. When querying, clearly irrelevant image regions are rejected using the intermediate-level descriptors; following that, a relevance feedback mechanism employing the low-level features is invoked to produce the final query results. 
The proposed approach bridges the gap between keyword-based approaches, which assume the existence of rich image captions or require manual evaluation and annotation of every image of the collection, and query-by-example approaches, which assume that the user queries for images similar to one that already is at his disposal.", "title": "" }, { "docid": "9b30a4bce5cab904fc7faab556548c77", "text": "Hybrid electric vehicles employ a hybrid propulsion system to combine the energy efficiency of electric motor and a long driving range of internal combustion engine, thereby achieving a higher fuel economy as well as convenience compared with conventional ICE vehicles. However, the relatively complicated powertrain structures of HEVs necessitate an effective power management policy to determine the power split between ICE and EM. In this work, we propose a deep reinforcement learning framework of the HEV power management with the aim of improving fuel economy. The DRL technique is comprised of an offline deep neural network construction phase and an online deep Q-learning phase. Unlike traditional reinforcement learning, DRL presents the capability of handling the high dimensional state and action space in the actual decision-making process, making it suitable for the HEV power management problem. Enabled by the DRL technique, the derived HEV power management policy is close to optimal, fully model-free, and independent of a prior knowledge of driving cycles. Simulation results based on actual vehicle setup over real-world and testing driving cycles demonstrate the effectiveness of the proposed framework on optimizing HEV fuel economy.", "title": "" }, { "docid": "b4d7a17eb034bcf5f6616d9338fe4265", "text": "Accessory breasts, usually with a protuberant appearance, are composed of both the central accessory breast tissue and adjacent fat tissue. They are a palpable convexity and cosmetically unsightly. Consequently, patients often desire cosmetic improvement. The traditional general surgical treatment for accessory breasts is removal of the accessory breast tissue, fat tissue, and covering skin as a whole unit. A rather long ugly scar often is left after this operation. A minimally invasive method frequently used by the plastic surgeon is to “dig out” the accessory breast tissue. A central depression appearance often is left due to the adjacent fat tissue remnant. From the cosmetic point of view, neither a long scar nor a bulge is acceptable. A minimal incision is made, and the tumescent liposuction technique is used to aspirate out both the central accessory breast tissue and adjacent fat tissue. If there is an areola or nipple in the accessory breast, either the areola or nipple is excised after liposuction during the same operation. For patients who have too much extra skin in the accessory breast area, a small fusiform incision is made to remove the extra skin after the accessory breast tissue and fat tissue have been aspirated out. From August 2003 to January 2008, 51 patients underwent surgery using the described technique. All were satisfied with their appearance after their initial surgery except for two patients with minimal associated morbidity. This report describes a new approach for treating accessory breasts that results in minimal scarring and a better appearance than can be achieved with traditional methods.", "title": "" }, { "docid": "d5e2d1f3662d66f6d4cfc1c98e4de610", "text": "Compressed sensing (CS) enables significant reduction of MR acquisition time with performance guarantee. 
However, computational complexity of CS is usually expensive. To address this, here we propose a novel deep residual learning algorithm to reconstruct MR images from sparsely sampled k-space data. In particular, based on the observation that coherent aliasing artifacts from downsampled data has topologically simpler structure than the original image data, we formulate a CS problem as a residual regression problem and propose a deep convolutional neural network (CNN) to learn the aliasing artifacts. Experimental results using single channel and multi channel MR data demonstrate that the proposed deep residual learning outperforms the existing CS and parallel imaging algorithms. Moreover, the computational time is faster in several orders of magnitude.", "title": "" }, { "docid": "119d4cf01ad27da7ff6cfaa8427ced0c", "text": "The overwhelming amount of short text data on social media and elsewhere has posed great challenges to topic modeling due to the sparsity problem. Most existing attempts to alleviate this problem resort to heuristic strategies to aggregate short texts into pseudo-documents before the application of standard topic modeling. Although such strategies cannot be well generalized to more general genres of short texts, the success has shed light on how to develop a generalized solution. In this paper, we present a novel model towards this goal by integrating topic modeling with short text aggregation during topic inference. The aggregation is founded on general topical affinity of texts rather than particular heuristics, making the model readily applicable to various short texts. Experimental results on real-world datasets validate the effectiveness of this new model, suggesting that it can distill more meaningful topics from short texts.", "title": "" }, { "docid": "ef1bc2fc31f465300ed74863c350298a", "text": "Work on the problem of contextualized word representation—the development of reusable neural network components for sentence understanding—has recently seen a surge of progress centered on the unsupervised pretraining task of language modeling with methods like ELMo (Peters et al., 2018). This paper contributes the first large-scale systematic study comparing different pretraining tasks in this context, both as complements to language modeling and as potential alternatives. The primary results of the study support the use of language modeling as a pretraining task and set a new state of the art among comparable models using multitask learning with language models. However, a closer look at these results reveals worryingly strong baselines and strikingly varied results across target tasks, suggesting that the widely-used paradigm of pretraining and freezing sentence encoders may not be an ideal platform for further work.", "title": "" }, { "docid": "e50dbc2f378b98d94e6cc7236f057828", "text": "Monosodium glutamate (MSG) is a popular flavour enhancer used in food industries; however, excess MSG is neurotoxic. Oxidative stress is well documented in MSG induced neurotoxicity. The compounds having antioxidant and anti-inflammatory properties reportedly possess beneficial effects against various neurotoxic insults. Calendula officinalis Linn. flower extract (COE) is known for its potent antioxidant and anti-inflammatory activities. Hence, this present study has been designed to evaluate the neuroprotective effect of COE on MSG-induced neurotoxicity in rats. 
Adult Wistar rats were administered systemically for 7 days with MSG and after one hour of MSG injection, rats were treated with COE (100 and 200 mg/kg) orally. At the end of the treatment period, animals were assessed for locomotor activity and were sacrificed; brains were isolated for estimation of LPO, GSH, CAT, TT, GST, Nitrite and histopathological studies. MSG caused a significant alteration in animal behavior, oxidative defense (raised levels of LPO, nitrite concentration, depletion of antioxidant levels) and hippocampal neuronal histology. Treatment with COE significantly attenuated behavioral alterations, oxidative stress, and hippocampal damage in MSG-treated animals. Hence, this study demonstrates that COE protects against MSG-induced neurotoxicity in rats. The antioxidant and anti-inflammatory properties of COE may be responsible for its observed neuroprotective action.", "title": "" }, { "docid": "a110e4872095e8daf0974fa9cb051c39", "text": "The present study provides the first evidence that illiteracy can be reliably predicted from standard mobile phone logs. By deriving a broad set of mobile phone indicators reflecting users’ financial, social and mobility patterns we show how supervised machine learning can be used to predict individual illiteracy in an Asian developing country, externally validated against a large-scale survey. On average the model performs 10 times better than random guessing with a 70% accuracy. Further we show how individual illiteracy can be aggregated and mapped geographically at cell tower resolution. Geographical mapping of illiteracy is crucial to know where the illiterate people are, and where to put in resources. In underdeveloped countries such mappings are often based on out-dated household surveys with low spatial and temporal resolution. One in five people worldwide struggle with illiteracy, and it is estimated that illiteracy costs the global economy more than $1 trillion dollars each year [1]. These results potentially enable cost-effective, questionnaire-free investigation of illiteracy-related questions on an unprecedented scale.", "title": "" }, { "docid": "2111199064e824173cbf1322e3fdcd47", "text": "This work addresses fundamental questions about the nature of cybercriminal organization. We investigate the organization of three underground forums: BlackhatWorld, Carders and L33tCrew to understand the nature of distinct communities within a forum, the structure of organization and the impact of enforcement, in particular banning members, on the structure of these forums. We find that each forum is divided into separate competing communities. Smaller communities are limited to 100-230 members, have a two-tiered hierarchy akin to a gang, and focus on a subset of cybercrime activities. Larger communities may have thousands of members and a complex organization with a distributed multi-tiered hierarchy more akin to a mob; such communities also have a more diverse cybercrime portfolio compared to smaller cohorts. Finally, despite differences in size and cybercrime portfolios, members on a single forum have similar operational practices, for example, they use the same electronic currency.", "title": "" }, { "docid": "28b796954834230a0e8218e24bab0d35", "text": "Oral Squamous Cell Carcinoma (OSCC) is a common type of cancer of the oral epithelium. Despite their high impact on mortality, sufficient screening methods for early diagnosis of OSCC often lack accuracy and thus OSCCs are mostly diagnosed at a late stage. 
Early detection and accurate outline estimation of OSCCs would lead to a better curative outcome and a reduction in recurrence rates after surgical treatment. Confocal Laser Endomicroscopy (CLE) records sub-surface micro-anatomical images for in vivo cell structure analysis. Recent CLE studies showed great prospects for a reliable, real-time ultrastructural imaging of OSCC in situ. We present and evaluate a novel automatic approach for OSCC diagnosis using deep learning technologies on CLE images. The method is compared against textural feature-based machine learning approaches that represent the current state of the art. For this work, CLE image sequences (7894 images) from patients diagnosed with OSCC were obtained from 4 specific locations in the oral cavity, including the OSCC lesion. The present approach is found to outperform the state of the art in CLE image recognition with an area under the curve (AUC) of 0.96 and a mean accuracy of 88.3% (sensitivity 86.6%, specificity 90%).", "title": "" }, { "docid": "955a84510497eb3af804f7920cc315c8", "text": "Mangroves on Pacific high islands offer a number of important ecosystem services to both natural ecological communities and human societies. High islands are subjected to constant erosion over geologic time, which establishes an important source of terrigeneous sediment for nearby marine communities. Many of these sediments are deposited in mangrove forests and offer mangroves a potentially important means for adjusting surface elevation with rising sea level. In this study, we investigated sedimentation and elevation dynamics of mangrove forests in three hydrogeomorphic settings on the islands of Kosrae and Pohnpei, Federated States of Micronesia (FSM). Surface accretion rates ranged from 2.9 to 20.8 mm y−1, and are high for naturally occurring mangroves. Although mangrove forests in Micronesian high islands appear to have a strong capacity to offset elevation losses by way of sedimentation, elevation change over 6½ years ranged from −3.2 to 4.1 mm y−1, depending on the location. Mangrove surface elevation change also varied by hydrogeomorphic setting and river, and suggested differential, and not uniformly bleak, susceptibilities among Pacific high island mangroves to sea-level rise. Fringe, riverine, and interior settings registered elevation changes of −1.30, 0.46, and 1.56 mm y−1, respectively, with the greatest elevation deficit (−3.2 mm y−1) from a fringe zone on Pohnpei and the highest rate of elevation gain (4.1 mm y−1) from an interior zone on Kosrae. Relative to sea-level rise estimates for FSM (0.8–1.8 mm y−1) and assuming a consistent linear trend in these estimates, soil elevations in mangroves on Kosrae and Pohnpei are experiencing between an annual deficit of 4.95 mm and an annual surplus of 3.28 mm. Although natural disturbances are important in mediating elevation gain in some situations, constant allochthonous sediment deposition probably matters most on these Pacific high islands, and is especially helpful in certain hydrogeomorphic zones. Fringe mangrove forests are most susceptible to sea-level rise, such that protection of these outer zones from anthropogenic disturbances (for example, harvesting) may slow the rate at which these zones convert to open water.", "title": "" }, { "docid": "92b26cb86ba44eb63e3e9baba2e90acb", "text": "A compound or collision tumor is a rare occurrence in dermatological findings [1]. 
The coincidence of malignant melanoma (MM) and basal cell carcinoma (BCC) within the same lesion has only been described in a few cases in the literature [2–5]. However, until now the pathogenesis of collision tumors consisting of MM and BCC remains unclear [2]. To our knowledge it has not yet been established whether there is a concordant genetic background or independent origin as a possible cause for the development of such a compound tumor. We, therefore, present the extremely rare case of a collision tumor of MM and BCC and the results of a genome-wide analysis by single nucleotide polymorphism array (SNP-Array) for detection of identical genomic aberrations.", "title": "" }, { "docid": "cd6e9587aa41f95768d6c146df82c50f", "text": "This paper deals with genetic algorithm implementation in Python. Genetic algorithm is a probabilistic search algorithm based on the mechanics of natural selection and natural genetics. In genetic algorithms, a solution is represented by a list or a string. List or string processing in Python is more productive than in C/C++/Java. Genetic algorithms implementation in Python is quick and easy. In this paper, we introduce genetic algorithm implementation methods in Python. And we discuss various tools for speeding up Python programs.", "title": "" } ]
scidocsrr
85509932c3c9b193895a758ca86b60ba
A Survey of Human Activity Recognition Using WiFi CSI
[ { "docid": "382ed00313a1769a135c625d529b735e", "text": "Activity monitoring in home environments has become increasingly important and has the potential to support a broad array of applications including elder care, well-being management, and latchkey child safety. Traditional approaches involve wearable sensors and specialized hardware installations. This paper presents device-free location-oriented activity identification at home through the use of existing WiFi access points and WiFi devices (e.g., desktops, thermostats, refrigerators, smartTVs, laptops). Our low-cost system takes advantage of the ever more complex web of WiFi links between such devices and the increasingly fine-grained channel state information that can be extracted from such links. It examines channel features and can uniquely identify both in-place activities and walking movements across a home by comparing them against signal profiles. Signal profiles construction can be semi-supervised and the profiles can be adaptively updated to accommodate the movement of the mobile devices and day-to-day signal calibration. Our experimental evaluation in two apartments of different size demonstrates that our approach can achieve over 96% average true positive rate and less than 1% average false positive rate to distinguish a set of in-place and walking activities with only a single WiFi access point. Our prototype also shows that our system can work with wider signal band (802.11ac) with even higher accuracy.", "title": "" } ]
[ { "docid": "691da5852aad20ace40be20bfeae3ea7", "text": "Experimental manipulations of affect induced by a brief newspaper report of a tragic event produced a pervasive increase in subjects' estimates of the frequency of many risks and other undesirable events. Contrary to expectation, the effect was independent of the similarity between the report and the estimated risk. An account of a fatal stabbing did not increase the frequency estimate of a closely related risk, homicide, more than the estimates of unrelated risks such as natural hazards. An account of a happy event that created positive affect produced a comparable global decrease in judged frequency of risks.", "title": "" }, { "docid": "b66f17947ef721b4e154c4941a685cd5", "text": "One important property of collaborative filtering recommender systems is that popular items are recommended disproportionately often because they provide extensive usage data and, thus, can be recommended to more users. Compared to popular products, the niches can be as economically attractive as mainstream fare for online retailers. The online retailers can stock virtually everything, and the number of available niche products exceeds the hits by several orders of magnitude. This work addresses accuracy, coverage and prediction time issues to propose a novel latent factor model called latent collaborative relations (LCR), which transforms the recommendation problem into a nearest neighbor search problem by using the proposed scoring function. We project users and items to the latent space, and calculate their similarities based on Euclidean metric. Additionally, the proposed model provides an elegant way to incorporate with locality sensitive hashing (LSH) to provide a fast recommendation while retaining recommendation accuracy and coverage. The experimental results indicate that the speedup is significant, especially when one is confronted with large-scale data sets. As for recommendation accuracy and coverage, the proposed method is competitive on three data sets. © 2016 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "b71ec61f22457a5604a1c46087685e45", "text": "Deep neural networks have been widely adopted for automatic organ segmentation from abdominal CT scans. However, the segmentation accuracy of some small organs (e.g., the pancreas) is sometimes below satisfaction, arguably because deep networks are easily disrupted by the complex and variable background regions which occupy a large fraction of the input volume. In this paper, we formulate this problem into a fixed-point model which uses a predicted segmentation mask to shrink the input region. This is motivated by the fact that a smaller input region often leads to more accurate segmentation. In the training process, we use the ground-truth annotation to generate accurate input regions and optimize network weights. On the testing stage, we fix the network parameters and update the segmentation results in an iterative manner. We evaluate our approach on the NIH pancreas segmentation dataset, and outperform the state-of-the-art by more than 4%, measured by the average Dice-Sørensen Coefficient (DSC). In addition, we report 62.43% DSC in the worst case, which guarantees the reliability of our approach in clinical applications.", "title": "" }, { "docid": "05c617ddbace7ef78469f29ddc6e9b26", "text": "A PNP-triggered SCR with improved trigger techniques is proposed for high-speed I/O ESD protection. 
By using these techniques, a high trigger voltage, VTRIG, for latch-up immunity during normal operating conditions, together with a low trigger voltage, Vt1, during ESD stress conditions, can be realized with simple design. Moreover, I/O circuits including the ESD protection are operable at a higher voltage than an on-chip core VDD. The PNP-triggered SCR is demonstrated in our 90 nm CMOS technology, suitable trigger voltages at both conditions (VTRIG = 4.0 V@125degC/ Vt1 = 2.0 V), and good ESD performances (HBM: 3500 V/ MM: 200 V) are achieved.", "title": "" }, { "docid": "5aa14d0c93eded7085fe637bffa155f2", "text": "In the human genome, 98% of DNA sequences are non-protein-coding regions that were previously disregarded as junk DNA. In fact, non-coding regions host a variety of cis-regulatory regions which precisely control the expression of genes. Thus, Identifying active cis-regulatory regions in the human genome is critical for understanding gene regulation and assessing the impact of genetic variation on phenotype. The developments of high-throughput sequencing and machine learning technologies make it possible to predict cis-regulatory regions genome wide. Based on rich data resources such as the Encyclopedia of DNA Elements (ENCODE) and the Functional Annotation of the Mammalian Genome (FANTOM) projects, we introduce DECRES based on supervised deep learning approaches for the identification of enhancer and promoter regions in the human genome. Due to their ability to discover patterns in large and complex data, the introduction of deep learning methods enables a significant advance in our knowledge of the genomic locations of cis-regulatory regions. Using models for well-characterized cell lines, we identify key experimental features that contribute to the predictive performance. Applying DECRES, we delineate locations of 300,000 candidate enhancers genome wide (6.8% of the genome, of which 40,000 are supported by bidirectional transcription data), and 26,000 candidate promoters (0.6% of the genome). The predicted annotations of cis-regulatory regions will provide broad utility for genome interpretation from functional genomics to clinical applications. The DECRES model demonstrates potentials of deep learning technologies when combined with high-throughput sequencing data, and inspires the development of other advanced neural network models for further improvement of genome annotations.", "title": "" }, { "docid": "bec2b4da297daca5a5f04affea2b16b2", "text": "Using current reinforcement learning methods, it has recently become possible to learn to play unknown 3D games from raw pixels. In this work, we study the challenges that arise in such complex environments, and summarize current methods to approach these. We choose a task within the Doom game, that has not been approached yet. The goal for the agent is to fight enemies in a 3D world consisting of five rooms. We train the DQN and LSTMA3C algorithms on this task. Results show that both algorithms learn sensible policies, but fail to achieve high scores given the amount of training. We provide insights into the learned behavior, which can serve as a valuable starting point for further research in the Doom domain.", "title": "" }, { "docid": "dbea2e92ea791f60f1f3ce651b9ae17c", "text": "The crystalline silicon heterojunction structure adopted in photovoltaic modules commercialized as Panasonic's HIT has significantly reduced recombination loss, resulting in greater conversion efficiency. 
The structure of an interdigitated back contact was adopted with our crystalline silicon heterojunction solar cells to reduce optical loss from a front grid electrode, a transparent conducting oxide (TCO) layer, and a-Si:H layers as an approach for exceeding the conversion efficiency of 25%. As a result of the improved short-circuit current (Jsc), we achieved the world's highest efficiency of 25.6% for crystalline silicon-based solar cells under 1-sun illumination (designated area: 143.7 cm2).", "title": "" }, { "docid": "e28ba2ea209537cf9867428e3cf7fdd7", "text": "People take their mobile phones everywhere they go. In Saudi Arabia, the mobile penetration is very high and students use their phones for different reasons in the classroom. The use of mobile devices in classroom triggers an alert of the impact it might have on students’ learning. This study investigates the association between the use of mobile phones during classroom and the learners’ performance and satisfaction. Results showed that students get distracted, and that this diversion of their attention is reflected in their academic success. However, this is not applicable for all. Some students received high scores even though they declared using mobile phones in classroom, which triggers a request for a deeper study.", "title": "" }, { "docid": "093e8a62183287d5085ae3a8a10836b2", "text": "We present the first parser for UCCA, a cross-linguistically applicable framework for semantic representation, which builds on extensive typological work and supports rapid annotation. UCCA poses a challenge for existing parsing techniques, as it exhibits reentrancy (resulting in DAG structures), discontinuous structures and non-terminal nodes corresponding to complex semantic units. To our knowledge, the conjunction of these formal properties is not supported by any existing parser. Our transition-based parser, which uses a novel transition set and features based on bidirectional LSTMs, has value not just for UCCA parsing: its ability to handle more general graph structures can inform the development of parsers for other semantic DAG structures, and in languages that frequently use discontinuous structures.", "title": "" }, { "docid": "09380650b0af3851e19f18de4a2eacb2", "text": "This paper presents a novel self-assembly modular robot (Sambot) that also shares characteristics with self-reconfigurable and self-assembly and swarm robots. Each Sambot can move autonomously and connect with the others. Multiple Sambot can be self-assembled to form a robotic structure, which can be reconfigured into different configurable robots and can locomote. A novel mechanical design is described to realize function of autonomous motion and docking. Introducing embedded mechatronics integrated technology, whole actuators, sensors, microprocessors, power and communication unit are embedded in the module. The Sambot is compact and flexble, the overall size is 80×80×102mm. The preliminary self-assembly and self-reconfiguration of Sambot is discussed, and several possible configurations consisting of multiple Sambot are designed in simulation environment. At last, the experiment of self-assembly and self-reconfiguration and locomotion of multiple Sambot has been implemented.", "title": "" }, { "docid": "e245b1444428e4187737545408dacb72", "text": "Technology offers great potential to reshape our relationship to work, but the form of that reshaping should not be allowed to happen haphazardly. 
As work and technology use become increasingly intertwined, a number of issues deserve re-examination. Some of these relate to work intensification and/or longer hours and possible exchange for flexibility. Recent research on use of employer-supplied smart phones offers some insight into employee perceptions of why the company supplies this technology and whether there is risk to declining the opportunity. Because dangers are more readily apparent, current limitations of technology use have been approached more often through laws related to driving than through general policies or regulation about the work itself. However, there are other concerns that may translate into employer liability beyond the possibility of car accidents. A variety of these concerns are covered in this article, along with related suggestion for actions by employers, their advisory groups, technology companies, government and employees themselves.", "title": "" }, { "docid": "f4963c41832024b8cd7d3480204275fa", "text": "Almost surreptitiously, crowdsourcing has entered software engineering practice. In-house development, contracting, and outsourcing still dominate, but many development projects use crowdsourcing-for example, to squash bugs, test software, or gather alternative UI designs. Although the overall impact has been mundane so far, crowdsourcing could lead to fundamental, disruptive changes in how software is developed. Various crowdsourcing models have been applied to software development. Such changes offer exciting opportunities, but several challenges must be met for crowdsourcing software development to reach its potential.", "title": "" }, { "docid": "5dfbe9036bc9fd63edc53992daf1858d", "text": "The paper reviews applications of data mining in manufacturing engineering, in particular production processes, operations, fault detection, maintenance, decision support, and product quality improvement. Customer relationship management, information integration aspects, and standardization are also briefly discussed. This review is focused on demonstrating the relevancy of data mining to manufacturing industry, rather than discussing the data mining domain in general. The volume of general data mining literature makes it difficult to gain a precise view of a target area such as manufacturing engineering, which has its own particular needs and requirements for mining applications. This review reveals progressive applications in addition to existing gaps and less considered areas such as manufacturing planning and shop floor control. DOI: 10.1115/1.2194554", "title": "" }, { "docid": "651e4362136a5700a9beaa7242dae654", "text": "This thesis makes several contributions to the field of data compression. Lossless data compression algorithms shorten the description of input objects, such as sequences of text, in a way that allows perfect recovery of the original object. Such algorithms exploit the fact that input objects are not uniformly distributed: by allocating shorter descriptions to more probable objects and longer descriptions to less probable objects, the expected length of the compressed output can be made shorter than the object’s original description. Compression algorithms can be designed to match almost any given probability distribution over input objects. This thesis employs probabilistic modelling, Bayesian inference, and arithmetic coding to derive compression algorithms for a variety of applications, making the underlying probability distributions explicit throughout. 
A general compression toolbox is described, consisting of practical algorithms for compressing data distributed by various fundamental probability distributions, and mechanisms for combining these algorithms in a principled way. Building on the compression toolbox, new mathematical theory is introduced for compressing objects with an underlying combinatorial structure, such as permutations, combinations, and multisets. An example application is given that compresses unordered collections of strings, even if the strings in the collection are individually incompressible. For text compression, a novel unifying construction is developed for a family of contextsensitive compression algorithms. Special cases of this family include the PPM algorithm and the Sequence Memoizer, an unbounded depth hierarchical Pitman–Yor process model. It is shown how these algorithms are related, what their probabilistic models are, and how they produce fundamentally similar results. The work concludes with experimental results, example applications, and a brief discussion on cost-sensitive compression and adversarial sequences.", "title": "" }, { "docid": "44327eaaabf489d5deaf97a5bb041985", "text": "Convolutional neural networks with deeply trained make a significant performance improvement in face detection. However, the major shortcomings, i.e. need of high computational cost and slow calculation, make the existing CNN-based face detectors impractical in many applications. In this paper, a real-time approach for face detection was proposed by utilizing a single end-to-end deep neural network with multi-scale feature maps, multi-scale prior aspect ratios as well as confidence rectification. Multi-scale feature maps overcome the difficulties of detecting small face, and meanwhile, multiscale prior aspect ratios reduce the computing cost and the confidence rectification, which is in line with the biological intuition and can further improve the detection rate. Evaluated on the public benchmark, FDDB, the proposed algorithm, gained a performance as good as the state-of-the-art CNNbased methods, however, with much faster speed.", "title": "" }, { "docid": "b7d1428434a7274b55a00bce2cc0cf4f", "text": "This paper studies wideband hybrid precoder for downlink space-division multiple-access and orthogonal frequency-division multiple-access (SDMA-OFDMA) massive multi-input multi-output (MIMO) systems. We first derive an iterative algorithm to alternatingly optimize the phase-shifter based wideband analog precoder and low-dimensional digital precoders, then an efficient low-complexity non-iterative hybrid precoder proposes. Simulation results show that in wideband systems the performance of hybrid precoder is affected by the employed frequency-domain scheduling method and the number of available radio frequency (RF) chains, which can perform as well as narrowband hybrid precoder when greedy scheduling is employed and the number of RF chains is large.", "title": "" }, { "docid": "d01339e077c9d8300b4616e7c713f48e", "text": "Blockchains as a technology emerged to facilitate money exchange transactions and eliminate the need for a trusted third party to notarize and verify such transactions as well as protect data security and privacy. New structures of Blockchains have been designed to accommodate the need for this technology in other fields such as e-health, tourism and energy. 
This paper is concerned with the use of Blockchains in managing and sharing electronic health and medical records to allow patients, hospitals, clinics, and other medical stakeholders to share data amongst themselves, and increase interoperability. The selection of the Blockchains used architecture depends on the entities participating in the constructed chain network. Although the use of Blockchains may reduce redundancy and provide caregivers with consistent records about their patients, it still comes with a few challenges which could infringe patients' privacy, or potentially compromise the whole network of stakeholders. In this paper, we investigate different Blockchains structures, look at existing challenges and provide possible solutions. We focus on challenges that may expose patients' privacy and the resiliency of Blockchains to possible attacks.", "title": "" }, { "docid": "71c39b7a45a7bef11c642441191a12e1", "text": "Much of estimation of human internal state (goal, intentions, activities, preferences, etc.) is passive: an algorithm observes human actions and updates its estimate of human state. In this work, we embrace the fact that robot actions affect what humans do, and leverage it to improve state estimation. We enable robots to do active information gathering, by planning actions that probe the user in order to clarify their internal state. For instance, an autonomous car will plan to nudge into a human driver's lane to test their driving style. Results in simulation and in a user study suggest that active information gathering significantly outperforms passive state estimation.", "title": "" }, { "docid": "c30e938b57863772e8c7bc0085d22f71", "text": "Game theory is a set of tools developed to model interactions between agents with conflicting interests, and is thus well-suited to address some problems in communications systems. In this paper we present some of the basic concepts of game theory and show why it is an appropriate tool for analyzing some communication problems and providing insights into how communication systems should be designed. We then provided a detailed example in which game theory is applied to the power control problem in a", "title": "" } ]
scidocsrr
c0933e4b9b16c07345b563a3c7f108e9
The increasing burden of depression
[ { "docid": "7c106fc6fc05ec2d35b89a1dec8e2ca2", "text": "OBJECTIVE\nCurrent estimates of the prevalence of depression during pregnancy vary widely. A more precise estimate is required to identify the level of disease burden and develop strategies for managing depressive disorders. The objective of this study was to estimate the prevalence of depression during pregnancy by trimester, as detected by validated screening instruments (ie, Beck Depression Inventory, Edinburgh Postnatal Depression Score) and structured interviews, and to compare the rates among instruments.\n\n\nDATA SOURCES\nObservational studies and surveys were searched in MEDLINE from 1966, CINAHL from 1982, EMBASE from 1980, and HealthSTAR from 1975.\n\n\nMETHODS OF STUDY SELECTION\nA validated study selection/data extraction form detailed acceptance criteria. Numbers and percentages of depressed patients, by weeks of gestation or trimester, were reported.\n\n\nTABULATION, INTEGRATION, AND RESULTS\nTwo reviewers independently extracted data; a third party resolved disagreement. Two raters assessed quality by using a 12-point checklist. A random effects meta-analytic model produced point estimates and 95% confidence intervals (CIs). Heterogeneity was examined with the chi(2) test (no systematic bias detected). Funnel plots and Begg-Mazumdar test were used to assess publication bias (none found). Of 714 articles identified, 21 (19,284 patients) met the study criteria. Quality scores averaged 62%. Prevalence rates (95% CIs) were 7.4% (2.2, 12.6), 12.8% (10.7, 14.8), and 12.0% (7.4, 16.7) for the first, second, and third trimesters, respectively. Structured interviews found lower rates than the Beck Depression Inventory but not the Edinburgh Postnatal Depression Scale.\n\n\nCONCLUSION\nRates of depression, especially during the second and third trimesters of pregnancy, are substantial. Clinical and economic studies to estimate maternal and fetal consequences are needed.", "title": "" }, { "docid": "c5bc51e3e2ad5aedccfa17095ec1d7ed", "text": "CONTEXT\nLittle is known about the extent or severity of untreated mental disorders, especially in less-developed countries.\n\n\nOBJECTIVE\nTo estimate prevalence, severity, and treatment of Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV) mental disorders in 14 countries (6 less developed, 8 developed) in the World Health Organization (WHO) World Mental Health (WMH) Survey Initiative.\n\n\nDESIGN, SETTING, AND PARTICIPANTS\nFace-to-face household surveys of 60 463 community adults conducted from 2001-2003 in 14 countries in the Americas, Europe, the Middle East, Africa, and Asia.\n\n\nMAIN OUTCOME MEASURES\nThe DSM-IV disorders, severity, and treatment were assessed with the WMH version of the WHO Composite International Diagnostic Interview (WMH-CIDI), a fully structured, lay-administered psychiatric diagnostic interview.\n\n\nRESULTS\nThe prevalence of having any WMH-CIDI/DSM-IV disorder in the prior year varied widely, from 4.3% in Shanghai to 26.4% in the United States, with an interquartile range (IQR) of 9.1%-16.9%. Between 33.1% (Colombia) and 80.9% (Nigeria) of 12-month cases were mild (IQR, 40.2%-53.3%). Serious disorders were associated with substantial role disability. Although disorder severity was correlated with probability of treatment in almost all countries, 35.5% to 50.3% of serious cases in developed countries and 76.3% to 85.4% in less-developed countries received no treatment in the 12 months before the interview. 
Due to the high prevalence of mild and subthreshold cases, the number of those who received treatment far exceeds the number of untreated serious cases in every country.\n\n\nCONCLUSIONS\nReallocation of treatment resources could substantially decrease the problem of unmet need for treatment of mental disorders among serious cases. Structural barriers exist to this reallocation. Careful consideration needs to be given to the value of treating some mild cases, especially those at risk for progressing to more serious disorders.", "title": "" } ]
[ { "docid": "48c28572e5eafda1598a422fa1256569", "text": "Future power networks will be characterized by safe and reliable functionality against physical and cyber attacks. This paper proposes a unified framework and advanced monitoring procedures to detect and identify network components malfunction or measurements corruption caused by an omniscient adversary. We model a power system under cyber-physical attack as a linear time-invariant descriptor system with unknown inputs. Our attack model generalizes the prototypical stealth, (dynamic) false-data injection and replay attacks. We characterize the fundamental limitations of both static and dynamic procedures for attack detection and identification. Additionally, we design provably-correct (dynamic) detection and identification procedures based on tools from geometric control theory. Finally, we illustrate the effectiveness of our method through a comparison with existing (static) detection algorithms, and through a numerical study.", "title": "" }, { "docid": "ca74dda60d449933ff72d14fe5c7493c", "text": "We introduce a novel training principle for generative probabilistic models that is an alternative to maximum likelihood. The proposed Generative Stochastic Networks (GSN) framework generalizes Denoising Auto-Encoders (DAE) and is based on learning the transition operator of a Markov chain whose stationary distribution estimates the data distribution. The transition distribution is a conditional distribution that generally involves a small move, so it has fewer dominant modes and is unimodal in the limit of small moves. This simplifies the learning problem, making it less like density estimation and more akin to supervised function approximation, with gradients that can be obtained by backprop. The theorems provided here provide a probabilistic interpretation for denoising autoencoders and generalize them; seen in the context of this framework, auto-encoders that learn with injected noise are a special case of GSNs and can be interpreted as generative models. The theorems also provide an interesting justification for dependency networks and generalized pseudolikelihood and define an appropriate joint distribution and sampling mechanism, even when the conditionals are not consistent. GSNs can be used with missing inputs and can be used to sample subsets of variables given the rest. Experiments validating these theoretical results are conducted on both synthetic datasets and image datasets. The experiments employ a particular architecture that mimics the Deep Boltzmann Machine Gibbs sampler but that allows training to proceed with backprop through a recurrent neural network with noise injected inside and without the need for layerwise pretraining.", "title": "" }, { "docid": "584347daded5d7efd6f1e6fd9c932869", "text": "Polar codes are shown to be instances of both generalized concatenated codes and multilevel codes. It is shown that the performance of a polar code can be improved by representing it as a multilevel code and applying the multistage decoding algorithm with maximum likelihood decoding of outer codes. Additional performance improvement is obtained by replacing polar outer codes with other ones with better error correction performance. In some cases this also results in complexity reduction. 
It is shown that Gaussian approximation for density evolution enables one to accurately predict the performance of polar codes and concatenated codes based on them.", "title": "" }, { "docid": "7e4a485d489f9e9ce94889b52214c804", "text": "A situated ontology is a world model used as a computational resource for solving a particular set of problems. It is treated as neither a \\natural\" entity waiting to be discovered nor a purely theoretical construct. This paper describes how a semantico-pragmatic analyzer, Mikrokosmos, uses knowledge from a situated ontology as well as from language-speciic knowledge sources (lexicons and microtheory rules). Also presented are some guidelines for acquiring ontological concepts and an overview of the technology developed in the Mikrokosmos project for large-scale acquisition and maintenance of ontological databases. Tools for acquiring, maintaining, and browsing ontologies can be shared more readily than ontologies themselves. Ontological knowledge bases can be shared as computational resources if such tools provide translators between diierent representation formats. 1 A Situated Ontology World models (ontologies) in computational applications are artiicially constructed entities. They are created, not discovered. This is why so many diierent world models were suggested. Many ontologies are developed for purely theoretical purposes or without the context of a practical situation (e. Many practical knowledge-based systems, on the other hand, employ world or domain models without recognizing them as a separate knowledge source (e.g., Farwell, et al. 1993). In the eld of natural language processing (NLP) there is now a consensus that all NLP systems that seek to represent and manipulate meanings of texts need an ontology (e. In our continued eeorts to build a multilingual knowledge-based machine translation (KBMT) system using an interlingual meaning representation (e.g., Onyshkevych and Nirenburg, 1994), we have developed an ontology to facilitate natural language interpretation and generation. The central goal of the Mikrokosmos project is to develop a system that produces a comprehensive Text Meaning Representation (TMR) for an input text in any of a set of source languages. 1 Knowledge that supports this process is stored both in language-speciic knowledge sources and in an independently motivated, language-neutral ontology (e. An ontology for NLP purposes is a body of knowledge about the world (or a domain) that a) is a repository of primitive symbols used in meaning representation; b) organizes these symbols in a tangled subsumption hierarchy; and c) further interconnects these symbols using a rich system of semantic and discourse-pragmatic relations deened among the concepts. In order for such an ontology to become a computational resource for solving problems such as ambiguity and reference resolution, it must be actually constructed, not merely deened formally, as is the …", "title": "" }, { "docid": "b2122d3a8a90d18d3265266e3dead849", "text": "BACKGROUND\nBecause of safety, repeatability, and portability, clinical echocardiography is well established as a standard for cardiac anatomy, cardiac function, and hemodynamics. Similarly, application of echocardiography in commonly used rat experimental models would be worthwhile. The use of noninvasive ultrasound imaging in the rat is a potential replacement for more invasive terminal techniques. 
Although echocardiography has become commonly used in the rat, normal parameters for cardiac anatomy and function, and comparison with established human values, have not been reported.\n\n\nMETHODS\nA total of 44 Sprague-Dawley male rats had baseline echocardiography replicating a protocol for clinical echocardiography.\n\n\nRESULTS\nComplete 2-dimensional echocardiography for cardiac anatomy and function was obtained in 44 rats. Hemodynamic parameters could be recorded in 85% of rats. The ejection fraction and fractional shortening values of the left ventricle were similar to those reported for healthy human beings. Pulsed Doppler velocities of atrial systole for mitral valve inflow, pulmonary vein reversal, and Doppler tissue of the lateral mitral valve annulus also had similar means as healthy human beings. The calculated left ventricular mass was at the same order of magnitude as a proportion of body weight of rat to man. All other observations in the clinical protocol were different from those reported in healthy human beings.\n\n\nCONCLUSION\nThe use of echocardiography for assessment of cardiac anatomy, function, and hemodynamics can be consistently applied to the rat and replicates much of the information used routinely in human echocardiography.", "title": "" }, { "docid": "b3450073ad3d6f2271d6a56fccdc110a", "text": "OBJECTIVE\nMindfulness-based therapies (MBTs) have been shown to be efficacious in treating internally focused psychological disorders (e.g., depression); however, it is still unclear whether MBTs provide improved functioning and symptom relief for individuals with externalizing disorders, including ADHD. To clarify the literature on the effectiveness of MBTs in treating ADHD and to guide future research, an effect-size analysis was conducted.\n\n\nMETHOD\nA systematic review of studies published in PsycINFO, PubMed, and Google Scholar was completed from the earliest available date until December 2014.\n\n\nRESULTS\nA total of 10 studies were included in the analysis of inattention and the overall effect size was d = -.66. A total of nine studies were included in the analysis of hyperactivity/impulsivity and the overall effect was calculated at d = -.53.\n\n\nCONCLUSION\nResults of this study highlight the possible benefits of MBTs in reducing symptoms of ADHD.", "title": "" }, { "docid": "210e040b7562e30f3818e30024c5717a", "text": "The quasi-envelopment of hepatitis A virus (HAV) capsids in exosome-like virions (eHAV) is an important but incompletely understood aspect of the hepatovirus life cycle. This process is driven by recruitment of newly assembled capsids to endosomal vesicles into which they bud to form multivesicular bodies with intraluminal vesicles that are later released at the plasma membrane as eHAV. The endosomal sorting complexes required for transport (ESCRT) are key to this process, as is the ESCRT-III-associated protein, ALIX, which also contributes to membrane budding of conventional enveloped viruses. YPX1or3L late domains in the structural proteins of these viruses mediate interactions with ALIX, and two such domains exist in the HAV VP2 capsid protein. Mutational studies of these domains are confounded by the fact that the Tyr residues (important for interactions of YPX1or3L peptides with ALIX) are required for efficient capsid assembly. However, single Leu-to-Ala substitutions within either VP2 YPX3L motif (L1-A and L2-A mutants) were well tolerated, albeit associated with significantly reduced eHAV release. 
In contrast, simultaneous substitutions in both motifs (L1,2-A) eliminated virus release but did not inhibit assembly of infectious intracellular particles. Immunoprecipitation experiments suggested that the loss of eHAV release was associated with a loss of ALIX recruitment. Collectively, these data indicate that HAV YPX3L motifs function as redundant late domains during quasi-envelopment and viral release. Since these motifs present little solvent-accessible area in the crystal structure of the naked extracellular capsid, the capsid structure may be substantially different during quasi-envelopment.IMPORTANCE Nonlytic release of hepatitis A virus (HAV) as exosome-like quasi-enveloped virions is a unique but incompletely understood aspect of the hepatovirus life cycle. Several lines of evidence indicate that the host protein ALIX is essential for this process. Tandem YPX3L \"late domains\" in the VP2 capsid protein could be sites of interaction with ALIX, but they are not accessible on the surface of an X-ray model of the extracellular capsid, raising doubts about this putative late domain function. Here, we describe YPX3L domain mutants that assemble capsids normally but fail to bind ALIX and be secreted as quasi-enveloped eHAV. Our data support late domain function for the VP2 YPX3L motifs and raise questions about the structure of the HAV capsid prior to and following quasi-envelopment.", "title": "" }, { "docid": "5f42f43bf4f46b821dac3b0d0be2f63a", "text": "The autonomous overtaking maneuver is a valuable technology in unmanned vehicle field. However, overtaking is always perplexed by its security and time cost. Now, an autonomous overtaking decision making method based on deep Q-learning network is proposed in this paper, which employs a deep neural network(DNN) to learn Q function from action chosen to state transition. Based on the trained DNN, appropriate action is adopted in different environments for higher reward state. A series of experiments are performed to verify the effectiveness and robustness of our proposed approach for overtaking decision making based on deep Q-learning method. The results support that our approach achieves better security and lower time cost compared with traditional reinforcement learning methods.", "title": "" }, { "docid": "55c86a30550c83778bfd03fb3a1e2fd3", "text": "Katsuhisa Furuta and Masaki Yamakita Department of Control Engineering Tokyo Institute of Technology 2121 0 h-0 kayama, Meguroku Tokyo 152 Japan Seiichi Kobayashi Nippon Seiko Ltd. 78 Toriba-machi, Maebashi-shi Gunma 371 Japan Inverted pendulums have been used for the verification of designed control systems and control laboratory experiments in control theory education. In particular, the control of a double inverted pendulum has been known as a good example to show the power of the state space approach. The triple pendulum, however, has not been successfully stabilized at the upright position due to the inaccuracy of the modelling and error introduced by the computation. Instead of these stabilizing control, one of the authors has studied the swing-up of a pendulum from the stable hanging state to the upright position. However, the control input was designed a priori, so the transfer of the state of the pendulum was not assured. One of the reasons is due to unmodelled dynamics. This paper presents a new type of pendulum on a rotating arm fixed to a rotating shaft and a swing up control algorithm based on state space setting. 
The control input for swing up is restricted to a bang-bang type, which is determined from the phase plane trajectory. Such state feedback type control gives reliable control for the transfer of the state.", "title": "" }, { "docid": "517d9e98352aa626cecae9e17cbbbc97", "text": "The variational encoder-decoder (VED) encodes source information as a set of random variables using a neural network, which in turn is decoded into target data using another neural network. In natural language processing, sequence-to-sequence (Seq2Seq) models typically serve as encoderdecoder networks. When combined with a traditional (deterministic) attention mechanism, the variational latent space may be bypassed by the attention model, and thus becomes ineffective. In this paper, we propose a variational attention mechanism for VED, where the attention vector is also modeled as Gaussian distributed random variables. Results on two experiments show that, without loss of quality, our proposed method alleviates the bypassing phenomenon as it increases the diversity of generated sentences.1", "title": "" }, { "docid": "d61496b6cb9e323ff907ac51ebb7f4a6", "text": "The reconstruction of a surface model from a point cloud is an important task in the reverse engineering of industrial parts. We aim at constructing a curve network on the point cloud that will define the border of the various surface patches. In this paper, we present an algorithm to extract closed sharp feature lines, which is necessary to create such a closed curve network. We use a first order segmentation to extract candidate feature points and process them as a graph to recover the sharp feature lines. To this end, a minimum spanning tree is constructed and afterwards a reconnection procedure closes the lines. The algorithm is fast and gives good results for real-world point sets from industrial applications. c © 2007 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "70622607a75305882251c073536aa282", "text": "a r t i c l e i n f o", "title": "" }, { "docid": "77da7651b0e924d363c859d926e8c9da", "text": "Manual feedback in basic robot-assisted minimally invasive surgery (RMIS) training can consume a significant amount of time from expert surgeons’ schedule and is prone to subjectivity. In this paper, we explore the usage of different holistic features for automated skill assessment using only robot kinematic data and propose a weighted feature fusion technique for improving score prediction performance. Moreover, we also propose a method for generating ‘task highlights’ which can give surgeons a more directed feedback regarding which segments had the most effect on the final skill score. We perform our experiments on the publicly available JHU-ISI Gesture and Skill Assessment Working Set (JIGSAWS) and evaluate four different types of holistic features from robot kinematic data—sequential motion texture (SMT), discrete Fourier transform (DFT), discrete cosine transform (DCT) and approximate entropy (ApEn). The features are then used for skill classification and exact skill score prediction. Along with using these features individually, we also evaluate the performance using our proposed weighted combination technique. The task highlights are produced using DCT features. Our results demonstrate that these holistic features outperform all previous Hidden Markov Model (HMM)-based state-of-the-art methods for skill classification on the JIGSAWS dataset. 
Also, our proposed feature fusion strategy significantly improves performance for skill score predictions achieving up to 0.61 average spearman correlation coefficient. Moreover, we provide an analysis on how the proposed task highlights can relate to different surgical gestures within a task. Holistic features capturing global information from robot kinematic data can successfully be used for evaluating surgeon skill in basic surgical tasks on the da Vinci robot. Using the framework presented can potentially allow for real-time score feedback in RMIS training and help surgical trainees have more focused training.", "title": "" }, { "docid": "46d239e66c1de735f80312d8458b131d", "text": "Cloud computing is a dynamic, scalable and payper-use distributed computing model empowering designers to convey applications amid job designation and storage distribution. Cloud computing encourages to impart a pool of virtualized computer resource empowering designers to convey applications amid job designation and storage distribution. The cloud computing mainly aims to give proficient access to remote and geographically distributed resources. As cloud technology is evolving day by day and confronts numerous challenges, one of them being uncovered is scheduling. Scheduling is basically a set of constructs constructed to have a controlling hand over the order of work to be performed by a computer system. Algorithms are vital to schedule the jobs for execution. Job scheduling algorithms is one of the most challenging hypothetical problems in the cloud computing domain area. Numerous deep investigations have been carried out in the domain of job scheduling of cloud computing. This paper intends to present the performance comparison analysis of various pre-existing job scheduling algorithms considering various parameters. This paper discusses about cloud computing and its constructs in section (i). In section (ii) job scheduling concept in cloud computing has been elaborated. In section (iii) existing algorithms for job scheduling are discussed, and are compared in a tabulated form with respect to various parameters and lastly section (iv) concludes the paper giving brief summary of the work.", "title": "" }, { "docid": "aee5eb38d6cbcb67de709a30dd37c29a", "text": "Correct disassembly of the HIV-1 capsid shell, called uncoating, is increasingly recognised as central for multiple steps during retroviral replication. However, the timing, localisation and mechanism of uncoating are poorly understood and progress in this area is hampered by difficulties in measuring the process. Previous work suggested that uncoating occurs soon after entry of the viral core into the cell, but recent studies report later uncoating, at or in the nucleus. Furthermore, inhibiting reverse transcription delays uncoating, linking these processes. Here, we have used a combined approach of experimental interrogation of viral mutants and mathematical modelling to investigate the timing of uncoating with respect to reverse transcription. By developing a minimal, testable, model and employing multiple uncoating assays to overcome the disadvantages of each single assay, we find that uncoating is not concomitant with the initiation of reverse transcription. Instead, uncoating appears to be triggered once reverse transcription reaches a certain stage, namely shortly after first strand transfer. Using multiple approaches, we have identified a point during reverse transcription that induces uncoating of the HIV-1 CA shell. 
We propose that uncoating initiates after the first strand transfer of reverse transcription.", "title": "" }, { "docid": "44380ea0107c22d3f6412456f4533482", "text": "Shadow memory is used by dynamic program analysis tools to store metadata for tracking properties of application memory. The efficiency of mapping between application memory and shadow memory has substantial impact on the overall performance of such analysis tools. However, traditional memory mapping schemes that work well on 32-bit architectures cannot easily port to 64-bit architectures due to the much larger 64-bit address space.\n This paper presents EMS64, an efficient memory shadowing scheme for 64-bit architectures. By taking advantage of application reference locality and unused regions in the 64-bit address space, EMS64 provides a fast and flexible memory mapping scheme without relying on any underlying platform features or requiring any specific shadow memory size. Our experiments show that EMS64 is able to reduce the runtime shadow memory translation overhead to 81% on average, which almost halves the overhead of the fastest 64-bit shadow memory system we are aware of.", "title": "" }, { "docid": "1298c933e4907bdd1d3232291eb3d75e", "text": "Over the last decades with the rapid growth of industrial zones, manufacturing plants and the substantial urbanization, environmental pollution has become a crucial health, environmental and safety concern. In particular, due to the increased emissions of various pollutants caused mainly by human sources, the air pollution problem is elevated in such extent where significant measures need to be taken. Towards the identification and the qualification of that problem, we present in this paper an airborne wireless sensor network system for automated monitoring and measuring of the ambient air pollution. Our proposed system is comprised of a pollution-aware wireless sensor network and unmanned aerial vehicles (UAVs). It is designed for monitoring the pollutants and gases of the ambient air in three-dimensional spaces without the human intervention. In regards to the general architecture of our system, we came up with two schemes and algorithms for an autonomous monitoring of a three-dimensional area of interest. To demonstrate our solution, we deployed the system and we conducted experiments in a real environment measuring air pollutants such as: NH3, CH4, CO2, O2 along with temperature, relative humidity and atmospheric pressure. Lastly, we experimentally evaluated and analyzed the two proposed schemes. Copyright © 2015 IFSA Publishing, S. L.", "title": "" }, { "docid": "b6a600ea1c277bc3bf8f2452b8aef3f1", "text": "Fusion of data from multiple sensors can enable robust navigation in varied environments. However, for optimal performance, the sensors must calibrated relative to one another. Full sensor-to-sensor calibration is a spatiotemporal problem: we require an accurate estimate of the relative timing of measurements for each pair of sensors, in addition to the 6-DOF sensor-to-sensor transform. In this paper, we examine the problem of determining the time delays between multiple proprioceptive and exteroceptive sensor data streams. The primary difficultly is that the correspondences between measurements from different sensors are unknown, and hence the delays cannot be computed directly. We instead formulate temporal calibration as a registration task. 
Our algorithm operates by aligning curves in a three-dimensional orientation space, and, as such, can be considered as a variant of Iterative Closest Point (ICP). We present results from simulation studies and from experiments with a PR2 robot, which demonstrate accurate calibration of the time delays between measurements from multiple, heterogeneous sensors.", "title": "" }, { "docid": "31c16e6c916030b8f6e76d56e35d47ef", "text": "Assume that a multi-user multiple-input multiple-output (MIMO) communication system must be designed to cover a given area with maximal energy efficiency (bits/Joule). What are the optimal values for the number of antennas, active users, and transmit power? By using a new model that describes how these three parameters affect the total energy efficiency of the system, this work provides closed-form expressions for their optimal values and interactions. In sharp contrast to common belief, the transmit power is found to increase (not decrease) with the number of antennas. This implies that energy efficient systems can operate at high signal-to-noise ratio (SNR) regimes in which the use of interference-suppressing precoding schemes is essential. Numerical results show that the maximal energy efficiency is achieved by a massive MIMO setup wherein hundreds of antennas are deployed to serve relatively many users using interference-suppressing regularized zero-forcing precoding.", "title": "" }, { "docid": "3169a294b91fffeea4479fb3c1baa6eb", "text": "An ultra-wideband (UWB) compact slot antenna with a directional radiation pattern is presented in this communication. The concept is based on dielectric loaded multi-element slot antennas. Wide bandwidth operation is achieved using a driven wide slot antenna fed via an off-centered microstrip line capable of creating a fictitious short along the slot and a number of parasitic antenna elements. The proposed slot antenna uses a graded index superstrate with tapered dielectric constants, from high index to low index , in order to further improve the bandwidth and achieve directional radiation pattern. The superstrate dimensions are carefully chosen so that a dielectric resonator mode is excited to provide radiation at the lowest frequency. A sensitivity study is carried out to optimize the geometric parameters of the slot antennas and the graded index superstrate in order to achieve the maximum bandwidth as well as an unidirectional and frequency invariant radiation pattern. Through this optimization, a compact antenna is designed, fabricated, and tested to show a VSWR value of lower than 2.5 across a 2.9:1 frequency range whose dimensions are 0.27 λ×0.2 λ×0.068 λ at the lowest frequency of operation.", "title": "" } ]
scidocsrr
c9d53da4f87f26db43c2777fd2e3d2d2
Appropriation of Information Systems: Using Cognitive Mapping for Eliciting Users' Sensemaking
[ { "docid": "9c5535f218f6228ba6b2a8e5fdf93371", "text": "Recent analyses of organizational change suggest a growing concern with the tempo of change, understood as the characteristic rate, rhythm, or pattern of work or activity. Episodic change is contrasted with continuous change on the basis of implied metaphors of organizing, analytic frameworks, ideal organizations, intervention theories, and roles for change agents. Episodic change follows the sequence unfreeze-transition-refreeze, whereas continuous change follows the sequence freeze-rebalance-unfreeze. Conceptualizations of inertia are seen to underlie the choice to view change as episodic or continuous.", "title": "" } ]
[ { "docid": "7fe82f7231235ce6d4b16ec103130156", "text": "Autonomous grasping of household objects is one of the major skills that an intelligent service robot necessarily has to provide in order to interact with the environment. In this paper, we propose a grasping strategy for known objects, comprising an off-line, box-based grasp generation technique on 3D shape representations. The complete system is able to robustly detect an object and estimate its pose, flexibly generate grasp hypotheses from the assigned model and perform such hypotheses using visual servoing. We will present experiments implemented on the humanoid platform ARMAR-III.", "title": "" }, { "docid": "fa4653a3d762bae45cd17488ea4c286e", "text": "Now-a-days many researchers work on mining a content posted in natural language at different forums, blogs or social networking sites. Sentiment analysis is rapidly expanding topic with various applications. Previously a person collect response from any relatives previous to procuring an object, but today look is different, now person get reviews of many people on all sides of world. Blogs, e-commerce sites data consists number of implications, that expressing user opinions about specific object. Such data is pre-processed then classified into classes as positive, negative and irrelevant. Sentiment analysis allows us to determine view of public or general users feeling about any object. Two global techniques are used: Supervised Machine-Learning and Unsupervised machine-learning methods. In unsupervised learning use a lexicon with words scored for polarity values such as neutral, positive or negative. Whereas supervised methods require a training set of texts with manually assigned polarity values. This suggest one direction is make use of Fuzzy logic for sentiment analysis which may improve analysis results.", "title": "" }, { "docid": "29df7892b16864cb3721a05886bbcc82", "text": "With the rapid growth of the cyber attacks, sharing of cyber threat intelligence (CTI) becomes essential to identify and respond to cyber attack in timely and cost-effective manner. However, with the lack of standard languages and automated analytics of cyber threat information, analyzing complex and unstructured text of CTI reports is extremely time- and labor-consuming. Without addressing this challenge, CTI sharing will be highly impractical, and attack uncertainty and time-to-defend will continue to increase.\n Considering the high volume and speed of CTI sharing, our aim in this paper is to develop automated and context-aware analytics of cyber threat intelligence to accurately learn attack pattern (TTPs) from commonly available CTI sources in order to timely implement cyber defense actions. Our paper has three key contributions. First, it presents a novel threat-action ontology that is sufficiently rich to understand the specifications and context of malicious actions. Second, we developed a novel text mining approach that combines enhanced techniques of Natural Language Processing (NLP) and Information retrieval (IR) to extract threat actions based on semantic (rather than syntactic) relationship. Third, our CTI analysis can construct a complete attack pattern by mapping each threat action to the appropriate techniques, tactics and kill chain phases, and translating it any threat sharing standards, such as STIX 2.1. Our CTI analytic techniques were implemented in a tool, called TTPDrill, and evaluated using a randomly selected set of Symantec Threat Reports. 
Our evaluation tests show that TTPDrill achieves more than 82% of precision and recall in a variety of measures, very reasonable for this problem domain.", "title": "" }, { "docid": "f0ec66a9054c086e4141cb95995f5f68", "text": "We present a simple hierarchical Bayesian approach to the modeling collections of texts and other large-scale data collections. For text collections, we posit that a document is generated by choosing a random set of multinomial probabilities for a set of possible “topics,” and then repeatedly generating words by sampling from the topic mixture. This model is intractable for exact probabilistic inference, but approximate posterior probabilities and marginal likelihoods can be obtained via fast variational methods. We also present extensions to coupled models for joint text/image data and multiresolution models for topic hierarchies.", "title": "" }, { "docid": "d8b8aeb2cb7f2dd29af1c0363b31dfef", "text": "As cloud computing becomes prevalent, more and more sensitive data is being centralized into the cloud for sharing, which brings forth new challenges for outsourced data security and privacy. Attributebased encryption (ABE) is a promising cryptographic primitive, which has been widely applied to design fine-grained access control system recently. However, ABE is being criticized for its high scheme overhead as the computational cost grows with the complexity of the access formula. This disadvantage becomes more serious for mobile devices because they have constrained computing resources. Aiming at tackling the challenge above, we present a generic and efficient solution to implement attribute-based access control system by introducing secure outsourcing techniques into ABE. More precisely, two cloud service providers (CSPs), namely key generation-cloud service provider (KG-CSP) and decryption-cloud service provider (D-CSP) are introduced to perform the outsourced key-issuing and decryption on behalf of attribute authority and users respectively. In order to outsource heavy computation to both CSPs without private information leakage, we formulize an underlying primitive called outsourced ABE (OABE) and propose several constructions with outsourced decryption and keyissuing. Finally, extensive experiment demonstrates that with the help of KG-CSP and D-CSP, efficient key-issuing and decryption are achieved in our constructions.", "title": "" }, { "docid": "2d6d33cbbf69cc864c2a65c30f60e5ec", "text": "This article provides a framework for actuaries to think about cyber risk. We propose a differentiated view on cyber versus conventional risk by separating the nature of risk arrival from the target exposed to risk. Our review synthesizes the literature on cyber risk analysis from various disciplines, including computer and network engineering, economics, and actuarial sciences. As a result, we identify possible ways forward to improve rigorous modeling of cyber risk, including its driving factors. This is a prerequisite for establishing a deep and stable market for cyber risk insurance.", "title": "" }, { "docid": "b4dcc5c36c86f9b1fef32839d3a1484d", "text": "The popular Disney Princess line includes nine films (e.g., Snow White, Beauty and the Beast) and over 25,000 marketable products. Gender role depictions of the prince and princess characters were examined with a focus on their behavioral characteristics and climactic outcomes in the films. 
Results suggest that the prince and princess characters differ in their portrayal of traditionally masculine and feminine characteristics, these gender role portrayals are complex, and trends towards egalitarian gender roles are not linear over time. Content coding analyses demonstrate that all of the movies portray some stereotypical representations of gender, including the most recent film, The Princess and the Frog. Although both the male and female roles have changed over time in the Disney Princess line, the male characters exhibit more androgyny throughout and less change in their gender role portrayals.", "title": "" }, { "docid": "6097315ac2e4475e8afd8919d390babf", "text": "This paper presents an origami-inspired technique which allows the application of 2-D fabrication methods to build 3-D robotic systems. The ability to design robots as origami structures introduces a fast and low-cost fabrication method to modern, real-world robotic applications. We employ laser-machined origami patterns to build a new class of robotic systems for mobility and manipulation. Origami robots use only a flat sheet as the base structure for building complicated bodies. An arbitrarily complex folding pattern can be used to yield an array of functionalities, in the form of actuated hinges or active spring elements. For actuation, we use compact NiTi coil actuators placed on the body to move parts of the structure on-demand. We demonstrate, as a proof-of-concept case study, the end-to-end fabrication and assembly of a simple mobile robot that can undergo worm-like peristaltic locomotion.", "title": "" }, { "docid": "8ee1916587b3f264093048557bf3b05f", "text": "This paper proposes a strategy of semantic processing implemented in an Indonesian text understanding evaluation system. It uses component that already developed in Institut Teknologi Bandung consists of POS Tagger and Syntactic Parser. This research used syntax-driven semantic analysis technique by attaching semantic rules into associated syntactic rule. Semantic rule is used as base of λ-reduction operation, used to produce sentence's knowledge representation in first order logic's form. There is also semantic lexical attachment process, used to give a meaning of each words or phrases in sentence. Semantic processor is implemented in Indonesian text-understanding evaluation system. It has ability to evaluate a reader's understanding of a text by comparing the reader's understanding with computer's understanding. This system consists of text understanding module, question generation module, and understanding evaluation module. Text-understanding module is developed using semantic processor with addition of reference resolution to relate each sentences in text.", "title": "" }, { "docid": "b69f7c0db77c3012ae5e550b23a313fb", "text": "Speckle noise is an inherent property of medical ultrasound imaging, and it generally tends to reduce the image resolution and contrast, thereby reducing the diagnostic value of this imaging modality. As a result, speckle noise reduction is an important prerequisite, whenever ultrasound imaging is used for tissue characterization. Among the many methods that have been proposed to perform this task, there exists a class of approaches that use a multiplicative model of speckled image formation and take advantage of the logarithmical transformation in order to convert multiplicative speckle noise into additive noise. 
The common assumption made in a dominant number of such studies is that the samples of the additive noise are mutually uncorrelated and obey a Gaussian distribution. The present study shows conceptually and experimentally that this assumption is oversimplified and unnatural. Moreover, it may lead to inadequate performance of the speckle reduction methods. The study introduces a simple preprocessing procedure, which modifies the acquired radio-frequency images (without affecting the anatomical information they contain), so that the noise in the log-transformation domain becomes very close in its behavior to a white Gaussian noise. As a result, the preprocessing allows filtering methods based on assuming the noise to be white and Gaussian, to perform in nearly optimal conditions. The study evaluates performances of three different, nonlinear filters - wavelet denoising, total variation filtering, and anisotropic diffusion - and demonstrates that, in all these cases, the proposed preprocessing significantly improves the quality of resultant images. Our numerical tests include a series of computer-simulated and in vivo experiments.", "title": "" }, { "docid": "1d1eeb2f5a16fd8e1deed16a5839505b", "text": "Searchable symmetric encryption (SSE) is a widely popular cryptographic technique that supports the search functionality over encrypted data on the cloud. Despite the usefulness, however, most of existing SSE schemes leak the search pattern, from which an adversary is able to tell whether two queries are for the same keyword. In recent years, it has been shown that the search pattern leakage can be exploited to launch attacks to compromise the confidentiality of the client’s queried keywords. In this paper, we present a new SSE scheme which enables the client to search encrypted cloud data without disclosing the search pattern. Our scheme uniquely bridges together the advanced cryptographic techniques of chameleon hashing and indistinguishability obfuscation. In our scheme, the secure search tokens for plaintext keywords are generated in a randomized manner, so it is infeasible to tell whether the underlying plaintext keywords are the same given two secure search tokens. In this way, our scheme well avoids using deterministic secure search tokens, which is the root cause of the search pattern leakage. We provide rigorous security proofs to justify the security strengths of our scheme. In addition, we also conduct extensive experiments to demonstrate the performance. Although our scheme for the time being is not immediately applicable due to the current inefficiency of indistinguishability obfuscation, we are aware that research endeavors on making indistinguishability obfuscation practical is actively ongoing and the practical efficiency improvement of indistinguishability obfuscation will directly lead to the applicability of our scheme. Our paper is a new attempt that pushes forward the research on SSE with concealed search pattern.", "title": "" }, { "docid": "d8a98ed672f362fd75c644badbe69c5c", "text": "BACKGROUND\nTrichomoniasis vaginalis is now an important worldwide health problem. Metronidazole has so far been used in treatment, but the metronidazole-resistant strains and unpleasant adverse effects have been de-veloped. Myrrh is one of the oldest known medicinal plants used by the ancient Egyptians for medical purposes and for mummification. Commiphora molmol (Myrrh) proved safe for male reproductive organ which is the main habitat of T. 
vaginalis and this study aims to evaluate the efficacy of the herbal against T. vaginalis in females.\n\n\nMETHODS\nIn the present study, 33 metronidazole-resistant T. vaginalis females were treated with a combined course of metronidazole and tinidazole. Those still resistant to the combined treatment were given C. molmol. Also, natural plant extract purified from pomegranate (Punica granatum, Roman) was in-vitro investigated for its efficacy against T. vaginalis on Diamond media.\n\n\nRESULTS\nThe anti-T. vaginalis activity of both P. granatum (in-vitro) and C. molmol (in-vivo) extracts gave promising results.\n\n\nCONCLUSION\nThe anti-T. vaginalis activity of P. granatum and C. molmol showed promising results, indicating sources of new anti-Trichomonas agents.", "title": "" }, { "docid": "55b2793f637d33e615ceb874d8923810", "text": "A Cr3+ and F- composite-doped LiNi0.5Mn1.5O4 cathode material was synthesized by the solid-state method, and the influence of the doping amount on the material's physical and electrochemical properties was investigated. The structure and morphology of the cathode material were characterized by XRD, SEM, TEM, and HRTEM, and the results revealed that the sample exhibited clear spinel features. No Cr3+ and F- impurity phases were found, and the spinel structure became more stable. The results of the charge/discharge tests, cyclic voltammetry (CV), and electrochemical impedance spectroscopy (EIS) suggested that LiCr0.05Ni0.475Mn1.475O3.95F0.05, in which the Cr3+ and F- doping amounts were both 0.05, had the optimal electrochemical properties, with discharge rates of 0.1, 0.5, 2, 5, and 10 C and specific capacities of 134.18, 128.70, 123.62, 119.63, and 97.68 mAh g-1, respectively. After 50 cycles at a rate of 2 C, LiCr0.05Ni0.475Mn1.475O3.95F0.05 showed extremely good cycling performance, with a discharge specific capacity of 121.02 mAh g-1 and a capacity retention rate of 97.9%. The EIS tests revealed that the doping clearly decreased the charge-transfer resistance.", "title": "" }, { "docid": "8b083f91bf76942255ca71ddb9a6c841", "text": "This paper proposes an external rotor Permanent Magnet assisted Synchronous Reluctance Motor (PMa-SynRM). Due to the high material cost of PM machines, there has been a continuous effort to reduce the amount of permanent magnets (PM) used in manufacturing an electric machine. The PMa-SynRM greatly reduces PM usage by hybridizing the architectures of the Synchronous Reluctance Motor and the Permanent Magnet Motor. To further reduce PM usage and further increase torque density, the design of the PMa-SynRM has been investigated through an external rotor architecture. To the best of the authors' knowledge, the external rotor has not been properly researched despite its great advantages and design flexibility with the PMa-SynRM architecture. An analytical comparison between the two motor types (internal and external) has been carried out to demonstrate the superior design flexibility and performance of the proposed method. Performance characteristics such as the average torque developed and the variation of torque with speed have been analyzed by finite element analysis (FEA) to validate the proposed design.
A lower torque ripple model with distributed winding structure has been modeled and its FEA simulation results for developed torque have been compared with the concentrated type external rotor PMa-SynRM model.", "title": "" }, { "docid": "f14f6d95f13ca6f92fe14c59e3ad0c81", "text": "The ever-increasing representativeness of software maintenance in the daily effort of software team requires initiatives for enhancing the activities accomplished to provide a good service for users who request a software improvement. This article presents a quantitative approach for evaluating software maintenance services based on cluster analysis techniques. The proposed approach provides a compact characterization of the services delivered by a maintenance organization, including characteristics such as service, waiting, and queue time. The ultimate goal is to help organizations to better understand, manage, and improve their current software maintenance process. We also report in this paper the usage of the proposed approach in a medium-sized organization throughout 2010. This case study shows that 72 software maintenance requests can be grouped in seven distinct clusters containing requests with similar characteristics. The in-depth analysis of the clusters found with our approach can foster the understanding of the nature of the requests and, consequently, it may improve the process followed by the software maintenance team.", "title": "" }, { "docid": "52d6711ebbafd94ab5404e637db80650", "text": "At present, designing convolutional neural network (CNN) architectures requires both human expertise and labor. New architectures are handcrafted by careful experimentation or modified from a handful of existing networks. We introduce MetaQNN, a meta-modeling algorithm based on reinforcement learning to automatically generate high-performing CNN architectures for a given learning task. The learning agent is trained to sequentially choose CNN layers using Qlearning with an -greedy exploration strategy and experience replay. The agent explores a large but finite space of possible architectures and iteratively discovers designs with improved performance on the learning task. On image classification benchmarks, the agent-designed networks (consisting of only standard convolution, pooling, and fully-connected layers) beat existing networks designed with the same layer types and are competitive against the state-of-the-art methods that use more complex layer types. We also outperform existing meta-modeling approaches for network design on image classification tasks.", "title": "" }, { "docid": "2e7bc1cc2f4be94ad0e4bce072a9f98a", "text": "Glycosylation plays an important role in ensuring the proper structure and function of most biotherapeutic proteins. Even small changes in glycan composition, structure, or location can have a drastic impact on drug safety and efficacy. Recently, glycosylation has become the subject of increased focus as biopharmaceutical companies rush to create not only biosimilars, but also biobetters based on existing biotherapeutic proteins. Against this backdrop of ongoing biopharmaceutical innovation, updated methods for accurate and detailed analysis of protein glycosylation are critical for biopharmaceutical companies and government regulatory agencies alike. 
This review summarizes current methods of characterizing biopharmaceutical glycosylation, including compositional mass profiling, isomer-specific profiling and structural elucidation by MS and hyphenated techniques.", "title": "" }, { "docid": "494388072f3d7a62d00c5f3b5ad7a514", "text": "Recent years have seen an increasing interest in providing accurate prediction models for electrical energy consumption. In Smart Grids, energy consumption optimization is critical to enhance power grid reliability, and avoid supply-demand mismatches. Utilities rely on real-time power consumption data from individual customers in their service area to forecast the future demand and initiate energy curtailment programs. Currently however, little is known about the differences in consumption characteristics of various customer types, and their impact on the prediction method’s accuracy. While many studies have concentrated on aggregate loads, showing that accurate consumption prediction at the building level can be achieved, there is a lack of results regarding individual customers consumption prediction. In this study, we perform an empirical quantitative evaluation of various prediction methods of kWh energy consumption of two distinct customer types: 1) small, highly variable individual customers, and 2) aggregated, more stable consumption at the building level. We show that prediction accuracy heavily depends on customer type. Contrary to previous studies, we consider the consumption data granularity to be very small (i.e., 15-min interval), and focus on very short term predictions (next few hours). As Smart Grids move closer to dynamic curtailment programs, which enables demand response (DR) events not only on weekdays, but also during weekends, existing DR strategies prove to be inadequate. Here, we relax the constraint of workdays, and include weekends, where ISO models consistently under perform. Nonetheless, we show that simple ISO baselines, and short-term Time Series, which only depend on recent historical data, achieve superior prediction accuracy. This result suggests that large amounts of historical training data are not required, rather they should be avoided.", "title": "" }, { "docid": "9db664f2c379dd9a8b42356fc98ebe74", "text": "In this paper a doublet consisting of a pair of mushroom-shaped posts transversally positioned into an evanescent waveguide is presented. An ultra-compact second-order filter having a response with two transmission zeros has then been designed, manufactured and measured. Additive Manufacturing (AM) technology based on stereolithography (SLA) has been exploited for the filter manufacturing. The resulting structure shows excellent response thanks to the tight tolerances obtained in the AM process and to the very good surface conductivity obtained in the metallization process.", "title": "" }, { "docid": "2710d644a45697cdd3abd1286218d060", "text": "Significant ongoing debate exists amongst stakeholders as to the best front-of-pack labelling approach and emerging evidence suggests that the plethora of schemes may cause confusion for the consumer. To gain a better understanding of the relevant psychological phenomena and consumer perspectives surrounding FoP labelling schemes and their optimal development a Multiple Sort Procedure study involving free sorting of a range of nutritional labels presented on cards was performed in four countries (n=60). The underlying structure of the qualitative data generated was explored using Multiple Scalogram Analysis. 
Elicitation of categorisations from consumers has the potential to provide a very important perspective in this arena and results demonstrated that the amount of information contained within a nutrition label has high salience for consumers, as does the health utility of the label although a dichotomy exists in the affective evaluation of the labels containing varying degrees of information aggregation. Classification of exiting front-of-pack labelling systems on a proposed dimension of 'directiveness' leads to a better understanding of why some schemes may be more effective than others in particular situations or for particular consumers. Based on this research an enhanced hypothetical front-of-pack labelling scheme which combines both directive and non-directive elements is proposed.", "title": "" } ]
scidocsrr
4a442add6af0e808e9b894755b063413
Cancer subtype identification using deep learning approach
[ { "docid": "17f719b2bfe2057141e367afe39d7b28", "text": "Identification of cancer subtypes plays an important role in revealing useful insights into disease pathogenesis and advancing personalized therapy. The recent development of high-throughput sequencing technologies has enabled the rapid collection of multi-platform genomic data (e.g., gene expression, miRNA expression, and DNA methylation) for the same set of tumor samples. Although numerous integrative clustering approaches have been developed to analyze cancer data, few of them are particularly designed to exploit both deep intrinsic statistical properties of each input modality and complex cross-modality correlations among multi-platform input data. In this paper, we propose a new machine learning model, called multimodal deep belief network (DBN), to cluster cancer patients from multi-platform observation data. In our integrative clustering framework, relationships among inherent features of each single modality are first encoded into multiple layers of hidden variables, and then a joint latent model is employed to fuse common features derived from multiple input modalities. A practical learning algorithm, called contrastive divergence (CD), is applied to infer the parameters of our multimodal DBN model in an unsupervised manner. Tests on two available cancer datasets show that our integrative data analysis approach can effectively extract a unified representation of latent features to capture both intra- and cross-modality correlations, and identify meaningful disease subtypes from multi-platform cancer data. In addition, our approach can identify key genes and miRNAs that may play distinct roles in the pathogenesis of different cancer subtypes. Among those key miRNAs, we found that the expression level of miR-29a is highly correlated with survival time in ovarian cancer patients. These results indicate that our multimodal DBN based data analysis approach may have practical applications in cancer pathogenesis studies and provide useful guidelines for personalized cancer therapy.", "title": "" } ]
[ { "docid": "398b72faa5922bd7af153f055c6344b5", "text": "As a key component of a plug-in hybrid electric vehicle (PHEV) charger system, the front-end ac-dc converter must achieve high efficiency and power density. This paper presents a topology survey evaluating topologies for use in front end ac-dc converters for PHEV battery chargers. The topology survey is focused on several boost power factor corrected converters, which offer high efficiency, high power factor, high density, and low cost. Experimental results are presented and interpreted for five prototype converters, converting universal ac input voltage to 400 V dc. The results demonstrate that the phase shifted semi-bridgeless PFC boost converter is ideally suited for automotive level I residential charging applications in North America, where the typical supply is limited to 120 V and 1.44 kVA or 1.92 kVA. For automotive level II residential charging applications in North America and Europe the bridgeless interleaved PFC boost converter is an ideal topology candidate for typical supplies of 240 V, with power levels of 3.3 kW, 5 kW, and 6.6 kW.", "title": "" }, { "docid": "068386a089895bed3a7aebf2d1a7b35d", "text": "The purpose of this prospective study was to assess the efficacy of the Gertzbein classification and the Load Shearing classification in the conservative treatment of thoracolumbar burst spinal fractures. From 1997 to 1999, 30 consecutive patients with single-level thoracolumbar spinal injury with no neurological impairment were classified according to the Gertzbein classification and the Load Shearing scoring, and were treated conservatively. A custom-made thoracolumbosacral orthosis was worn in all patients for 6 months and several radiologic parameters were evaluated, while the Denis Pain and Work Scale were used to assess the clinical outcome. The average follow-up period was 24 months (range 12–39 months). During this period radiograms showed no improvement of any radiologic parameter. However, the clinical outcome was satisfactory in 28 of 30 patients with neither pseudarthrosis, nor any complications recorded on completion of treatment. This study showed that thoracolumbar burst fractures Gertzbein A3 with a load shearing score 6 or less can be successfully treated conservatively. Patient selection is a fundamental component in clinical success for these classification systems. Cette étude a pour objectif de classer les fractures comminutives du segment thoraco-lombaire de la colonne vertébrale qui ont été traitées de manière conservatrice, conformément à la classification de Gertzbein et à la classification de la répartition des contraintes. Depuis 1997 à 1999, trente malades présentant une fracture comminutive dans le segment thoraco-lombaire de la colonne vertébrale, sans dommages neurologiques, ont été traités de manière conservatoire, conformément aux classifications de Gertzbein et à la notation de la répartition des charges. Les patients ont porté une orthèse thoraco-lombaire pendant 6 mois et on a procédé à une évaluation des paramètres radiographiques. L'échelle de la douleur et du travail de Dennis a été utilisée pour évaluer les résultats. La durée moyenne d'observation des malades a été de 24 mois (de 12 à 39 mois). Bien que les paramètres radiologiques, pendant cette période, n'aient manifesté aucune amélioration, le résultat clinique de ces patients a été satisfaisant pour 93.33% d' entre eux. L'on n'a pas constaté de complications ni de pseudarthroses. 
La classification de Gertzbein associe le type de fracture au degré d'instabilité mécanique et au dommage neurologique. La classification de la répartition des contraintes relie l'écrasement et le déplacement de la fracture à la stabilité mécanique. Les fractures explosives du segment lombaire de la colonne vertébrale de type A3, selon Gertzbein, degré 6 ou inférieur à 6, selon la classification des contraintes, peuvent être traitées avec succès de manière conservatrice. Le choix judicieux des patients est important pour le succès clinique de cette méthode de classification.", "title": "" }, { "docid": "701c8ec9debc34937c430c9b81151b82", "text": "Many scientists and researchers have been considering Support Vector Machines (SVMs) as one of the most powerful and robust algorithm in machine learning. For this reason, they have been used in many fields, such as pattern recognition, image processing, robotics, and many others. Since their appearance in 1995, from an idea of Vladimir Vapnik, bioinformatics community started to use this new technique to solve the most common classification and clustering problems in the biomolecular domain. In this document, we first give a general description of Support Vector Machine technique, a technique based on the statistical learning theory (Section 1). Then we provide a survey of the many applications of the algorithm in the bioinformatics domain (Section 2). Finally, we report a short list of SVM implementation codes available on the internet (Section 3). About this survey This document is freely available and can be download from http://www.DavideChicco.it author’s website. Alessandro Lazaric (INRIA, Lille, France, EU) kindly supervised and corrected this document before publication.", "title": "" }, { "docid": "b5b91947716e3594e3ddbb300ea80d36", "text": "In this paper, a novel drive method, which is different from the traditional motor drive techniques, for high-speed brushless DC (BLDC) motor is proposed and verified by a series of experiments. It is well known that the BLDC motor can be driven by either pulse-width modulation (PWM) techniques with a constant dc-link voltage or pulse-amplitude modulation (PAM) techniques with an adjustable dc-link voltage. However, to our best knowledge, there is rare study providing a proper drive method for a high-speed BLDC motor with a large power over a wide speed range. Therefore, the detailed theoretical analysis comparison of the PWM control and the PAM control for high-speed BLDC motor is first given. Then, a conclusion that the PAM control is superior to the PWM control at high speed is obtained because of decreasing the commutation delay and high-frequency harmonic wave. Meanwhile, a new high-speed BLDC motor drive method based on the hybrid approach combining PWM and PAM is proposed. Finally, the feasibility and effectiveness of the performance analysis comparison and the new drive method are verified by several experiments.", "title": "" }, { "docid": "32c068c8341ae0ff12556050bb8f526d", "text": "In this paper, we assess the challenges for multi-domain, multi-lingual question answering, create necessary resources for benchmarking and develop a baseline model. We curate 500 articles in six different domains from the web. These articles form a comparable corpora of 250 English documents and 250 Hindi documents. From these comparable corpora, we have created 5, 495 question-answer pairs with the questions and answers, both being in English and Hindi. 
The question can be both factoid or short descriptive types. The answers are categorized in 6 coarse and 63 finer types. To the best of our knowledge, this is the very first attempt towards creating multi-domain, multi-lingual question answering evaluation involving English and Hindi. We develop a deep learning based model for classifying an input question into the coarse and finer categories depending upon the expected answer. Answers are extracted through similarity computation and subsequent ranking. For factoid question, we obtain an MRR value of 49.10% and for short descriptive question, we obtain a BLEU score of 41.37%. Evaluation of question classification model shows the accuracies of 90.12% and 80.30% for coarse and finer classes, respectively.", "title": "" }, { "docid": "eced59d8ec159f3127e7d2aeca76da96", "text": "Mano-a-Mano is a unique spatial augmented reality system that combines dynamic projection mapping, multiple perspective views and device-less interaction to support face to face, or dyadic, interaction with 3D virtual objects. Its main advantage over more traditional AR approaches, such as handheld devices with composited graphics or see-through head worn displays, is that users are able to interact with 3D virtual objects and each other without cumbersome devices that obstruct face to face interaction. We detail our prototype system and a number of interactive experiences. We present an initial user experiment that shows that participants are able to deduce the size and distance of a virtual projected object. A second experiment shows that participants are able to infer which of a number of targets the other user indicates by pointing.", "title": "" }, { "docid": "d495f9ae71492df9225249147563a3d9", "text": "The control of a PWM rectifier with LCL-filter using a minimum number of sensors is analyzed. In addition to the DC-link voltage either the converter or line current is measured. Two different ways of current control are shown, analyzed and compared by simulations as well as experimental investigations. Main focus is spent on active damping of the LCL filter resonance and on robustness against line inductance variations.", "title": "" }, { "docid": "b986dfc42547b64dd2ed0f86cd4e203d", "text": "A deep learning approach to reinforcement learning led to a general learner able to train on visual input to play a variety of arcade games at the human and superhuman levels. Its creators at the Google DeepMind’s team called the approach: Deep Q-Network (DQN). We present an extension of DQN by “soft” and “hard” attention mechanisms. Tests of the proposed Deep Attention Recurrent Q-Network (DARQN) algorithm on multiple Atari 2600 games show level of performance superior to that of DQN. Moreover, built-in attention mechanisms allow a direct online monitoring of the training process by highlighting the regions of the game screen the agent is focusing on when making decisions.", "title": "" }, { "docid": "b3e32f77fde76eba0adfccdc6878a0f3", "text": "The paper describes a work in progress on humorous response generation for short-text conversation using information retrieval approach. We gathered a large collection of funny tweets and implemented three baseline retrieval models: BM25, the query term reweighting model based on syntactic parsing and named entity recognition, and the doc2vec similarity model. We evaluated these models in two ways: in situ on a popular community question answering platform and in laboratory settings. 
The approach proved to be promising: even simple search techniques demonstrated satisfactory performance. The collection, test questions, evaluation protocol, and assessors’ judgments create a ground for future research towards more sophisticated models.", "title": "" }, { "docid": "39fc7b710a6d8b0fdbc568b48221de5d", "text": "The framework of cognitive wireless networks is expected to endow the wireless devices with the cognition-intelligence ability with which they can efficiently learn and respond to the dynamic wireless environment. In many practical scenarios, the complexity of network dynamics makes it difficult to determine the network evolution model in advance. Thus, the wireless decision-making entities may face a black-box network control problem and the model-based network management mechanisms will be no longer applicable. In contrast, model-free learning enables the decision-making entities to adapt their behaviors based on the reinforcement from their interaction with the environment and (implicitly) build their understanding of the system from scratch through trial-and-error. Such characteristics are highly in accordance with the requirement of cognition-based intelligence for devices in cognitive wireless networks. Therefore, model-free learning has been considered as one key implementation approach to adaptive, self-organized network control in cognitive wireless networks. In this paper, we provide a comprehensive survey on the applications of the state-of-the-art model-free learning mechanisms in cognitive wireless networks. According to the system models on which those applications are based, a systematic overview of the learning algorithms in the domains of single-agent system, multiagent systems, and multiplayer games is provided. The applications of model-free learning to various problems in cognitive wireless networks are discussed with the focus on how the learning mechanisms help to provide the solutions to these problems and improve the network performance over the model-based, non-adaptive methods. Finally, a broad spectrum of challenges and open issues is discussed to offer a guideline for the future research directions.", "title": "" }, { "docid": "63bf62c5c0027980958a90481a18d642", "text": "Spiking neural network simulators provide environments in which to implement and experiment with models of biological brain structures. Simulating large-scale models is computationally expensive, however, due to the number and interconnectedness of neurons in the brain. Furthermore, where such simulations are used in an embodied setting, the simulation must be real-time in order to be useful. In this paper we present a platform (nemo) for such simulations which achieves high performance on parallel commodity hardware in the form of graphics processing units (GPUs). This work makes use of the Izhikevich neuron model which provides a range of realistic spiking dynamics while being computationally efficient. Learning is facilitated through spike-timing dependent synaptic plasticity. Our GPU kernel can deliver up to 550 million spikes per second using a single device. This corresponds to a real-time simulation of around 55 000 neurons under biologically plausible conditions with 1000 synapses per neuron and a mean firing rate of 10 Hz.", "title": "" }, { "docid": "62f67cf8f628be029ce748121ff52c42", "text": "This paper reviews interface design of web pages for e-commerce. Different tasks in e-commerce are contrasted. 
A systems model is used to illustrate the information flow between three subsystems in e-commerce: store environment, customer, and web technology. A customer makes several decisions: to enter the store, to navigate, to purchase, to pay, and to keep the merchandize. This artificial environment must be designed so that it can support customer decision-making. To retain customers it must be pleasing and fun, and create a task with natural flow. Customers have different needs, competence and motivation, which affect decision-making. It may therefore be important to customize the design of the e-store environment. Future ergonomics research will have to investigate perceptual aspects, such as presentation of merchandize, and cognitive issues, such as product search and navigation, as well as decision making while considering various economic parameters. Five theories on e-commerce research are presented.", "title": "" }, { "docid": "d469d31d26d8bc07b9d8dfa8ce277e47", "text": "BACKGROUND/PURPOSE\nMorbidity in children treated with appendicitis results either from late diagnosis or negative appendectomy. A Prospective analysis of efficacy of Pediatric Appendicitis Score for early diagnosis of appendicitis in children was conducted.\n\n\nMETHODS\nIn the last 5 years, 1,170 children aged 4 to 15 years with abdominal pain suggestive of acute appendicitis were evaluated prospectively. Group 1 (734) were patients with appendicitis and group 2 (436) nonappendicitis. Multiple linear logistic regression analysis of all clinical and investigative parameters was performed for a model comprising 8 variables to form a diagnostic score.\n\n\nRESULTS\nLogistic regression analysis yielded a model comprising 8 variables, all statistically significant, P <.001. These variables in order of their diagnostic index were (1) cough/percussion/hopping tenderness in the right lower quadrant of the abdomen (0.96), (2) anorexia (0.88), (3) pyrexia (0.87), (4) nausea/emesis (0.86), (5) tenderness over the right iliac fossa (0.84), (6) leukocytosis (0.81), (7) polymorphonuclear neutrophilia (0.80) and (8) migration of pain (0.80). Each of these variables was assigned a score of 1, except for physical signs (1 and 5), which were scored 2 to obtain a total of 10. The Pediatric Appendicitis Score had a sensitivity of 1, specificity of 0.92, positive predictive value of 0.96, and negative predictive value of 0.99.\n\n\nCONCLUSION\nPediatric appendicitis score is a simple, relatively accurate diagnostic tool for accessing an acute abdomen and diagnosing appendicitis in children.", "title": "" }, { "docid": "a9d437179b45e17629b98900764f03a0", "text": "— The purpose of this work is to introduce and study the notion of spherical vectors, which we can consider as a natural generalization of the arguments of complex numbers in the case of quaternions. After having established some elementary properties of these particular vectors, we show by transport of structure that spherical vectors form a non-abelian additive group, isomorphic to the group of unit quaternions. In general and by concrete examples, this identification allows us, first, to present a new polar form of the quaternions, then to represent the unit quaternions on the unit sphere of R and to geometrically interpret their multiplications. Résumé. — L’objet de ce travail est d’introduire et étudier la notion des vecteurs sphériques, que nous pouvons considérer comme une généralisation naturelle des arguments des nombres complexes au cas des quaternions. 
Après avoir établi quelques propriétés élémentaires de ces vecteurs particuliers, nous montrons par transport de structure que les vecteurs sphériques forment un groupe additif non-abélien, isomorphe au groupe des quaternions unitaires. Au plan général et sur des exemples concrets, cette identification nous permet, premièrement, de présenter une nouvelle forme polaire des quaternions, puis de représenter les quaternions unitaires sur la sphère unité de R et d’interpréter géométriquement leurs multiplications. Mots clefs. — quaternions, argument, vecteurs sphériques, interprétation géométrique, forme polaire, forme exponentielle.", "title": "" }, { "docid": "e5b543b8880ec436874bee6b03a58618", "text": "This paper outlines my concerns with Qualitative Data Analysis’ (QDA) numerous remodelings of Grounded Theory (GT) and the subsequent eroding impact. I cite several examples of the erosion and summarize essential elements of classic GT methodology. It is hoped that the article will clarify my concerns with the continuing enthusiasm but misunderstood embrace of GT by QDA methodologists and serve as a preliminary guide to novice researchers who wish to explore the fundamental principles of GT.", "title": "" }, { "docid": "da33a718aa9dbf6e9feaff5e63765639", "text": " This paper introduces a new frequency-domain approach to describe the relationships (direction of information flow) between multivariate time series based on the decomposition of multivariate partial coherences computed from multivariate autoregressive models. We discuss its application and compare its performance to other approaches to the problem of determining neural structure relations from the simultaneous measurement of neural electrophysiological signals. The new concept is shown to reflect a frequency-domain representation of the concept of Granger causality.", "title": "" }, { "docid": "2172e78731ee63be5c15549e38c4babb", "text": "The design of a low-cost low-power ring oscillator-based truly random number generator (TRNG) macrocell, which is suitable to be integrated in smart cards, is presented. The oscillator sampling technique is exploited, and a tetrahedral oscillator with large jitter has been employed to realize the TRNG. Techniques to improve the statistical quality of the ring oscillatorbased TRNGs' bit sequences have been presented and verified by simulation and measurement. A postdigital processor is added to further enhance the randomness of the output bits. Fabricated in the HHNEC 0.13-μm standard CMOS process, the proposed TRNG has an area as low as 0.005 mm2. Powered by a single 1.8-V supply voltage, the TRNG has a power consumption of 40 μW. The bit rate of the TRNG after postprocessing is 100 kb/s. The proposed TRNG has been made into an IP and successfully applied in an SD card for encryption application. The proposed TRNG has passed the National Institute of Standards and Technology tests and Diehard tests.", "title": "" }, { "docid": "2a9e4ed54dd91eb8a6bad757afc9ac75", "text": "The modern advancements in digital electronics allow waveforms to be easily synthesized and captured using only digital electronics. The synthesis of radar waveforms using only digital electronics, such as Digital-to-Analog Converters (DACs) and Analog-to-Digital Converters (ADCs) allows for a majority of the analog chain to be removed from the system. In order to create a constant amplitude waveform, the amplitude distortions must be compensated for. 
The method chosen to compensate for the amplitude distortions is to pre-distort the waveform so, when it is influenced by the system, the output waveform has a near constant amplitude modulus. The effects of the predistortion were observed to be successful in both range and range-Doppler radar implementations.", "title": "" }, { "docid": "53cf85922865609c4a7591bd06679660", "text": "Speeded visual word naming and lexical decision performance are reported for 2428 words for young adults and healthy older adults. Hierarchical regression techniques were used to investigate the unique predictive variance of phonological features in the onsets, lexical variables (e.g., measures of consistency, frequency, familiarity, neighborhood size, and length), and semantic variables (e.g., imageability and semantic connectivity). The influence of most variables was highly task dependent, with the results shedding light on recent empirical controversies in the available word recognition literature. Semantic-level variables accounted for unique variance in both speeded naming and lexical decision performance, with the latter task producing the largest semantic-level effects. Discussion focuses on the utility of large-scale regression studies in providing a complementary approach to the standard factorial designs to investigate visual word recognition.", "title": "" }, { "docid": "a22ebcf11189744e7e4f15d82b1fa9d2", "text": "Several mathematical models of epidemic cholera have recently been proposed in response to outbreaks in Zimbabwe and Haiti. These models aim to estimate the dynamics of cholera transmission and the impact of possible interventions, with a goal of providing guidance to policy makers in deciding among alternative courses of action, including vaccination, provision of clean water, and antibiotics. Here, we discuss concerns about model misspecification, parameter uncertainty, and spatial heterogeneity intrinsic to models for cholera. We argue for caution in interpreting quantitative predictions, particularly predictions of the effectiveness of interventions. We specify sensitivity analyses that would be necessary to improve confidence in model-based quantitative prediction, and suggest types of monitoring in future epidemic settings that would improve analysis and prediction.", "title": "" } ]
scidocsrr
8716fa5f1bcc692f52b788ae3e12d2bd
Hypervisors vs. Lightweight Virtualization: A Performance Comparison
[ { "docid": "c77b2b45f189b6246c9f2e2ed527772f", "text": "PaaS vendors face challenges in efficiently providing services with the growth of their offerings. In this paper, we explore how PaaS vendors are using containers as a means of hosting Apps. The paper starts with a discussion of PaaS Use case and the current adoption of Container based PaaS architectures with the existing vendors. We explore various container implementations - Linux Containers, Docker, Warden Container, lmctfy and OpenVZ. We look at how each of this implementation handle Process, FileSystem and Namespace isolation. We look at some of the unique features of each container and how some of them reuse base Linux Container implementation or differ from it. We also explore how IaaSlayer itself has started providing support for container lifecycle management along with Virtual Machines. In the end, we look at factors affecting container implementation choices and some of the features missing from the existing implementations for the next generation PaaS.", "title": "" } ]
[ { "docid": "f2edf7cc3671b38ae5f597e840eda3a2", "text": "This paper describes the process of creating a design pattern management interface for a collection of mobile design patterns. The need to communicate how patterns are interrelated and work together to create solutions motivated the creation of this interface. Currently, most design pattern collections are presented in alphabetical lists. The Oracle Mobile User Experience team approach is to communicate relationships visually by highlighting and connecting related patterns. Before the team designed the interface, we first analyzed common relationships between patterns and created a pattern language map. Next, we organized the patterns into conceptual design categories. Last, we designed a pattern management interface that enables users to browse patterns and visualize their relationships.", "title": "" }, { "docid": "87eed2ab66bd9bda90cf2a838b990207", "text": "We present a new framework for compositional distributional semantics in which the distributional contexts of lexemes are expressed in terms of anchored packed dependency trees. We show that these structures have the potential to capture the full sentential contexts of a lexeme and provide a uniform basis for the composition of distributional knowledge in a way that captures both mutual disambiguation and generalization.", "title": "" }, { "docid": "6b57c73406000ca0683b275c7e164c24", "text": "In this letter, a novel compact and broadband integrated transition between a laminated waveguide and an air-filled rectangular waveguide operating in Ka band is proposed. A three-pole filter equivalent circuit model is employed to interpret the working mechanism and to predict the performance of the transition. A back-to-back prototype of the proposed transition is designed and fabricated for proving the concept. Good agreement of the measured and simulated results is obtained. The measured result shows that the insertion loss of better than 0.26 dB from 34.8 to 37.8 GHz can be achieved.", "title": "" }, { "docid": "5ca14c0581484f5618dd806a6f994a03", "text": "Many of existing criteria for evaluating Web sites quality require methods such as heuristic evaluations, or/and empirical usability tests. This paper aims at defining a quality model and a set of characteristics relating internal and external quality factors and giving clues about potential problems, which can be measured by automated tools. The first step in the quality assessment process is an automatic check of the source code, followed by manual evaluation, possibly supported by an appropriate user panel. As many existing tools can check sites (mainly considering accessibility issues), the general architecture will be based upon a conceptual model of the site/page, and the tools will export their output to a Quality Data Base, which is the basis for subsequent actions (checking, reporting test results, etc.).", "title": "" }, { "docid": "4e97003a5609901f1f18be1ccbf9db46", "text": "Fog computing is strongly emerging as a relevant and interest-attracting paradigm+technology for both the academic and industrial communities. However, architecture and methodological approaches are still prevalent in the literature, while few research activities have specifically targeted so far the issues of practical feasibility, cost-effectiveness, and efficiency of fog solutions over easily-deployable environments. 
In this perspective, this paper originally presents i) our fog-oriented framework for Internet-of-Things applications based on innovative scalability extensions of the open-source Kura gateway and ii) its Docker-based containerization over challenging and resource-limited fog nodes, i.e., RaspberryPi devices. Our practical experience and experimental work show the feasibility of using even extremely constrained nodes as fog gateways; the reported results demonstrate that good scalability and limited overhead can be coupled, via proper configuration tuning and implementation optimizations, with the significant advantages of containerization in terms of flexibility and easy deployment, also when working on top of existing, off-the-shelf, and limited-cost gateway nodes.", "title": "" }, { "docid": "5ec61b8fb2d63282e0c5129b82efeaaa", "text": "1. Tecnologia blockchain per il trading energetico peer-to-peer Il mercato dell’energia sta attraversando una vera e propria rivoluzione dovuta alla sua liberalizzazione e all’avvento dei prosumers, consumatori che sono al tempo stesso piccoli produttori di energia da fonti rinnovabili. I modelli tradizionali di trading basati su un mercato centralizzato che opera su base giornaliera o settimanale non sono quindi piú adeguati per scenari in cui il numero e la frequenza delle transazioni fra micro-produttori cresce esponenzialmente. Lo scopo della tesi é di progettare e sviluppare le API di accesso e gestione di una piattaforma di trading peer-to-peer dell’energia basata su tecnologie blockchain open source, che permetta agli utenti di inviare, eseguire ed archiviare ordini e transazioni in modo flessibile e con tempi di risposta brevi. La tesi prevede la realizzazione di una dApp che definisca un front-end utilizzato dai trader per ricevere ed eseguire le transazioni. La tesi si inquadra nell’ambito di una collaborazione del mio gruppo di ricerca con IIT-CNR, nell’ambito di un progetto su blockchain che raggruppa sia partner accademici che industriali.", "title": "" }, { "docid": "b7b8b850659367695ca3d2eb3d0f710c", "text": "Human face-to-face communication is a complex multimodal signal. We use words (language modality), gestures (vision modality) and changes in tone (acoustic modality) to convey our intentions. Humans easily process and understand faceto-face communication, however, comprehending this form of communication remains a significant challenge for Artificial Intelligence (AI). AI must understand each modality and the interactions between them that shape human communication. In this paper, we present a novel neural architecture for understanding human communication called the Multiattention Recurrent Network (MARN). The main strength of our model comes from discovering interactions between modalities through time using a neural component called the Multi-attention Block (MAB) and storing them in the hybrid memory of a recurrent component called the Long-short Term Hybrid Memory (LSTHM). We perform extensive comparisons on six publicly available datasets for multimodal sentiment analysis, speaker trait recognition and emotion recognition. 
MARN shows state-of-the-art performance on all the datasets.", "title": "" }, { "docid": "48fea4f95e6b7dfa7bb371f28751ac5a", "text": "The suppression mechanism of the differential-mode noise of an X capacitor in offline power supplies is, for the first time, attributed to two distinct concepts: 1) impedance mismatch (regarding a line impedance stabilization network or mains and the equivalent power supply noise source impedance) and 2) C(dv/dt) noise current balancing (to suppress mix-mode noise). The effectiveness of X capacitors is investigated with this theory, along with experimental supports. Understanding of the two aforementioned mechanisms gives better insight into filter effectiveness, which may lead to a more compact filter design.", "title": "" }, { "docid": "49affcad06d142003c063b94bc0343e8", "text": "Despite its evident universality and high social value, the ultimate biological role of music and its connection to brain disorders remain poorly understood. Recent findings from basic neuroscience have shed fresh light on these old problems. New insights provided by clinical neuroscience concerning the effects of brain disorders promise to be particularly valuable in uncovering the underlying cognitive and neural architecture of music and for assessing candidate accounts of the biological role of music. Here we advance a new model of the biological role of music in human evolution and the link to brain disorders, drawing on diverse lines of evidence derived from comparative ethology, cognitive neuropsychology and neuroimaging studies in the normal and the disordered brain. We propose that music evolved from the call signals of our hominid ancestors as a means mentally to rehearse and predict potentially costly, affectively laden social routines in surrogate, coded, low-cost form: essentially, a mechanism for transforming emotional mental states efficiently and adaptively into social signals. This biological role of music has its legacy today in the disordered processing of music and mental states that characterizes certain developmental and acquired clinical syndromes of brain network disintegration.", "title": "" }, { "docid": "47a1db2dd3367a7ed2c7318911eb833a", "text": "Scale of data and scale of computation infrastructures together enable the current deep learning renaissance. However, training large-scale deep architectures demands both algorithmic improvement and careful system configuration. In this paper, we focus on employing the system approach to speed up large-scale training. Taking both the algorithmic and system aspects into consideration, we develop a procedure for setting mini-batch size and choosing computation algorithms. We also derive lemmas for determining the quantity of key components such as the number of GPUs and parameter servers. Experiments and examples show that these guidelines help effectively speed up large-scale deep learning training.", "title": "" }, { "docid": "bff6e87727db20562091a6c8c08f3667", "text": "Many trust-aware recommender systems have explored the value of explicit trust, which is specified by users with binary values and simply treated as a concept with a single aspect. However, in social science, trust is known as a complex term with multiple facets, which have not been well exploited in prior recommender systems. In this paper, we attempt to address this issue by proposing a (dis)trust framework with considerations of both interpersonal and impersonal aspects of trust and distrust. 
Specifically, four interpersonal aspects (benevolence, competence, integrity and predictability) are computationally modelled based on users’ historic ratings, while impersonal aspects are formulated from the perspective of user connections in trust networks. Two logistic regression models are developed and trained by accommodating these factors, and then applied to predict continuous values of users’ trust and distrust, respectively. Trust information is further refined by corresponding predicted distrust information. The experimental results on real-world data sets demonstrate the effectiveness of our proposed model in further improving the performance of existing state-of-the-art trust-aware recommendation approaches.", "title": "" }, { "docid": "af5b219dcc303a486ce4f4c5bd41eb61", "text": "We demonstrate that simple, unobtrusive sensors attached to the lower arm can be used to capture muscle activations during specific hand and arm activities such as grasping. Specifically, we investigate the use of force sensitive resistors and fabric stretch sensors, that can both be easily integrated into clothing. We use the above sensors to detect the contractions of arm muscles. We present and compare the signals that both sensors produce for a set of typical hand actions. We finally argue that they can provide important information for activity recognition", "title": "" }, { "docid": "2f0d1a5ca593527cec163cb280ccde4a", "text": "Energy consumption in metropolitan cities is increasing day by day. In every city, considerable amount of electricity is being used for the purpose of street lighting system. Some areas of the city may have low frequency of passerby, but it's observed that the amount of energy the street lights consume in these areas is same as that of areas with high frequency passerby. As a result enormous amount of energy is wasted without being used. In the proposed system, high intensity discharge lamps are replaced by LED's which can alter its intensity based on the need. Movement of vehicles is sensed using LDR (Light Dependent Resistor) and the intensity of the street light is reduced when not in use. The system also detects fault in the system and indicates it to the base station using GSM (Global System for Mobile communication) technology by sending SMS (short message service).", "title": "" }, { "docid": "7d3263b5454001c1ae47d20f41b5d7a8", "text": "This paper describes a parsing model that combines the exact dynamic programming of CRF parsing with the rich nonlinear featurization of neural net approaches. Our model is structurally a CRF that factors over anchored rule productions, but instead of linear potential functions based on sparse features, we use nonlinear potentials computed via a feedforward neural network. Because potentials are still local to anchored rules, structured inference (CKY) is unchanged from the sparse case. Computing gradients during learning involves backpropagating an error signal formed from standard CRF sufficient statistics (expected rule counts). Using only dense features, our neural CRF already exceeds a strong baseline CRF model (Hall et al., 2014). 
In combination with sparse features, our system achieves 91.1 F1 on section 23 of the Penn Treebank, and more generally outperforms the best prior single parser results on a range of languages.", "title": "" }, { "docid": "ac9902e52a3185f34694dde80abdf89f", "text": "We propose the Probabilistic Sentential Decision Diagram (PSDD): A complete and canonical representation of probability distributions defined over the models of a given propositional theory. Each parameter of a PSDD can be viewed as the (conditional) probability of making a decision in a corresponding Sentential Decision Diagram (SDD). The SDD itself is a recently proposed complete and canonical representation of propositional theories. PSDDs are tractable representations, and further, the parameters of a PSDD can be efficiently estimated, in closed form, from complete data. We empirically evaluate the quality of PSDDs learned from data, when we have knowledge, a priori, of the domain logical constraints.", "title": "" }, { "docid": "99d57cef03e21531be9f9663ec023987", "text": "Reinforcement learning addresses the problem of learning to select actions in order to maximize one's performance in unknown environments. To scale reinforcement learning to complex real-world tasks, such as typically studied in AI, one must ultimately be able to discover the structure in the world, in order to abstract away the myriad of details and to operate in more tractable problem spaces. This paper presents the SKILLS algorithm. SKILLS discovers skills, which are partially defined action policies that arise in the context of multiple, related tasks. Skills collapse whole action sequences into single operators. They are learned by minimizing the compactness of action policies, using a description length argument on their representation. Empirical results in simple grid navigation tasks illustrate the successful discovery of structure in reinforcement learning.", "title": "" }, { "docid": "91c792fac981d027ac1f2a2773674b10", "text": "Cancer is a molecular disease associated with alterations in the genome, which, thanks to the highly improved sensitivity of mutation detection techniques, can be identified in cell-free DNA (cfDNA) circulating in blood, a method also called liquid biopsy. This is a non-invasive alternative to surgical biopsy and has the potential of revealing the molecular signature of tumors to aid in the individualization of treatments.
In this review, we focus on cfDNA analysis, its advantages, and clinical applications employing genomic tools (NGS and dPCR) particularly in the field of oncology, and highlight its valuable contributions to early detection, prognosis, and prediction of treatment response.", "title": "" }, { "docid": "f234f04e1adaba8a64fd4d7fcd29282f", "text": "In this paper, we introduce two different transforming steering wheel systems that can be utilized to augment user experience for future partially autonomous and fully autonomous vehicles. The first one is a robotic steering wheel that can mechanically transform by using its actuators to move the various components into different positions. The second system is a LED steering wheel that can visually transform by using LEDs embedded along the rim of wheel to change colors. Both steering wheel systems contain onboard microcontrollers developed to interface with our driving simulator. The main function of these two systems is to provide emergency warnings to drivers in a variety of safety critical scenarios, although the design space that we propose for these steering wheel systems also includes the use as interactive user interfaces. To evaluate the effectiveness of the emergency alerts, we conducted a driving simulator study examining the performance of participants (N=56) after an abrupt loss of autonomous vehicle control. Drivers who experienced the robotic steering wheel performed significantly better than those who experienced the LED steering wheel. The results of this study suggest that alerts utilizing mechanical movement are more effective than purely visual warnings.", "title": "" }, { "docid": "e29774fe6bd529b769faca8e54202be1", "text": "The main objective of this research is to develop an Intelligent System using a data mining modeling technique, namely, Naive Bayes. It is implemented as a web based application in which the user answers the predefined questions. It retrieves hidden data from the stored database and compares the user values with the trained data set. It can answer complex queries for diagnosing heart disease and thus assist healthcare practitioners to make intelligent clinical decisions which traditional decision support systems cannot. By providing effective treatments, it also helps to reduce treatment costs. Keywords: Data mining, Naive Bayes, heart disease, prediction", "title": "" } ]
scidocsrr
8e6a62eae08b3658e290cf0b103d6c64
A systematic mapping on gamification applied to education
[ { "docid": "78e21364224b9aa95f86ac31e38916ef", "text": "Gamification is the use of game design elements and game mechanics in non-game contexts. This idea has been used successfully in many web based businesses to increase user engagement. Some researchers suggest that it could also be used in web based education as a tool to increase student motivation and engagement. In an attempt to verify those theories, we have designed and built a gamification plugin for a well-known e-learning platform. We have made an experiment using this plugin in a university course, collecting quantitative and qualitative data in the process. Our findings suggest that some common beliefs about the benefits obtained when using games in education can be challenged. Students who completed the gamified experience got better scores in practical assignments and in overall score, but our findings also suggest that these students performed poorly on written assignments and participated less on class activities, although their initial motivation was higher. 2013 Elsevier Ltd. All rights reserved.", "title": "" } ]
[ { "docid": "93ec0a392a7a29312778c6834ffada73", "text": "BACKGROUND\nThe new world of safe aesthetic injectables has become increasingly popular with patients. Not only is there less risk than with surgery, but there is also significantly less downtime to interfere with patients' normal work and social schedules. Botulinum toxin (BoNT) type A (BoNTA) is an indispensable tool used in aesthetic medicine, and its broad appeal has made it a hallmark of modern culture. The key to using BoNTA to its best effect is to understand patient-specific factors that will determine the treatment plan and the physician's ability to personalize injection strategies.\n\n\nOBJECTIVES\nTo present international expert viewpoints and consensus on some of the contemporary best practices in aesthetic BoNTA, so that beginner and advanced injectors may find pearls that provide practical benefits.\n\n\nMETHODS AND MATERIALS\nExpert aesthetic physicians convened to discuss their approaches to treatment with BoNT. The discussions and consensus from this meeting were used to provide an up-to-date review of treatment strategies to improve patient results. Information is presented on patient management and assessment, documentation and consent, aesthetic scales, injection strategies, dilution, dosing, and adverse events.\n\n\nCONCLUSION\nA range of product- and patient-specific factors influence the treatment plan. Truly optimized outcomes are possible only when the treating physician has the requisite knowledge, experience, and vision to use BoNTA as part of a unique solution for each patient's specific needs.", "title": "" }, { "docid": "34b86ea3920e8d7d1fff69c2e2d2ea23", "text": "OBJECTIVES\nTo review studies that have reported on the prevalence of memory complaints and the relationship between memory complaints and impairment or decline (dementia) in elderly individuals.\n\n\nDATA SOURCES AND STUDY SELECTION\nAll publications in the English language relating to memory complaints, memory impairment, cognitive disorder and dementia in MEDLINE, PSYCHLIT and EMBASE computerized databases, together with a search of relevant citations.\n\n\nDATA SYNTHESIS\nThe prevalence of memory complaints, defined as everyday memory problems, shows a large variation of approximately 25 - 50%. A high age, female gender and a low level of education are generally associated with a high prevalence of memory complaints. In community-based samples of elderly subjects an association has been found between memory complaints and memory impairment, after adjustment for depressive symptomatology. Memory complaints predict dementia after a follow-up of at least 2 years, in particular in those with mild cognitive impairment, defined as Mini Mental State Examination (MMSE) > 23. Memory complaints in highly educated elderly subjects may be predictive of dementia even when there is no indication of cognitive impairment on short cognitive screen tests. The shift in methodology which is noticeable in the recently published major studies is discussed as a possible explanation for the established association between memory complaints and decline in memory (or dementia) in elderly subjects. 
Three methodological factors, in particular, are responsible for the results: community-based sampling, longitudinal design and the treatment of variables such as depression, cognitive impairment and level of education.\n\n\nCONCLUSION\nMemory complaints in elderly people should no longer be considered merely as an innocent age-related phenomenon or a symptom of depression. Instead, these complaints deserve to be taken seriously, at least as a possible early sign of dementia.", "title": "" }, { "docid": "c8b382852f445c6f05c905371330dd07", "text": "Novelty and surprise play significant roles in animal behavior and in attempts to understand the neural mechanisms underlying it. They also play important roles in technology, where detecting observations that are novel or surprising is central to many applications, such as medical diagnosis, text processing, surveillance, and security. Theories of motivation, particularly of intrinsic motivation, place novelty and surprise among the primary factors that arouse interest, motivate exploratory or avoidance behavior, and drive learning. In many of these studies, novelty and surprise are not distinguished from one another: the words are used more-or-less interchangeably. However, while undeniably closely related, novelty and surprise are very different. The purpose of this article is first to highlight the differences between novelty and surprise and to discuss how they are related by presenting an extensive review of mathematical and computational proposals related to them, and then to explore the implications of this for understanding behavioral and neuroscience data. We argue that opportunities for improved understanding of behavior and its neural basis are likely being missed by failing to distinguish between novelty and surprise.", "title": "" }, { "docid": "87247a457fa266d017d33a340e6bd6ae", "text": "In this paper we give a synoptic view of the growth text processing technology of information extraction (IE) whose function is to extract information about a pre-specified set of entities, relations or events from natural language textsand to record this information in structured representations called templates. Here we describe the nature of the IE task, review the history of the area from its origins in AI work in the1960's and 70's till thepresent, discuss the techniques being used to carry out thetask, describeapplication areaswhereIE systemsareor areabout to beat work, and conclude with a discussion of the challenges facing the area. What emerges is a pictureof an exciting new text processing technology with ahost of new applications, both on its own and in conjunction with other technologies, such as information retrieval, machine translation and data mining.", "title": "" }, { "docid": "a34a49a337cd0d198fe8bcc05f8a91ea", "text": "In most real-world audio recordings, we encounter several types of audio events. In this paper, we develop a technique for detecting signature audio events, that is based on identifying patterns of occurrences of automatically learned atomic units of sound, which we call Acoustic Unit Descriptors or AUDs. Experiments show that the methodology works as well for detection of individual events and their boundaries in complex recordings.", "title": "" }, { "docid": "b058bbc1485f99f37c0d72b960dd668b", "text": "In two experiments short-term forgetting was investigated in a short-term cued recall task designed to examine proactive interference effects. 
Mixed modality study lists were tested at varying retention intervals using verbal and non-verbal distractor activities. When an interfering foil was read aloud and a target item read silently, strong PI effects were observed for both types of distractor activity. When the target was read aloud and followed by a verbal distractor activity, weak PI effects emerged. However, when a target item was read aloud and non-verbal distractor activity filled the retention interval, performance was immune to the effects of PI for at least eight seconds. The results indicate that phonological representations of items read aloud still influence performance after 15 seconds of distractor activity. Short-term Forgetting 3 Determinants of Short-term Forgetting: Decay, Retroactive Interference or Proactive Interference? Most current models of short-term memory assert that to-be-remembered items are represented in terms of easily degraded phonological representations. However, there is disagreement on how the traces become degraded. Some propose that trace degradation is due to decay brought about by the prevention of rehearsal (Baddeley, 1986; Burgess & Hitch, 1992; 1996), or a switch in attention (Cowan, 1993); others attribute degradation to retroactive interference (RI) from other list items (Nairne, 1990; Tehan & Fallon; in press; Tehan & Humphreys, 1998). We want to add proactive interference (PI) to the possible causes of short-term forgetting, and by showing how PI effects change as a function of the type of distractor task employed during a filled retention interval, we hope to evaluate the causes of trace degradation. By manipulating the type of distractor activity in a brief retention interval it is possible to test some of the assumptions about decay versus interference explanations of short-term forgetting. The decay position is quite straightforward. If rehearsal is prevented, then the trace should decay; the type of distractor activity should be immaterial as long as rehearsal is prevented. From the interference perspective both the Feature Model (Nairne, 1990) and the Tehan and Humphreys (1995,1998) connectionist model predict that there should be occasions where very little forgetting occurs. In the Feature Model items are represented as sets of modality dependent and modality independent features. Forgetting occurs when adjacent list items have common features. Some of the shared features of the first item are overwritten by the latter item, thereby producing a trace that bears only partial resemblance to the Short-term Forgetting 4 original item. One occasion in which interference would be minimized is when an auditory list is followed by a non-auditory distractor task. The modality dependent features of the list items would not be overwritten or degraded by the distractor activity because the modality dependent features of the list and distractor items are different to each other. By the same logic, a visually presented list should not be affected by an auditory distractor task, since modality specific features are again different in each case. In the Tehan and Humphreys (1995) approach, presentation modality is related to the strength of phonological representations that support recall. They assume that auditory activity produces stronger representations than does visual activity. Thus this model also predicts that when a list is presented auditorially, it will not be much affected by subsequent non-auditory distractor activity. 
However, in the case of a visual list with auditory distraction, the assumption would be that interference would be maximised. The phonological codes for the list items would be relatively weak in the first instance and a strong source of auditory retroactive interference follows. This prediction is the opposite of that derived from the Feature Model. Since PI effects appear to be sensitive to retention interval effects (Tehan & Humphreys, 1995; Wickens, Moody & Dow, 1981), we have chosen to employ a PI task to explore these differential predictions. We have recently developed a short-term cued recall task in which PI can easily be manipulated (Tehan & Humphreys, 1995; 1996; 1998). In this task, participants study a series of trials in which items are presented in blocks of four items with each trial consisting of either one or two blocks. Each trial has a target item that is an instance of either a taxonomic or rhyme category, and the category label is presented at test as a retrieval cue. The two-block trials are the important trials Short-term Forgetting 5 because it is in these trials that PI is manipulated. In these trials the two blocks are presented under directed forgetting instructions. That is, once participants find out that it is a two-block trial they are to forget the first block and remember the second block because the second block contains the target item. On control trials, all nontarget items in both blocks are unrelated to the target. On interference trials, a foil that is related to the target is embedded among three other to-be-forgotten fillers in the first block and the target is embedded among three unrelated filler items in the second block. Following the presentation of the second block the category cue is presented and subjects are asked to recall the word from the second block that is an instance of that category. Using this task we have been able to show that when taxonomic categories are used on an immediate test (e.g., dog is the foil, cat is the target and ANIMAL is the cue), performance is immune to PI. However, when recall is tested after a 2-second filled retention interval, PI effects are observed; target recall is depressed and the foil is often recalled instead of the target. In explaining these results, Tehan and Humphreys (1995) assumed that items were represented in terms of sets of features. The representation of an item was seen to involve both semantic and phonological features, with the phonological features playing a dominant role in item recall. They assumed that the cue would elicit the representations of the two items in the list, and that while the semantic features of both target and foil would be available, only the target would have active phonological features. Thus on an immediate test, knowing that the target ended in -at would make the task of discriminating between cat and dog relatively easy. On a delayed test they assumed that all phonological features were inactive and the absence of phonological information would make discrimination more difficult. Short-term Forgetting 6 A corollary of the Tehan and Humphreys (1995) assumption is that if phonological codes could be provided for a non-rhyming foil, then discrimination should again be problematic. Presentation modality is one variable that appears to produce differences in strength of phonological codes with reading aloud producing stronger representations than reading silently. 
Tehan and Humphreys (Experiment 5) varied the modality of the two blocks such that participants either read the first block silently and then read the second block aloud or vice versa. In the silent aloud condition performance was immune to PI. The assumption was that the phonological representation of the target item in the second block was very strong with the result that there were no problems in discrimination. However, PI effects were present in the aloud-silent condition. The phonological representation of the read-aloud foil appeared to serve as a strong source of competition to the read-silently target item. All the above research has been based on the premise that phonological representations for visually presented items are weak and rapidly lose their ability to support recall. This assumption seems tenable given that phonological similarity effects and phonological intrusion effects in serial recall are attenuated rapidly with brief periods of distractor activity (Conrad, 1967; Estes, 1973; Tehan & Humphreys, 1995). The cued recall experiments that have used a filled retention interval have always employed silent visual presentation of the study list and required spoken shadowing of the distractor items. That is, the phonological representations of both target and foil are assumed to be quite weak and the shadowing task would provide a strong source of interference. These are likely to be the conditions that produce maximum levels of PI. The patterns of PI may change with mixed modality study lists and alternative forms of distractor activity. For example, given a strong phonological representation of the target, weak representations of the foil and a weak source of Short-term Forgetting 7 retroactive interference, it might be possible to observe immunity to PI on a delayed test. The following experiments explore the relationship between presentation modality, distractor modality and PI Experiment 1 The Tehan and Humphreys (1995) mixed modality experiment indicated that PI effects were sensitive to the modalities of the first and second block of items. In the current study we use mixed modality study lists but this time include a two-second retention interval, the same as that used by Tehan and Humphreys. However, the modality of the distractor activity was varied as well. Participants either had to respond aloud verbally or make a manual response that did not involve any verbal output. From the Tehan and Humphreys perspective the assumption made is that the verbal distractor activity will produce more disruption to the phonological representation of the target item than will a non-verbal distractor activity and the PI will be observed. However, it is quite possible that with silent-aloud presentation and a non-verbal distractor activity immunity to PI might be maintained across a twosecond retention interval. From the Nairne perspective, interfe", "title": "" }, { "docid": "cb456d94420dcc3811983004a1af7c6b", "text": "A new method for deriving isolated buck-boost (IBB) converter with single-stage power conversion is proposed in this paper and novel IBB converters based on high-frequency bridgeless-interleaved boost rectifiers are presented. The semiconductors, conduction losses, and switching losses are reduced significantly by integrating the interleaved boost converters into the full-bridge diode-rectifier. 
Various high-frequency bridgeless boost rectifiers are harvested based on different types of interleaved boost converters, including the conventional boost converter and high step-up boost converters with voltage multiplier and coupled inductor. The full-bridge IBB converter with voltage multiplier is analyzed in detail. The voltage multiplier helps to enhance the voltage gain and reduce the voltage stresses of the semiconductors in the rectification circuit. Hence, a transformer with reduced turns ratio and parasitic parameters, and low-voltage rated MOSFETs and diodes with better switching and conduction performances can be applied to improve the efficiency. Moreover, optimized phase-shift modulation strategy is applied to the full-bridge IBB converter to achieve isolated buck and boost conversion. What's more, soft-switching performance of all of the active switches and diodes within the whole operating range is achieved. A 380-V output prototype is fabricated to verify the effectiveness of the proposed IBB converters and its control strategies.", "title": "" }, { "docid": "ae4a6db7594ef7645af2db42f599c178", "text": "In this position paper, we discuss how different branches of research on clustering and pattern mining, while rather different at first glance, in fact have a lot in common and can learn a lot from each other’s solutions and approaches. We give brief introductions to the fundamental problems of different sub-fields of clustering, especially focusing on subspace clustering, ensemble clustering, alternative (as a variant of constraint) clustering, and multiview clustering (as a variant of alternative clustering). Second, we relate a representative of these areas, subspace clustering, to pattern mining. We show that, while these areas use different vocabularies and intuitions, they share common roots and they are exposed to essentially the same fundamental problems; in particular, we detail how certain problems currently faced by the one field, have been solved by the other field, and vice versa. The purpose of our survey is to take first steps towards bridging the linguistic gap between different (sub-) communities and to make researchers from different fields aware of the existence of similar problems (and, partly, of similar solutions or of solutions that could be transferred) in the literature on the other research topic.", "title": "" }, { "docid": "471bb6ffa65dac100e59837df9f57540", "text": "Given the existence of many change detection algorithms, each with its own peculiarities and strengths, we propose a combination strategy, that we termed IUTIS (In Unity There Is Strength), based on a genetic Programming framework. This combination strategy is aimed at leveraging the strengths of the algorithms and compensate for their weakness. In this paper we show our findings in applying the proposed strategy in two different scenarios. The first scenario is purely performance-based. The second scenario performance and efficiency must be balanced. Results demonstrate that starting from simple algorithms we can achieve comparable results with respect to more complex state-of-the-art change detection algorithms, while keeping the computational complexity affordable for real-time applications.", "title": "" }, { "docid": "7b1cc7b3f8e31828900c4d53ab295db5", "text": "Unsupervised domain mapping aims to learn a function to translate domain X to Y by a function GXY in the absence of paired examples. 
Finding the optimal GXY without paired data is an ill-posed problem, so appropriate constraints are required to obtain reasonable solutions. One of the most prominent constraints is cycle consistency, which enforces the translated image by GXY to be translated back to the input image by an inverse mapping GY X . While cycle consistency requires the simultaneous training of GXY and GY X , recent studies have shown that one-sided domain mapping can be achieved by preserving pairwise distances between images. Although cycle consistency and distance preservation successfully constrain the solution space, they overlook the special properties of images that simple geometric transformations do not change the image’s semantic structure. Based on this special property, we develop a geometry-consistent generative adversarial network (GcGAN), which enables one-sided unsupervised domain mapping. GcGAN takes the original image and its counterpart image transformed by a predefined geometric transformation as inputs and generates two images in the new domain coupled with the corresponding geometry-consistency constraint. The geometryconsistency constraint reduces the space of possible solutions while keep the correct solutions in the search space. Quantitative and qualitative comparisons with the baseline (GAN alone) and the state-of-the-art methods including CycleGAN [62] and DistanceGAN [5] demonstrate the effectiveness of our method.", "title": "" }, { "docid": "474572cef9f1beb875d3ae012e06160f", "text": "Published attacks against smartphones have concentrated on software running on the application processor. With numerous countermeasures like ASLR, DEP and code signing being deployed by operating system vendors, practical exploitation of memory corruptions on this processor has become a time-consuming endeavor. At the same time, the cellular baseband stack of most smartphones runs on a separate processor and is significantly less hardened, if at all. In this paper we demonstrate the risk of remotely exploitable memory corruptions in cellular baseband stacks. We analyze two widely deployed baseband stacks and give exemplary cases of memory corruptions that can be leveraged to inject and execute arbitrary code on the baseband processor. The vulnerabilities can be triggered over the air interface using a rogue GSM base station, for instance using OpenBTS together with a USRP software defined radio.", "title": "" }, { "docid": "7b66188e4e61ff4837ad53e29110c1f2", "text": "Carrier aggregation (CA) is an inevitable technology to improve the data transfer rate with widening operation bandwidths, while the current frequency assignment of cellular bands is dispersed over. In the frequency division duplex (FDD) CA, acoustic multiplexers are one of the most important key devices. This paper describes the design technologies for the surface acoustic wave (SAW) multiplexers, such as filter topologies, matching network configurations, SAW characteristics and so on. In the case of narrow duplex gap bandwidth such as Band4 and Band25, the characteristics of SAW resonators such as unloaded quality factor (Q) and out-of band impedances act as extremely important role to realize the low insertion loss and the steep skirt characteristics. In order to solve these challenges, a new type high Q SAW resonator that is named IHP-SAW is introduced. 
The results of a novel quadplexer of Band4-Band25 using those technologies show enhanced performances.", "title": "" }, { "docid": "7b27d8b8f05833888b9edacf9ace0a18", "text": "This paper reports results from a study on the adoption of an information visualization system by administrative data analysts. Despite the fact that the system was neither fully integrated with their current software tools nor with their existing data analysis practices, analysts identified a number of key benefits that visualization systems provide to their work. These benefits for the most part occurred when analysts went beyond their habitual and well-mastered data analysis routines and engaged in creative discovery processes. We analyze the conditions under which these benefits arose, to inform the design of visualization systems that can better assist the work of administrative data analysts.", "title": "" }, { "docid": "0f0799a04328852b8cfa742cbc2396c9", "text": "Bitcoin does not scale, because its synchronization mechanism, the blockchain, limits the maximum rate of transactions the network can process. However, using off-blockchain transactions it is possible to create long-lived channels over which an arbitrary number of transfers can be processed locally between two users, without any burden to the Bitcoin network. These channels may form a network of payment service providers (PSPs). Payments can be routed between any two users in real time, without any confirmation delay. In this work we present a protocol for duplex micropayment channels, which guarantees end-to-end security and allow instant transfers, laying the foundation of the PSP network.", "title": "" }, { "docid": "a6f2cee851d2c22d471f473caf1710a1", "text": "One of the main reasons why Byzantine fault-tolerant (BFT) systems are currently not widely used lies in their high resource consumption: $3f+1$ replicas are required to tolerate only $f$ faults. Recent works have been able to reduce the minimum number of replicas to $2f+1$ by relying on trusted subsystems that prevent a faulty replica from making conflicting statements to other replicas without being detected. Nevertheless, having been designed with the focus on fault handling, during normal-case operation these systems still use more resources than actually necessary to make progress in the absence of faults. This paper presents Resource-efficient Byzantine Fault Tolerance (ReBFT), an approach that minimizes the resource usage of a BFT system during normal-case operation by keeping $f$ replicas in a passive mode. In contrast to active replicas, passive replicas neither participate in the agreement protocol nor execute client requests; instead, they are brought up to speed by verified state updates provided by active replicas. 
In case of suspected or detected faults, passive replicas are activated in a consistent manner. To underline the flexibility of our approach, we apply ReBFT to two existing BFT systems: PBFT and MinBFT.", "title": "" }, { "docid": "4b22eaf527842e0fa41a1cd740ad9b40", "text": "Music transcription is the process of creating a written score of music from an audio recording. Musicians and musicologists use transcription to better understand music that may not have a written form, from improvised jazz solos to traditional folk music. Automatic music transcription introduces signal-processing algorithms to extract pitch and rhythm information from recordings. This speeds up and automates the process of music transcription, which requires musical training and is very time consuming even for experts. This thesis explores the still unsolved problem of automatic music transcription through an in-depth analysis of the problem itself and an overview of different techniques to solve the hardest subtask of music transcription, multiple pitch estimation. It concludes with a close study of a typical multiple pitch estimation algorithm and highlights the challenges that remain unsolved.", "title": "" }, { "docid": "e50d1b34d58a957eb09468e894ab02f7", "text": "The promise of legged robots over standard wheeled robots is to provide improved mobility over rough terrain. This promise builds on the decoupling between the environment and the main body of the robot that the presence of articulated legs allows, with two consequences. First, the motion of the main body of the robot can be made largely independent from the roughness of the terrain, within the kinematic limits of the legs: legs provide an active suspension system. Indeed, one of the most advanced hexapod robots of the 1980s was aptly called the Adaptive Suspension Vehicle [1]. Second, this decoupling allows legs to temporarily leave their contact with the ground: isolated footholds on a discontinuous terrain can be overcome, allowing to visit places absolutely out of reach otherwise. Note that having feet firmly planted on the ground is not mandatory here: skating is an equally interesting option, although rarely approached so far in robotics.", "title": "" }, { "docid": "3c09c8de76a9896dbf5f934cea752ca0", "text": "Future video prediction: • Applications: Unsupervised learning scene structure and spatio-temporal relationships • Challenges: High variability and non-specificity of future frames We introduce double-mapping Gated Recurrent Units (dGRU). Standard GRUs update an output state given an input. We also consider the input as a recurrent state, using an extra set of logic gates to update it given the output, allowing for: • Lower memory and computational costs • Mitigation and recovery from temporal error propagation • An identity function during training, helping convergence • Model explainability/pruning through layer removal", "title": "" }, { "docid": "fd3faa049df1d2a0b2fe9af6cf0f3e06", "text": "Wireless Mesh Networks improve their capacities by equipping mesh nodes with multi-radios tuned to non-overlapping channels. Hence the data forwarding between two nodes has multiple selections of links and the bandwidth between the pair of nodes varies dynamically. Under this condition, a mesh node adopts machine learning mechanisms to choose the possible best next hop which has maximum bandwidth when it intends to forward data. 
In this paper, we present a machine learning based forwarding algorithm to let a forwarding node dynamically select the next hop with highest potential bandwidth capacity to resume communication based on learning algorithm. Key to this strategy is that a node only maintains three past status, and then it is able to learn and predict the potential bandwidth capacities of its links. Then, the node selects the next hop with potential maximal link bandwidth. Moreover, a geometrical based algorithm is developed to let the source node figure out the forwarding region in order to avoid flooding. Simulations demonstrate that our approach significantly speeds up the transmission and outperforms other peer algorithms.", "title": "" }, { "docid": "795bede0ff85ce04e956cdc23f8ecb0a", "text": "Neuromorphic computing using post-CMOS technologies is gaining immense popularity due to its promising abilities to address the memory and power bottlenecks in von-Neumann computing systems. In this paper, we propose RESPARC - a reconfigurable and energy efficient architecture built-on Memristive Crossbar Arrays (MCA) for deep Spiking Neural Networks (SNNs). Prior works were primarily focused on device and circuit implementations of SNNs on crossbars. RESPARC advances this by proposing a complete system for SNN acceleration and its subsequent analysis. RESPARC utilizes the energy-efficiency of MCAs for inner-product computation and realizes a hierarchical reconfigurable design to incorporate the data-flow patterns in an SNN in a scalable fashion. We evaluate the proposed architecture on different SNNs ranging in complexity from 2k-230k neurons and 1.2M-5.5M synapses. Simulation results on these networks show that compared to the baseline digital CMOS architecture, RESPARC achieves 500x (15x) efficiency in energy benefits at 300x (60x) higher throughput for multi-layer perceptrons (deep convolutional networks). Furthermore, RESPARC is a technology-aware architecture that maps a given SNN topology to the most optimized MCA size for the given crossbar technology.", "title": "" } ]
scidocsrr
a67befdc3db3a65c502d3ca811bc972b
A Comprehensive Look at Distance Education in the K – 12 Context
[ { "docid": "b47c7d2b469806eb2d75ca76417f62e3", "text": "........................................................................................................................... 4 Introduction ...................................................................................................................... 5 Differences in State Policies Regarding Teaching .......................................................... 14 Trends in Student Achievement: Policy Hypotheses ...................................................... 17 A National View of Teacher Qualifications and Student Achievement ............................. 27 Analysis of Policy Relationships...................................................................................... 32 Conclusions and Implications.......................................................................................... 38 Endnotes ......................................................................................................................... 40 References ...................................................................................................................... 41 CONTENTS", "title": "" } ]
[ { "docid": "3a81f0fc24dd90f6c35c47e60db3daa4", "text": "Advances in information and Web technologies have open numerous opportunities for online retailing. The pervasiveness of the Internet coupled with the keenness in competition among online retailers has led to virtual experiential marketing (VEM). This study examines the relationship of five VEM elements on customer browse and purchase intentions and loyalty, and the moderating effects of shopping orientation and Internet experience on these relationships. A survey was conducted of customers who frequently visited two online game stores to play two popular games in Taiwan. The results suggest that of the five VEM elements, three have positive effects on browse intention, and two on purchase intentions. Both browse and purchase intentions have positive effects on customer loyalty. Economic orientation was found to moderate that relationships between the VEM elements and browse and purchase intentions. However, convenience orientation moderated only the relationships between the VEM elements and browse intention.", "title": "" }, { "docid": "31b26778e230d2ea40f9fe8996e095ed", "text": "The effects of beverage alcohol (ethanol) on the body are determined largely by the rate at which it and its main breakdown product, acetaldehyde, are metabolized after consumption. The main metabolic pathway for ethanol involves the enzymes alcohol dehydrogenase (ADH) and aldehyde dehydrogenase (ALDH). Seven different ADHs and three different ALDHs that metabolize ethanol have been identified. The genes encoding these enzymes exist in different variants (i.e., alleles), many of which differ by a single DNA building block (i.e., single nucleotide polymorphisms [SNPs]). Some of these SNPs result in enzymes with altered kinetic properties. For example, certain ADH1B and ADH1C variants that are commonly found in East Asian populations lead to more rapid ethanol breakdown and acetaldehyde accumulation in the body. Because acetaldehyde has harmful effects on the body, people carrying these alleles are less likely to drink and have a lower risk of alcohol dependence. Likewise, an ALDH2 variant with reduced activity results in acetaldehyde buildup and also has a protective effect against alcoholism. In addition to affecting drinking behaviors and risk for alcoholism, ADH and ALDH alleles impact the risk for esophageal cancer.", "title": "" }, { "docid": "358a8ab77d93a06fc43c878c1e79d2a7", "text": "Learning-based hashing is a leading approach of approximate nearest neighbor search for large-scale image retrieval. In this paper, we develop a deep supervised hashing method for multi-label image retrieval, in which we propose to learn a binary “mask” map that can identify the approximate locations of objects in an image, so that we use this binary “mask” map to obtain length-limited hash codes which mainly focus on an image’s objects but ignore the background. The proposed deep architecture consists of four parts: 1) a convolutional sub-network to generate effective image features; 2) a binary “mask” sub-network to identify image objects’ approximate locations; 3) a weighted average pooling operation based on the binary “mask” to obtain feature representations and hash codes that pay most attention to foreground objects but ignore the background; and 4) the combination of a triplet ranking loss designed to preserve relative similarities among images and a cross entropy loss defined on image labels. 
We conduct comprehensive evaluations on four multi-label image data sets. The results indicate that the proposed hashing method achieves superior performance gains over the state-of-the-art supervised or unsupervised hashing baselines.", "title": "" }, { "docid": "6b0f5ddb7be84cf9043b23f1141699b4", "text": "We show that even when face images are unconstrained and arbitrarily paired, face swapping between them is quite simple. To this end, we make the following contributions. (a) Instead of tailoring systems for face segmentation, as others previously proposed, we show that a standard fully convolutional network (FCN) can achieve remarkably fast and accurate segmentations, provided that it is trained on a rich enough example set. For this purpose, we describe novel data collection and generation routines which provide challenging segmented face examples. (b) We use our segmentations for robust face swapping under unprecedented conditions. (c) Unlike previous work, our swapping is robust enough to allow for extensive quantitative tests. To this end, we use the Labeled Faces in the Wild (LFW) benchmark and measure the effect of intra- and inter-subject face swapping on recognition. We show that our intra-subject swapped faces remain as recognizable as their sources, testifying to the effectiveness of our method. In line with established perceptual studies, we show that better face swapping produces less recognizable inter-subject results. This is the first time this effect was quantitatively demonstrated by machine vision systems.", "title": "" }, { "docid": "2e389715d9beb1bc7c9ab06131abc67a", "text": "Digital forensic science is very much still in its infancy, but is becoming increasingly invaluable to investigators. A popular area for research is seeking a standard methodology to make the digital forensic process accurate, robust, and efficient. The first digital forensic process model proposed contains four steps: Acquisition, Identification, Evaluation and Admission. Since then, numerous process models have been proposed to explain the steps of identifying, acquiring, analysing, storage, and reporting on the evidence obtained from various digital devices. In recent years, an increasing number of more sophisticated process models have been proposed. These models attempt to speed up the entire investigative process or solve various of problems commonly encountered in the forensic investigation. In the last decade, cloud computing has emerged as a disruptive technological concept, and most leading enterprises such as IBM, Amazon, Google, and Microsoft have set up their own cloud-based services. In the field of digital forensic investigation, moving to a cloud-based evidence processing model would be extremely beneficial and preliminary attempts have been made in its implementation. Moving towards a Digital Forensics as a Service model would not only expedite the investigative process, but can also result in significant cost savings – freeing up digital forensic experts and law enforcement personnel to progress their caseload. 
This paper aims to evaluate the applicability of existing digital forensic process models and analyse how each of these might apply to a cloudbased evidence processing paradigm.", "title": "" }, { "docid": "89013222fccc85c1321020153b8a416b", "text": "The objective of this paper is to summarize the work that has been developed by the authors for the last several years, in order to demonstrate that the Theory of Characteristic Modes can be used to perform a systematic design of different types of antennas. Characteristic modes are real current modes that can be computed numerically for conducting bodies of arbitrary shape. Since characteristic modes form a set of orthogonal functions, they can be used to expand the total current on the surface of the body. However, this paper shows that what makes characteristic modes really attractive for antenna design is the physical insight they bring into the radiating phenomena taking place in the antenna. The resonance frequency of modes, as well as their radiating behavior, can be determined from the information provided by the eigenvalues associated with the characteristic modes. Moreover, by studying the current distribution of modes, an optimum feeding arrangement can be found in order to obtain the desired radiating behavior.", "title": "" }, { "docid": "7ef0d4d81d601c9affd98ae06376e1d8", "text": "There are several studies that suggest that different people deposit different quantities of their own DNA on items they touch, i.e. some are good shedders and others are bad shedders. It is of interest to determine if individuals deposit consistent quantities of their own DNA, no matter the occasion, as well as the degree of variability among individuals. To investigate this, participants were tested for their ability to deposit DNA by placing right and left handprints on separate DNA-free glass plates at three set times during the day (morning, midday and afternoon) on four different days spaced over several weeks. Information regarding recent activities performed by the individual was recorded, along with information on gender, hand dominance and hand size. A total of 240 handprint deposits were collected from 10 individuals and analyzed for differences in DNA quantity and the type of the DNA profile obtained at different times of the day, on different days, between the two hands of the same individual, and between different individuals. Furthermore, the correlation between the deposit quantity and the ratio of self to non-self DNA in the mixed deposits was analyzed to determine if the amount of non-self DNA has an effect on overall DNA quantities obtained. In general, this study has shown that while there is substantial variation in the quantities deposited by individuals on different occasions, some clear trends were evident with some individuals consistently depositing significantly more or less DNA than others. Non-self DNA was usually deposited along with self DNA and, in most instances, was the minor component. Incidents where the non-self portion was the major component were very rare and, when observed, were associated with a poor depositor/shedder. 
Forensic DNA scientists need to consider the range and variability of DNA a person deposits when touching an object, the likelihood of non-self DNA being co-deposited onto the handled object of interest and the factors that may affect the relative quantity of this component within the deposit.", "title": "" }, { "docid": "c7c63f08639660f935744309350ab1e0", "text": "A composite of graphene oxide supported by needle-like MnO(2) nanocrystals (GO-MnO(2) nanocomposites) has been fabricated through a simple soft chemical route in a water-isopropyl alcohol system. The formation mechanism of these intriguing nanocomposites investigated by transmission electron microscopy and Raman and ultraviolet-visible absorption spectroscopy is proposed as intercalation and adsorption of manganese ions onto the GO sheets, followed by the nucleation and growth of the crystal species in a double solvent system via dissolution-crystallization and oriented attachment mechanisms, which in turn results in the exfoliation of GO sheets. Interestingly, it was found that the electrochemical performance of as-prepared nanocomposites could be enhanced by the chemical interaction between GO and MnO(2). This method provides a facile and straightforward approach to deposit MnO(2) nanoparticles onto the graphene oxide sheets (single layer of graphite oxide) and may be readily extended to the preparation of other classes of hybrids based on GO sheets for technological applications.", "title": "" }, { "docid": "5179662c841302180848dc566a114f10", "text": "Hyperspectral image (HSI) unmixing has attracted increasing research interests in recent decades. The major difficulty of it lies in that the endmembers and the associated abundances need to be separated from highly mixed observation data with few a priori information. Recently, sparsity-constrained nonnegative matrix factorization (NMF) algorithms have been proved effective for hyperspectral unmixing (HU) since they can sufficiently utilize the sparsity property of HSIs. In order to improve the performance of NMF-based unmixing approaches, spectral and spatial constrains have been added into the unmixing model, but spectral-spatial joint structure is required to be more accurately estimated. To exploit the property that similar pixels within a small spatial neighborhood have higher possibility to share similar abundances, hypergraph structure is employed to capture the similarity relationship among the spatial nearby pixels. In the construction of a hypergraph, each pixel is taken as a vertex of the hypergraph, and each vertex with its k nearest spatial neighboring pixels form a hyperedge. Using the hypergraph, the pixels with similar abundances can be accurately found, which enables the unmixing algorithm to obtain promising results. Experiments on synthetic data and real HSIs are conducted to investigate the performance of the proposed algorithm. The superiority of the proposed algorithm is demonstrated by comparing it with some state-of-the-art methods.", "title": "" }, { "docid": "49b0ba019f6f968804608aeacec2a959", "text": "In this paper, we introduce a novel problem of audio-visual event localization in unconstrained videos. We define an audio-visual event as an event that is both visible and audible in a video segment. We collect an Audio-Visual Event (AVE) dataset to systemically investigate three temporal localization tasks: supervised and weakly-supervised audio-visual event localization, and cross-modality localization. 
We develop an audio-guided visual attention mechanism to explore audio-visual correlations, propose a dual multimodal residual network (DMRN) to fuse information over the two modalities, and introduce an audio-visual distance learning network to handle the cross-modality localization. Our experiments support the following findings: joint modeling of auditory and visual modalities outperforms independent modeling, the learned attention can capture semantics of sounding objects, temporal alignment is important for audio-visual fusion, the proposed DMRN is effective in fusing audio-visual features, and strong correlations between the two modalities enable cross-modality localization.", "title": "" }, { "docid": "64c156ee4171b5b84fd4eedb1d922f55", "text": "We introduce a large computational subcategorization lexicon which includes subcategorization frame (SCF) and frequency information for 6,397 English verbs. This extensive lexicon was acquired automatically from five corpora and the Web using the current version of the comprehensive subcategorization acquisition system of Briscoe and Carroll (1997). The lexicon is provided freely for research use, along with a script which can be used to filter and build sub-lexicons suited for different natural language processing (NLP) purposes. Documentation is also provided which explains each sub-lexicon option and evaluates its accuracy.", "title": "" }, { "docid": "1b3efa626d1e2221051477c587572230", "text": "In diesem Bericht wird die neue Implementation von Threads unter Linux behandelt. Die bis jetzt noch eingesetzte Implementation ist veraltet und basiert auf nicht mehr aktuellen Voraussetzungen. Es ist wichtig zuerst die fundamentalen Kenntnisse über ThreadImplementationen zu erhalten und die Probleme der aktuellen Implementation zu erkennen, um die nötigen Änderungen zu sehen. Florian Dürrbaum 14.12.2003 2 FH Aargau Enterprise Computing", "title": "" }, { "docid": "6ee2ee4a1cff7b1ddb8e5e1e2faf3aa5", "text": "An array of four uniform half-width microstrip leaky-wave antennas (MLWAs) was designed and tested to obtain maximum radiation in the boresight direction. To achieve this, uniform MLWAs are placed at 90 ° and fed by a single probe at the center. Four beams from four individual branches combine to form the resultant directive beam. The measured matched bandwidth of the array is 300 MHz (3.8-4.1 GHz). Its beam toward boresight occurs over a relatively wide 6.4% (3.8-4.05 GHz) band. The peak measured boresight gain of the array is 10.1 dBi, and its variation within the 250-MHz boresight radiation band is only 1.7 dB.", "title": "" }, { "docid": "06bba1f9f57b7b452af47321ac8fa358", "text": "Little is known about the genetic changes that distinguish domestic cat populations from their wild progenitors. Here we describe a high-quality domestic cat reference genome assembly and comparative inferences made with other cat breeds, wildcats, and other mammals. Based upon these comparisons, we identified positively selected genes enriched for genes involved in lipid metabolism that underpin adaptations to a hypercarnivorous diet. We also found positive selection signals within genes underlying sensory processes, especially those affecting vision and hearing in the carnivore lineage. We observed an evolutionary tradeoff between functional olfactory and vomeronasal receptor gene repertoires in the cat and dog genomes, with an expansion of the feline chemosensory system for detecting pheromones at the expense of odorant detection. 
Genomic regions harboring signatures of natural selection that distinguish domestic cats from their wild congeners are enriched in neural crest-related genes associated with behavior and reward in mouse models, as predicted by the domestication syndrome hypothesis. Our description of a previously unidentified allele for the gloving pigmentation pattern found in the Birman breed supports the hypothesis that cat breeds experienced strong selection on specific mutations drawn from random bred populations. Collectively, these findings provide insight into how the process of domestication altered the ancestral wildcat genome and build a resource for future disease mapping and phylogenomic studies across all members of the Felidae.", "title": "" }, { "docid": "edf41dbd01d4060982c2c75469bbac6b", "text": "In this paper, we develop a design method for inclined and displaced (compound) slotted waveguide array antennas. The characteristics of a compound slot element and the design results by using an equivalent circuit are shown. The effectiveness of the designed antennas is verified through experiments.", "title": "" }, { "docid": "c9b6f91a7b69890db88b929140f674ec", "text": "Pedestrian detection is a key problem in computer vision, with several applications that have the potential to positively impact quality of life. In recent years, the number of approaches to detecting pedestrians in monocular images has grown steadily. However, multiple data sets and widely varying evaluation protocols are used, making direct comparisons difficult. To address these shortcomings, we perform an extensive evaluation of the state of the art in a unified framework. We make three primary contributions: 1) We put together a large, well-annotated, and realistic monocular pedestrian detection data set and study the statistics of the size, position, and occlusion patterns of pedestrians in urban scenes, 2) we propose a refined per-frame evaluation methodology that allows us to carry out probing and informative comparisons, including measuring performance in relation to scale and occlusion, and 3) we evaluate the performance of sixteen pretrained state-of-the-art detectors across six data sets. Our study allows us to assess the state of the art and provides a framework for gauging future efforts. Our experiments show that despite significant progress, performance still has much room for improvement. In particular, detection is disappointing at low resolutions and for partially occluded pedestrians.", "title": "" }, { "docid": "b29947243b1ad21b0529a6dd8ef3c529", "text": "We define a multiresolution spline technique for combining two or more images into a larger image mosaic. In this procedure, the images to be splined are first decomposed into a set of band-pass filtered component images. Next, the component images in each spatial frequency hand are assembled into a corresponding bandpass mosaic. In this step, component images are joined using a weighted average within a transition zone which is proportional in size to the wave lengths represented in the band. Finally, these band-pass mosaic images are summed to obtain the desired image mosaic. In this way, the spline is matched to the scale of features within the images themselves. 
When coarse features occur near borders, these are blended gradually over a relatively large distance without blurring or otherwise degrading finer image details in the neighborhood of th e border.", "title": "" }, { "docid": "8fd1e9e07c8dcd2e23e5be107a63ca5d", "text": "We describe how to embed a simple typed functional logic programming language in Haskell. The embedding is a natural extension of the Prolog embedding by Seres and Spivey [16]. To get full static typing we need to use the Haskell extensions of quantified types and the ST-monad.", "title": "" }, { "docid": "17813a603f0c56c95c96f5b2e0229026", "text": "Geographic ranges are estimated for brachiopod and bivalve species during the late Middle (mid-Givetian) to the middle Late (terminal Frasnian) Devonian to investigate range changes during the time leading up to and including the Late Devonian biodiversity crisis. Species ranges were predicted using GARP (Genetic Algorithm using Rule-set Prediction), a modeling program developed to predict fundamental niches of modern species. This method was applied to fossil species to examine changing ranges during a critical period of Earth’s history. Comparisons of GARP species distribution predictions with historical understanding of species occurrences indicate that GARP models predict accurately the presence of common species in some depositional settings. In addition, comparison of GARP distribution predictions with species-range reconstructions from geographic information systems (GIS) analysis suggests that GARP modeling has the potential to predict species ranges more completely and tailor ranges more specifically to environmental parameters than GIS methods alone. Thus, GARP modeling is a potentially useful tool for predicting fossil species ranges and can be used to address a wide array of palaeontological problems. The use of GARP models allows a statistical examination of the relationship of geographic range size with species survival during the Late Devonian. Large geographic range was statistically associated with species survivorship across the crisis interval for species examined in the linguiformis Zone but not for species modeled in the preceding Lower varcus or punctata zones. The enhanced survival benefit of having a large geographic range, therefore, appears to be restricted to the biodiversity crisis interval.", "title": "" }, { "docid": "9af78a1d6b47ac72aae4f1e7b47b197b", "text": "Communication about requirements is often handled in issue tracking systems, especially in a distributed setting. As issue tracking systems also contain bug reports or programming tasks, the software feature requests of the users are often difficult to identify. This paper investigates natural language processing and machine learning features to detect software feature requests in natural language data of issue tracking systems. It compares traditional linguistic machine learning features, such as \"bag of words\", with more advanced features, such as subject-action-object, and evaluates combinations of machine learning features derived from the natural language and features taken from the issue tracking system meta-data. Our investigation shows that some combinations of machine learning features derived from natural language and the issue tracking system meta-data outperform traditional approaches. We show that issues or data fields (e.g. descriptions or comments), which contain software feature requests, can be identified reasonably well, but hardly the exact sentence. 
Finally, we show that the choice of machine learning algorithms should depend on the goal, e.g. maximization of the detection rate or balance between detection rate and precision. In addition, the paper contributes a double coded gold standard and an open-source implementation to further pursue this topic.", "title": "" } ]
scidocsrr
85f29d0e7177cf5557b50e7b64d80510
Decentralized Cloud-SDN Architecture in Smart Grid: A Dynamic Pricing Model
[ { "docid": "adec3b3578d56cefed73fd74d270ca22", "text": "In the framework of liberalized electricity markets, distributed generation and controllable demand have the opportunity to participate in the real-time operation of transmission and distribution networks. This may be done by using the virtual power plant (VPP) concept, which consists of aggregating the capacity of many distributed energy resources (DER) in order to make them more accessible and manageable across energy markets. This paper provides an optimization algorithm to manage a VPP composed of a large number of customers with thermostatically controlled appliances. The algorithm, based on a direct load control (DLC), determines the optimal control schedules that an aggregator should apply to the controllable devices of the VPP in order to optimize load reduction over a specified control period. The results define the load reduction bid that the aggregator can present in the electricity market, thus helping to minimize network congestion and deviations between generation and demand. The proposed model, which is valid for both transmission and distribution networks, is tested on a real power system to demonstrate its applicability.", "title": "" }, { "docid": "c2606da8495680b58898c4145365888e", "text": "This paper proposes a distributed framework for demand response and user adaptation in smart grid networks. In particular, we borrow the concept of congestion pricing in Internet traffic control and show that pricing information is very useful to regulate user demand and hence balance network load. User preference is modeled as a willingness to pay parameter which can be seen as an indicator of differential quality of service. Both analysis and simulation results are presented to demonstrate the dynamics and convergence behavior of the algorithm. Based on this algorithm, we then propose a novel charging method for plug-in hybrid electric vehicles (PHEVs) in a smart grid, where users or PHEVs can adapt their charging rates according to their preferences. Simulation results are presented to demonstrate the dynamic behavior of the charging algorithm and impact of different parameters on system performance.", "title": "" } ]
[ { "docid": "9dd83eb5760e8dbf6f3bd918eb73c79f", "text": "Pontine tegmental cap dysplasia (PTCD) is a recently described hindbrain malformation characterized by pontine hypoplasia and ectopic dorsal transverse pontine fibers (1). To date, a total of 19 cases of PTCD have been published, all patients had sensorineural hearing loss (SNHL). We contribute 1 additional case of PTCD with SNHL with and VIIIth cranial nerve and temporal bone abnormalities using dedicated magnetic resonance (MR) and high-resolution temporal bone computed tomographic (CT) images.", "title": "" }, { "docid": "2c2e0f5ddfb2e1d5121a9a58e2ee870d", "text": "Emotional events often attain a privileged status in memory. Cognitive neuroscientists have begun to elucidate the psychological and neural mechanisms underlying emotional retention advantages in the human brain. The amygdala is a brain structure that directly mediates aspects of emotional learning and facilitates memory operations in other regions, including the hippocampus and prefrontal cortex. Emotion–memory interactions occur at various stages of information processing, from the initial encoding and consolidation of memory traces to their long-term retrieval. Recent advances are revealing new insights into the reactivation of latent emotional associations and the recollection of personal episodes from the remote past.", "title": "" }, { "docid": "11a28e11ba6e7352713b8ee63291cd9c", "text": "This review focuses on discussing the main changes on the upcoming fourth edition of the WHO Classification of Tumors of the Pituitary Gland emphasizing histopathological and molecular genetics aspects of pituitary neuroendocrine (i.e., pituitary adenomas) and some of the non-neuroendocrine tumors involving the pituitary gland. Instead of a formal review, we introduced the highlights of the new WHO classification by answering select questions relevant to practising pathologists. The revised classification of pituitary adenomas, in addition to hormone immunohistochemistry, recognizes the role of other immunohistochemical markers including but not limited to pituitary transcription factors. Recognizing this novel approach, the fourth edition of the WHO classification has abandoned the concept of \"a hormone-producing pituitary adenoma\" and adopted a pituitary adenohypophyseal cell lineage designation of the adenomas with subsequent categorization of histological variants according to hormone content and specific histological and immunohistochemical features. This new classification does not require a routine ultrastructural examination of these tumors. The new definition of the Null cell adenoma requires the demonstration of immunonegativity for pituitary transcription factors and adenohypophyseal hormones Moreover, the term of atypical pituitary adenoma is no longer recommended. In addition to the accurate tumor subtyping, assessment of the tumor proliferative potential by mitotic count and Ki-67 index, and other clinical parameters such as tumor invasion, is strongly recommended in individual cases for consideration of clinically aggressive adenomas. This classification also recognizes some subtypes of pituitary neuroendocrine tumors as \"high-risk pituitary adenomas\" due to the clinical aggressive behavior; these include the sparsely granulated somatotroph adenoma, the lactotroph adenoma in men, the Crooke's cell adenoma, the silent corticotroph adenoma, and the newly introduced plurihormonal Pit-1-positive adenoma (previously known as silent subtype III pituitary adenoma). 
An additional novel aspect of the new WHO classification was also the definition of the spectrum of thyroid transcription factor-1 expressing pituitary tumors of the posterior lobe as representing a morphological spectrum of a single nosological entity. These tumors include the pituicytoma, the spindle cell oncocytoma, the granular cell tumor of the neurohypophysis, and the sellar ependymoma.", "title": "" }, { "docid": "a43698feab07ba6e1ea917843cc4129a", "text": "The nation's critical infrastructures, such as those found in Supervisory Control and Data Acquisition (SCADA) and industrial control systems (ICS), are increasingly at risk and vulnerable to internal and external threats. Security best practices on these systems come at a very opportune time. Further, the value of risk assessment of these systems is something that cannot just be relegated as irrelevant. In this paper, we present a review of security best practices and risk assessment of SCADA and ICS and report our research findings on an on-going risk modeling of a prototypical industrial control system using the CORAS framework tool.", "title": "" }, { "docid": "abcbd831178e1bc5419da8274dc17bbf", "text": "Most state-of-the-art statistical machine translation systems use log-linear models, which are defined in terms of hypothesis features and weights for those features. It is standard to tune the feature weights in order to maximize a translation quality metric, using heldout test sentences and their corresponding reference translations. However, obtaining reference translations is expensive. In our earlier work (Madnani et al., 2007), we introduced a new full-sentence paraphrase technique, based on English-to-English decoding with an MT system, and demonstrated that the resulting paraphrases can be used to cut the number of human reference translations needed in half. In this paper, we take the idea a step further, asking how far it is possible to get with just a single good reference translation for each item in the development set. Our analysis suggests that it is necessary to invest in four or more human translations in order to significantly improve on a single translation augmented by monolingual paraphrases.", "title": "" }, { "docid": "1503fae33ae8609a2193e978218d1543", "text": "The construct of resilience has captured the imagination of researchers across various disciplines over the last five decades (Ungar, 2008a). Despite a growing body of research in the area of resilience, there is little consensus among researchers about the definition and meaning of this concept. Resilience has been used to describe eight kinds of phenomena across different disciplines. These eight phenomena can be divided into two clusters based on the disciplinary origin. The first cluster mainly involves definitions of resilience derived from the discipline of psychology and covers six themes including (i) personality traits, (ii) positive outcomes/forms of adaptation despite high-risk, (iii) factors associated with positive adaptation, (iv) processes, (v) sustained competent functioning/stress resistance, and (vi) recovery from trauma or adversity. The second cluster of definitions is rooted in the discipline of sociology and encompasses two themes including (i) human agency and resistance, and (ii) survival. This paper discusses the inconsistencies in the varied definitions used within the published literature and describes the differing conceptualizations of resilience as well as their limitations. 
The paper concludes by offering a unifying conceptualization of resilience and by discussing implications for future research on resilience.", "title": "" }, { "docid": "6fe9aaaa0033d3322e989588df3105fe", "text": "Set-valued data, in which a set of values are associated with an individual, is common in databases ranging from market basket data, to medical databases of patients’ symptoms and behaviors, to query engine search logs. Anonymizing this data is important if we are to reconcile the conflicting demands arising from the desire to release the data for study and the desire to protect the privacy of individuals represented in the data. Unfortunately, the bulk of existing anonymization techniques, which were developed for scenarios in which each individual is associated with only one sensitive value, are not well-suited for set-valued data. In this paper we propose a top-down, partition-based approach to anonymizing set-valued data that scales linearly with the input size and scores well on an information-loss data quality metric. We further note that our technique can be applied to anonymize the infamous AOL query logs, and discuss the merits and challenges in anonymizing query logs using our approach.", "title": "" }, { "docid": "f1fe8a9d2e4886f040b494d76bc4bb78", "text": "The benefits of enhanced condition monitoring in the asset management of the electricity transmission infrastructure are increasingly being exploited by the grid operators. Adding more sensors helps to track the plant health more accurately. However, the installation or operating costs of any additional sensors could outweigh the benefits they bring due to the requirement for new cabling or battery maintenance. Energy harvesting devices are therefore being proposed to power a new generation of wireless sensors. The harvesting devices could enable the sensors to be maintenance free over their lifetime and substantially reduce the cost of installing and operating a condition monitoring system.", "title": "" }, { "docid": "f86b96306f56150679eaa65330a2eb0e", "text": "DEFINITION Visual analytics is the science of analytical reasoning supported by interactive visual interfaces according to [6]. Over the last decades data was produced at an incredible rate. However, the ability to collect and store this data is increasing at a faster rate than the ability to analyze it. While purely automatic or purely visual analysis methods were developed in the last decades, the complex nature of many problems makes it indispensable to include humans at an early stage in the data analysis process. Visual analytics methods allow decision makers to combine their flexibility, creativity, and background knowledge with the enormous storage and processing capacities of today’s computers to gain insight into complex problems. The goal of visual analytics research is thus to turn the information overload into an opportunity by enabling decision-makers to examine this massive information stream to take effective actions in real-time situations.", "title": "" }, { "docid": "8c68b4a0f02b0764fc2d69a65341a4a7", "text": "This paper presents a miniature DC-70 GHz single-pole four-throw (SP4T) built in a low-cost 0.13-µm CMOS process. The switch is based on a series-shunt design with input and output matching circuits. Deep n-well (also called triple-well) CMOS transistors are used to minimize the substrate coupling. Also, deep trench isolation is used between the different ports to minimize the port-to-port coupling. 
The SP4T results in a measured insertion loss of less than 3.5 dB up to 67 GHz with an isolation of greater than 25 dB. The measured port-to-port coupling is less than 28 dB up to 67 GHz. The measured P1dB and IIP3 are independent of frequency and are 9–10 dBm and 20–21 dBm, respectively. The active chip area is 0.24×0.23 mm2. To our knowledge, this work represents the widest bandwidth SP4T switch in any CMOS technology to-date.", "title": "" }, { "docid": "766bc5cee369a729dc310c7134edc36e", "text": "Spatial multiple access holds the promise to boost the capacity of wireless networks when an access point has multiple antennas. Due to the asynchronous and uncontrolled nature of wireless LANs, conventional MIMO technology does not work efficiently when concurrent transmissions from multiple stations are uncoordinated. In this paper, we present the design and implementation of a crosslayer system, called SAM, that addresses the challenges of enabling spatial multiple access for multiple devices in a random access network like WLAN. SAM uses a chain-decoding technique to reliably recover the channel parameters for each device, and iteratively decode concurrent frames with misaligned symbol timings and frequency offsets. We propose a new MAC protocol, called CCMA, to enable concurrent transmissions by different mobile stations while remaining backward compatible with 802.11. Finally, we implement the PHY and MAC layer of SAM using the Sora high-performance software radio platform. Our evaluation results under real wireless conditions show that SAM can improve network uplink throughput by 70% with two antennas over 802.11.", "title": "" }, { "docid": "b24a0f878f50d5b92d268e183fe62dde", "text": "Management is the process of setting and achieving organizational goals through its functions: forecasting, organization, coordination, training and monitoring-evaluation.Leadership is: the ability to influence, to make others follow you, the ability to guide, the human side of business for \"teacher\". Interest in leadership increased during the early part of the twentieth century. Early leadership theories focused on what qualities distinguished between leaders and followers, while subsequent theories looked at other variables such as situational factors and skill levels. Other considerations emphasize aspects that separate management of leadership, calling them two completely different processes.The words manager and lider are very often used to designate the same person who leads, however, they represent different realities and the main difference arises form the way in which people around are motivated. The difference between being a manager and being a leader is simple. Management is a career. Leadership is a calling. A leader is someone who people naturally follow through their own choice, whereas a manager must be obeyed. A manager may only have obtained his position of authority through time and loyalty given to the company, not as a result of his leadership qualities. A leader may have no organisational skills, but his vision unites people behind him. Leadership and management are two notions that are often used interchangeably. However, these words actually describe two different concepts. Leadership is the main component of change, providing vision, and dedication necessary for its realization. Leadership is a skill that is formed by education, experiences, interaction with people and inspiring, of course, practice. 
Effective leadership depends largely on how their leaders define, follow and share the vision to followers. Leadership is just one important component of the directing function. A manager cannot just be a leader, he also needs formal authority to be effective.", "title": "" }, { "docid": "b68a728f4e737f293dca0901970b41fe", "text": "With maturity of advanced technologies and urgent requirement for maintaining a healthy environment with reasonable price, China is moving toward a trend of generating electricity from renewable wind resources. How to select a suitable wind farm becomes an important focus for stakeholders. This paper first briefly introduces wind farm and then develops its critical success criteria. A new multi-criteria decision-making (MCDM) model, based on the analytic hierarchy process (AHP) associated with benefits, opportunities, costs and risks (BOCR), is proposed to help select a suitable wind farm project. Multiple factors that affect the success of wind farm operations are analyzed by taking into account experts’ opinions, and a performance ranking of the wind farms is generated. 2008 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "c5b39921ebebb8bbb20fdef471e9d275", "text": "One popular justification for punishment is the just deserts rationale: A person deserves punishment proportionate to the moral wrong committed. A competing justification is the deterrence rationale: Punishing an offender reduces the frequency and likelihood of future offenses. The authors examined the motivation underlying laypeople's use of punishment for prototypical wrongs. Study 1 (N = 336) revealed high sensitivity to factors uniquely associated with the just deserts perspective (e.g., offense seriousness, moral trespass) and insensitivity to factors associated with deterrence (e.g., likelihood of detection, offense frequency). Study 2 (N = 329) confirmed the proposed model through structural equation modeling (SEM). Study 3 (N = 351) revealed that despite strongly stated preferences for deterrence theory, individual sentencing decisions seemed driven exclusively by just deserts concerns.", "title": "" }, { "docid": "b34db00c8a84eab1c7b1a6458fc6cd97", "text": "The use of hand gestures provides an attractive alternative to cumbersome interface devices for human-computer interaction (HCI). In particular, visual interpretation of hand gestures can help in achieving the ease and naturalness desired for HCI. This has motivated a very active research area concerned with computer vision-based analysis and interpretation of hand gestures. We survey the literature on visual interpretation of hand gestures in the context of its role in HCI. This discussion is organized on the basis of the method used for modeling, analyzing, and recognizing gestures. Important differences in the gesture interpretation approaches arise depending on whether a 3D model of the human hand or an image appearance model of the human hand is used. 3D hand models offer a way of more elaborate modeling of hand gestures but lead to computational hurdles that have not been overcome given the real-time requirements of HCI. Appearance-based models lead to computationally efficient “purposive” approaches that work well under constrained situations but seem to lack the generality desirable for HCI. We also discuss implemented gestural systems as well as other potential applications of vision-based gesture recognition. 
Although the current progress is encouraging, further theoretical as well as computational advances are needed before gestures can be widely used for HCI. We discuss directions of future research in gesture recognition, including its integration with other natural modes of humancomputer interaction. Index Terms —Vision-based gesture recognition, gesture analysis, hand tracking, nonrigid motion analysis, human-computer", "title": "" }, { "docid": "86e5c9defae0135db8466df0bdbe5aef", "text": "Autonomous Underwater Vehicles (AUVs) are robots able to perform tasks without human intervention (remote operators). Research and development of this class of vehicles has growing, due to the excellent characteristics of the AUVs to operate in different situations. Therefore, this study aims to analyze turbulent single fluid flow over different geometric configurations of an AUV hull, in order to obtain test geometry that generates lower drag force, which reduces the energy consumption of the vehicle, thereby increasing their autonomy during operation. In the numerical analysis was used ANSYS-CFX® 11.0 software, which is a powerful tool for solving problems involving fluid mechanics. Results of the velocity (vectors and streamlines), pressure distribution and drag coefficient are showed and analyzed. Optimum hull geometry was found. Lastly, a relationship between the geometric parameters analyzed and the drag coefficient was obtained.", "title": "" }, { "docid": "9d1c0462c27516974a2b4e520916201e", "text": "The current method of grading prostate cancer on histology uses the Gleason system, which describes five increasingly malignant stages of cancer according to qualitative analysis of tissue architecture. The Gleason grading system has been shown to suffer from inter- and intra-observer variability. In this paper we present a new method for automated and quantitative grading of prostate biopsy specimens. A total of 102 graph-based, morphological, and textural features are extracted from each tissue patch in order to quantify the arrangement of nuclei and glandular structures within digitized images of histological prostate tissue specimens. A support vector machine (SVM) is used to classify the digitized histology slides into one of four different tissue classes: benign epithelium, benign stroma, Gleason grade 3 adenocarcinoma, and Gleason grade 4 adenocarcinoma. The SVM classifier was able to distinguish between all four types of tissue patterns, achieving an accuracy of 92.8% when distinguishing between Gleason grade 3 and stroma, 92.4% between epithelium and stroma, and 76.9% between Gleason grades 3 and 4. Both textural and graph-based features were found to be important in discriminating between different tissue classes. This work suggests that the current Gleason grading scheme can be improved by utilizing quantitative image analysis to aid pathologists in producing an accurate and reproducible diagnosis", "title": "" }, { "docid": "12b205881ead4d31ae668d52f4ba52c7", "text": "The general theory of side-looking synthetic aperture radar systems is developed. A simple circuit-theory model is developed; the geometry of the system determines the nature of the prefilter and the receiver (or processor) is the postfilter. The complex distributed reflectivity density appears as the input, and receiver noise is first considered as the interference which limits performance. 
Analysis and optimization are carried out for three performance criteria (resolution, signal-to-noise ratio, and least squares estimation of the target field). The optimum synthetic aperture length is derived in terms of the noise level and average transmitted power. Range-Doppler ambiguity limitations and optical processing are discussed briefly. The synthetic aperture concept for rotating target fields is described. It is observed that, for a physical aperture, a side-looking radar, and a rotating target field, the azimuth resolution is λ/α where α is the change in aspect angle over which the target field is viewed, The effects of phase errors on azimuth resolution are derived in terms of the power density spectrum of the derivative of the phase errors and the performance in the absence of phase errors.", "title": "" }, { "docid": "c433b602177782e814848a26c711361a", "text": "Running is a complex dynamical task which places strict design requirements on both the physical components and software control systems of a robot. This paper explores some of those requirements and illustrates how a variable compliance actuation system can satisfy them. We present the design, analysis, simulation, and benchtop experimental validation of such an actuator system. We demonstrate, through simulation, the application of our prototype actuator to the problem of biped running.", "title": "" }, { "docid": "30e798ef3668df14f1625d40c53011a0", "text": "Classification with big data has become one of the latest trends when talking about learning from the available information. The data growth in the last years has rocketed the interest in effectively acquiring knowledge to analyze and predict trends. The variety and veracity that are related to big data introduce a degree of uncertainty that has to be handled in addition to the volume and velocity requirements. This data usually also presents what is known as the problem of classification with imbalanced datasets, a class distribution where the most important concepts to be learned are presented by a negligible number of examples in relation to the number of examples from the other classes. In order to adequately deal with imbalanced big data we propose the Chi-FRBCS-BigDataCS algorithm, a fuzzy rule based classification system that is able to deal with the uncertainly that is introduced in large volumes of data without disregarding the learning in the underrepresented class. The method uses the MapReduce framework to distribute the computational operations of the fuzzy model while it includes cost-sensitive learning techniques in its design to address the imbalance that is present in the data. The good performance of this approach is supported by the experimental analysis that is carried out over twenty-four imbalanced big data cases of study. The results obtained show that the proposal is able to handle these problems obtaining competitive results both in the classification performance of the model and the time needed for the computation. © 2014 Elsevier B.V. All rights reserved.", "title": "" } ]
scidocsrr
fd400de0b8281c9be69308b354587eb4
Secure Automotive On-Board Protocols: A Case of Over-the-Air Firmware Updates
[ { "docid": "a10a51d1070396e1e8a8b186af18f87d", "text": "An upcoming trend for automobile manufacturers is to provide firmware updates over the air (FOTA) as a service. Since the firmware controls the functionality of a vehicle, security is important. To this end, several secure FOTA protocols have been developed. However, the secure FOTA protocols only solve the security for the transmission of the firmware binary. Once the firmware is downloaded, an attacker could potentially modify its contents before it is flashed to the corresponding ECU'S ROM. Thus, there is a need to extend the flashing procedure to also verify that the correct firmware has been flashed to the ECU. We present a framework for self-verification of firmware updates over the air. We include a verification code in the transmission to the vehicle, and after the firmware has been flashed, the integrity of the memory contents can be verified using the verification code. The verification procedure entails only simple hash functions and is thus suitable for the limited resources in the vehicle. Virtualization techniques are employed to establish a trusted computing base in the ECU, which is then used to perform the verification. The proposed framework allows the ECU itself to perform self-verification and can thus ensure the successful flashing of the firmware.", "title": "" }, { "docid": "8d041241f1a587b234c8784dea9088a4", "text": "Modern intelligent vehicles have electronic control units containing firmware that enables various functions in the vehicle. New firmware versions are constantly developed to remove bugs and improve functionality. Automobile manufacturers have traditionally performed firmware updates over cables but in the near future they are aiming at conducting firmware updates over the air, which would allow faster updates and improved safety for the driver. In this paper, we present a protocol for secure firmware updates over the air. The protocol provides data integrity, data authentication, data confidentiality, and freshness. In our protocol, a hash chain is created of the firmware, and the first packet is signed by a trusted source, thus authenticating the whole chain. Moreover, the packets are encrypted using symmetric keys. We discuss the practical considerations that exist for implementing our protocol and show that the protocol is computationally efficient, has low memory overhead, and is suitable for wireless communication. Therefore, it is well suited to the limited hardware resources in the wireless vehicle environment.", "title": "" } ]
[ { "docid": "910fdcf9e9af05b5d1cb70a9c88e4143", "text": "We propose NEURAL ENQUIRER — a neural network architecture for answering natural language (NL) questions given a knowledge base (KB) table. Unlike previous work on end-to-end training of semantic parsers, NEURAL ENQUIRER is fully “neuralized”: it gives distributed representations of queries and KB tables, and executes queries through a series of differentiable operations. The model can be trained with gradient descent using both endto-end and step-by-step supervision. During training the representations of queries and the KB table are jointly optimized with the query execution logic. Our experiments show that the model can learn to execute complex NL queries on KB tables with rich structures.", "title": "" }, { "docid": "281f23c51d3ba27e09e3109c8578c385", "text": "Generative Adversarial Networks (GANs) are an incredibly exciting approach for efficiently training computers to learn many features in data, as well as to generate realistic novel samples. Thanks to a number of their unique characteristics, some experts believe they may reinvent machine learning. In this thesis I explore the state of the GAN, focusing on the mechanisms by which they work, the fundamental challenges and strategies associated with training them, a selection of their various extensions, and what they may have to offer to the the greater machine learning community. I also consider the broader idea of building machine learning systems comprised of multiple neural networks, as opposed to using a single network. Using the state of the art progressive growing of GANs approach, I conducted experiments where I generated painting-like images that I believe to be the most authentic GAN-generated portrait paintings. I also generated highly realistic chest X-ray images, using a progressively grown GAN trained without labels on the NIH’s ChestX-ray14 dataset, which contains 112,000 chest X-ray images with 14 different disease diagnoses represented; it still remains to be seen whether the GAN-generated X-ray images contain clear identifying features of the various diseases. My generated results further demonstrate the relatively stable training of the progressive growing approach as well as the GAN’s compelling capacity for learning features in a variety of forms of image data.", "title": "" }, { "docid": "e8207f548a8daac1d8ae261796943f7f", "text": "OBJECTIVE\nAccurate endoscopic differentiation would enable to resect and discard small and diminutive colonic lesions, thereby increasing cost-efficiency. Current classification systems based on narrow band imaging (NBI), however, do not include neoplastic sessile serrated adenomas/polyps (SSA/Ps). We aimed to develop and validate a new classification system for endoscopic differentiation of adenomas, hyperplastic polyps and SSA/Ps <10 mm.\n\n\nDESIGN\nWe developed the Workgroup serrAted polypS and Polyposis (WASP) classification, combining the NBI International Colorectal Endoscopic classification and criteria for differentiation of SSA/Ps in a stepwise approach. Ten consultant gastroenterologists predicted polyp histology, including levels of confidence, based on the endoscopic aspect of 45 polyps, before and after participation in training in the WASP classification. 
After 6 months, the same endoscopists predicted polyp histology of a new set of 50 polyps, with a ratio of lesions comparable to daily practice.\n\n\nRESULTS\nThe accuracy of optical diagnosis was 0.63 (95% CI 0.54 to 0.71) at baseline, which improved to 0.79 (95% CI 0.72 to 0.86, p<0.001) after training. For polyps diagnosed with high confidence the accuracy was 0.73 (95% CI 0.64 to 0.82), which improved to 0.87 (95% CI 0.80 to 0.95, p<0.01). The accuracy of optical diagnosis after 6 months was 0.76 (95% CI 0.72 to 0.80), increasing to 0.84 (95% CI 0.81 to 0.88) considering high confidence diagnosis. The combined negative predictive value with high confidence of diminutive neoplastic lesions (adenomas and SSA/Ps together) was 0.91 (95% CI 0.83 to 0.96).\n\n\nCONCLUSIONS\nWe developed and validated the first integrative classification method for endoscopic differentiation of small and diminutive adenomas, hyperplastic polyps and SSA/Ps. In a still image evaluation setting, introduction of the WASP classification significantly improved the accuracy of optical diagnosis overall as well as SSA/P in particular, which proved to be sustainable after 6 months.", "title": "" }, { "docid": "5b3fd9394bc6dd84f48a23def50f8ace", "text": "This study presents the first behavioral genetic investigation of the relationships between trait emotional intelligence (trait EI or trait emotional self-efficacy) and the Dark Triad traits of narcissism, Machiavellianism, and psychopathy. In line with trait EI theory, the construct correlated positively with narcissism, but negatively with the other two traits. Generally, the correlations were consistent across the 4 factors and 15 facets of the construct. Cholesky decomposition analysis revealed that the phenotypic associations were primarily due to correlated genetic factors and secondarily due to correlated nonshared environmental factors, with shared environmental factors being nonsignificant in all cases. Results are discussed from the perspective of trait EI theory with particular reference to the issue of adaptive value.", "title": "" }, { "docid": "a2c5e8f11a4ac8ff2ec1554d0a67ce1e", "text": "Over the past few years, injection vulnerabilities have become the primary target for remote exploits. SQL injection, command injection, and cross-site scripting are some of the popular attacks that exploit these vulnerabilities. Taint-tracking has emerged as one of the most promising approaches for defending against these exploits, as it supports accurate detection (and prevention) of popular injection attacks. However, practical deployment of tainttracking defenses has been hampered by a number of factors, including: (a) high performance overheads (often over 100%), (b) the need for deep instrumentation, which has the potential to impact application robustness and stability, and (c) specificity to the language in which an application is written. In order to overcome these limitations, we present a new technique in this paper called taint inference. This technique does not require any source-code or binary instrumentation of the application to be protected; instead, it operates by intercepting requests and responses from this application. For most web applications, this interception may be achieved using network layer interposition or library interposition. We then develop a class of policies called syntaxand taint-aware policies that can accurately detect and/or block most injection attacks. 
An experimental evaluation shows that our techniques are effective in detecting a broad range of attacks on applications written in multiple languages (including PHP, Java and C), and impose low performance overheads (below 5%).", "title": "" }, { "docid": "5e1d615dde71c4ca09578152e39e6741", "text": "Cognitive radio is a promising technology aiming to solve the spectrum scarcity problem by allocating the spectrum dynamically to unlicensed users. It uses the free spectrum bands which are not being used by the licensed users without causing interference to the incumbent transmission. So, spectrum sensing is the essential mechanism on which the entire communication depends. If the spectrum sensing result is violated, the entire network's activities will be disrupted. Primary User Emulation Attack (PUEA) is one of the major threats to spectrum sensing, which decreases the spectrum access probability. In this paper, our objectives are to present the various security issues in cognitive radio networks and then to discuss the PUEA and the existing techniques to mitigate it. Keywords-cognitive radio; spectrum sensing; PUEA", "title": "" }, { "docid": "b19f473f77b20dcb566fded46100a71b", "text": "A large amount of information is available online on the web. Discussion forums, review sites, and blogs are some of the opinion-rich resources where reviews or posted articles express sentiment, or an overall opinion, towards the subject matter. The opinions obtained from these can be classified into positive or negative, which can be used by customers to make product choices and by businesses to gauge customer satisfaction. This paper studies online movie reviews using sentiment analysis approaches. In this study, sentiment classification techniques were applied to movie reviews. Specifically, we compared two supervised machine learning approaches, SVM and Naive Bayes, for sentiment classification of reviews. Results state that the Naive Bayes approach outperformed the SVM. If the training dataset had a large number of reviews, the Naive Bayes approach reached higher accuracies compared to the SVM.", "title": "" }, { "docid": "0fc3344d9ad054fc138447204a423255", "text": "Stereo matching is a challenging problem with respect to weak texture, discontinuities, illumination difference and occlusions. Therefore, a deep learning framework is presented in this paper, which focuses on the first and last stages of typical stereo methods: the matching cost computation and the disparity refinement. For matching cost computation, two patch-based network architectures are exploited to allow the trade-off between speed and accuracy, both of which leverage multi-size and multi-layer pooling units with no strides to learn cross-scale feature representations. For disparity refinement, unlike traditional handcrafted refinement algorithms, we incorporate the initial optimal and sub-optimal disparity maps before outlier detection. Furthermore, diverse base learners are encouraged to focus on specific replacement tasks, corresponding to the smooth regions and details. Experiments on different datasets demonstrate the effectiveness of our approach, which is able to obtain sub-pixel accuracy and restore occlusions to a great extent. 
Specifically, our accurate framework attains near-peak accuracy both in non-occluded and occluded region and our fast framework achieves competitive performance against the fast algorithms on Middlebury benchmark.", "title": "" }, { "docid": "4a9a53444a74f7125faa99d58a5b0321", "text": "The new transformed read-write Web has resulted in a rapid growth of user generated content on the Web resulting into a huge volume of unstructured data. A substantial part of this data is unstructured text such as reviews and blogs. Opinion mining and sentiment analysis (OMSA) as a research discipline has emerged during last 15 years and provides a methodology to computationally process the unstructured data mainly to extract opinions and identify their sentiments. The relatively new but fast growing research discipline has changed a lot during these years. This paper presents a scientometric analysis of research work done on OMSA during 20 0 0–2016. For the scientometric mapping, research publications indexed in Web of Science (WoS) database are used as input data. The publication data is analyzed computationally to identify year-wise publication pattern, rate of growth of publications, types of authorship of papers on OMSA, collaboration patterns in publications on OMSA, most productive countries, institutions, journals and authors, citation patterns and an year-wise citation reference network, and theme density plots and keyword bursts in OMSA publications during the period. A somewhat detailed manual analysis of the data is also performed to identify popular approaches (machine learning and lexicon-based) used in these publications, levels (document, sentence or aspect-level) of sentiment analysis work done and major application areas of OMSA. The paper presents a detailed analytical mapping of OMSA research work and charts the progress of discipline on various useful parameters. © 2016 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "a7bbf188c7219ff48af391a5f8b140b8", "text": "The paper presents the results of studies concerning the designation of COD fraction in raw wastewater. The research was conducted in three mechanical-biological sewage treatment plants. The results were compared with data assumed in the ASM models. During the investigation, the following fractions of COD were determined: dissolved non-biodegradable SI, dissolved easily biodegradable SS, in organic suspension slowly degradable XS, and in organic suspension non-biodegradable XI. The methodology for determining the COD fraction was based on the ATVA 131guidelines. The real concentration of fractions in raw wastewater and the percentage of each fraction in total COD are different from data reported in the literature.", "title": "" }, { "docid": "87ed7ebdf8528df1491936000649761b", "text": "Internet of Vehicles (IoV) is an important constituent of next generation smart cities that enables city wide connectivity of vehicles for traffic management applications. A secure and reliable communications is an important ingredient of safety applications in IoV. While the use of a more robust security algorithm makes communications for safety applications secure, it could reduce application QoS due to increased packet overhead and security processing delays. Particularly, in high density scenarios where vehicles receive large number of safety packets from neighborhood, timely signature verification of these packets could not be guaranteed. As a result, critical safety packets remain unverified resulting in cryptographic loss. 
In this paper, we propose two security mechanisms that aim to reduce the cryptographic loss rate. The first mechanism is random transmitter security level selection, whereas the second one is an adaptive scheme that iteratively selects the best possible security level at the transmitter depending on the current cryptographic loss rate. Simulation results show the effectiveness of the proposed mechanisms in comparison with the static security technique recommended by the ETSI standard.", "title": "" }, { "docid": "8a114142634c00d593af6644a3f396f6", "text": "Parent involvement has a sound research base attesting to the many potential benefits it can offer in education. However, student motivation as an academic outcome of parental involvement has only recently been investigated. The purpose of this article is to show how parent involvement is related to students’ motivation. Studies of students from the elementary school to high school show a beneficial relationship between parental involvement and the following motivational constructs: school engagement, intrinsic/extrinsic motivation, perceived competence, perceived control, self-regulation, mastery goal orientation, and motivation to read. From the synthesis of the parent involvement and motivation literature, we offer potential explanations for their relationship. Directions for areas of continued research are also presented.", "title": "" }, { "docid": "4211e323e2efac1a08d8caae607f737d", "text": "Mean reversion is a feature largely recognized for the price processes of many financial securities and especially commodities. In the literature there are examples where some simple speculative strategies, before transaction costs, were devised to earn excess returns from such price processes. Actually, the gain opportunities of mean reversion must be corrected to account for transaction costs, which may represent a major issue. In this work we try to determine sufficient conditions for the parameters of a mean reverting price process as a function of transaction costs, to allow a speculative trader to have positive expectations when deciding to take a position. We estimate the mean reverting parameters for some commodities and correct them for transaction costs to assess whether the potential inefficiency is actually relevant for speculative purposes. 2004 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "dca65464cc8a3bb59f2544ef9a09e388", "text": "Some authors clearly showed that faking reduces the construct validity of personality questionnaires, whilst many others found no such effect. A possible explanation for mixed results could be searched for in a variety of methodological strategies in forming comparison groups supposed to differ in the level of faking: candidates vs. non-candidates; groups of individuals with \"high\" vs. \"low\" social desirability score; and groups given instructions to respond honestly vs. instructions to \"fake good\". All three strategies may be criticized for addressing the faking problem indirectly – assuming that comparison groups really differ in the level of response distortion, which might not be true. Therefore, in a within-subject design study we examined how faking affects the construct validity of personality inventories using a direct measure of faking. 
The results suggest that faking reduces the construct validity of personality questionnaires gradually – the effect was stronger in the subsample of participants who distorted their responses to a greater extent.", "title": "" }, { "docid": "cc6c485fdd8d4d61c7b68bfd94639047", "text": "Passive geolocaton of communication emitters provides great benefits to military and civilian surveillance and security operations. Time Difference of Arrival (TDOA) and Frequency Difference of Arrival (FDOA) measurement combination for stationary emitters may be obtained by sensors mounted on mobile platforms, for example on a pair of UAVs. Complex Ambiguity Function (CAF) of received complex signals can be efficiently calculated to provide required TDOA / FDOA measurement combination. TDOA and FDOA measurements are nonlinear in the sense that the emitter uncertainty given measurements in the Cartesian domain is non-Gaussian. Multiple non-linear measurements of emitter location need to be fused to provide the geolocation estimates. Gaussian Mixture Measurement (GMM) filter fuses nonlinear measurements as long as the uncertainty of each measurement in the surveillance (Cartesian) space is modeled by a Gaussian Mixture. Simulation results confirm this approach and compare it with geolocation using Bearings Only (BO) measurements.", "title": "" }, { "docid": "e85a019405a29e19670c99f9eabfff78", "text": "Online shopping, different from traditional shopping behavior, is characterized with uncertainty, anonymity, and lack of control and potential opportunism. Therefore, trust is an important factor to facilitate online transactions. The purpose of this study is to explore the role of trust in consumer online purchase behavior. This study undertook a comprehensive survey of online customers having e-shopping experiences in Taiwan and we received 1258 valid questionnaires. The empirical results, using structural equation modeling, indicated that perceived ease of use and perceived usefulness affect have a significant impact on trust in e-commerce. Trust also has a significant influence on attitude towards online purchase. However, there is no significant impact from trust on the intention of online purchase.", "title": "" }, { "docid": "c9c440243d8a247f2daa9d0dbe3f478b", "text": "Orthogonal Frequency Division Multiplexing (OFDM) is a multi-carrier system where data bits are encoded to multiple subcarriers, while being sent simultaneously. This results in the optimal usage of bandwidth. A set of orthogonal sub-carriers together forms an OFDM symbol. To avoid ISI due to multi-path, successive OFDM symbols are separated by guard band. This makes the OFDM system resistant to multipath effects. The principles of OFDM modulation have been around since 1960s. However, recently, the attention toward OFDM has grown dramatically in the field of wireless and wired communication systems. This is reflected by the adoption of this technique in applications such as digital audio/video broadcast (DAB/DVB), wireless LAN (802.11a and HiperLAN2), broadband wireless (802.16) and xDSL. In this work, a pure VHDL design, integrated with some intellectual property (IP) blocks, is employed to implement an OFDM transmitter and receiver. In this paper design of OFDM system using IFFT and FFT blocks has been introduced and simulation was done on XILINX ISE 14.2 software. 
Keywords– FFT, IFFT, OFDM, QAM, VHDL.", "title": "" }, { "docid": "ba89a62ac2d1b36738e521d4c5664de2", "text": "Currently, the network traffic control systems are mainly composed of the Internet core and wired/wireless heterogeneous backbone networks. Recently, these packet-switched systems are experiencing an explosive network traffic growth due to the rapid development of communication technologies. The existing network policies are not sophisticated enough to cope with the continually varying network conditions arising from the tremendous traffic growth. Deep learning, with the recent breakthrough in the machine learning/intelligence area, appears to be a viable approach for the network operators to configure and manage their networks in a more intelligent and autonomous fashion. While deep learning has received a significant research attention in a number of other domains such as computer vision, speech recognition, robotics, and so forth, its applications in network traffic control systems are relatively recent and garnered rather little attention. In this paper, we address this point and indicate the necessity of surveying the scattered works on deep learning applications for various network traffic control aspects. In this vein, we provide an overview of the state-of-the-art deep learning architectures and algorithms relevant to the network traffic control systems. Also, we discuss the deep learning enablers for network systems. In addition, we discuss, in detail, a new use case, i.e., deep learning based intelligent routing. We demonstrate the effectiveness of the deep learning-based routing approach in contrast with the conventional routing strategy. Furthermore, we discuss a number of open research issues, which researchers may find useful in the future.", "title": "" }, { "docid": "c5ca8f5d78b001f05b214566f5586193", "text": "As architecture, systems, and data management communities pay greater attention to innovative big data systems and architecture, the pressure of benchmarking and evaluating these systems rises. However, the complexity, diversity, frequently changed workloads, and rapid evolution of big data systems raise great challenges in big data benchmarking. Considering the broad use of big data systems, for the sake of fairness, big data benchmarks must include diversity of data and workloads, which is the prerequisite for evaluating big data systems and architecture. Most of the state-of-the-art big data benchmarking efforts target evaluating specific types of applications or system software stacks, and hence they are not qualified for serving the purposes mentioned above. This paper presents our joint research efforts on this issue with several industrial partners. Our big data benchmark suite-BigDataBench not only covers broad application scenarios, but also includes diverse and representative data sets. Currently, we choose 19 big data benchmarks from dimensions of application scenarios, operations/ algorithms, data types, data sources, software stacks, and application types, and they are comprehensive for fairly measuring and evaluating big data systems and architecture. BigDataBench is publicly available from the project home page http://prof.ict.ac.cn/BigDataBench. Also, we comprehensively characterize 19 big data workloads included in BigDataBench with varying data inputs. 
On a typical state-of-practice processor, Intel Xeon E5645, we have the following observations: First, in comparison with the traditional benchmarks: including PARSEC, HPCC, and SPECCPU, big data applications have very low operation intensity, which measures the ratio of the total number of instructions divided by the total byte number of memory accesses; Second, the volume of data input has non-negligible impact on micro-architecture characteristics, which may impose challenges for simulation-based big data architecture research; Last but not least, corroborating the observations in CloudSuite and DCBench (which use smaller data inputs), we find that the numbers of L1 instruction cache (L1I) misses per 1000 instructions (in short, MPKI) of the big data applications are higher than in the traditional benchmarks; also, we find that L3 caches are effective for the big data applications, corroborating the observation in DCBench.", "title": "" }, { "docid": "fd11fbed7a129e3853e73040cbabb56c", "text": "A digitally modulated power amplifier (DPA) in 1.2 V 0.13 mum SOI CMOS is presented, to be used as a building block in multi-standard, multi-band polar transmitters. It performs direct amplitude modulation of an input RF carrier by digitally controlling an array of 127 unary-weighted and three binary-weighted elementary gain cells. The DPA is based on a novel two-stage topology, which allows seamless operation from 800 MHz through 2 GHz, with a full-power efficiency larger than 40% and a 25.2 dBm maximum envelope power. Adaptive digital predistortion is exploited for DPA linearization. The circuit is thus able to reconstruct 21.7 dBm WCDMA/EDGE signals at 1.9 GHz with 38% efficiency and a higher than 10 dB margin on all spectral specifications. As a result of the digital modulation technique, a higher than 20.1 % efficiency is guaranteed for WCDMA signals with a peak-to-average power ratio as high as 10.8 dB. Furthermore, a 15.3 dBm, 5 MHz WiMAX OFDM signal is successfully reconstructed with a 22% efficiency and 1.53% rms EVM. A high 10-bit nominal resolution enables a wide-range TX power control strategy to be implemented, which greatly minimizes the quiescent consumption down to 10 mW. A 16.4% CDMA average efficiency is thus obtained across a > 70 dB power control range, while complying with all the spectral specifications.", "title": "" } ]
scidocsrr
dab3de6c37e87c9cfec99892cc072778
Dependency-based Discourse Parser for Single-Document Summarization
[ { "docid": "7f6a45292aeca83bebb9556c938e0782", "text": "Many methods of text summarization combining sentence selection and sentence compression have recently been proposed. Although the dependency between words has been used in most of these methods, the dependency between sentences, i.e., rhetorical structures, has not been exploited in such joint methods. We used both dependency between words and dependency between sentences by constructing a nested tree, in which nodes in the document tree representing dependency between sentences were replaced by a sentence tree representing dependency between words. We formulated a summarization task as a combinatorial optimization problem, in which the nested tree was trimmed without losing important content in the source document. The results from an empirical evaluation revealed that our method based on the trimming of the nested tree significantly improved the summarization of texts.", "title": "" }, { "docid": "7f415c10d8c57a9c3d202f7a36b8071a", "text": "Previous researches on Text-level discourse parsing mainly made use of constituency structure to parse the whole document into one discourse tree. In this paper, we present the limitations of constituency based discourse parsing and first propose to use dependency structure to directly represent the relations between elementary discourse units (EDUs). The state-of-the-art dependency parsing techniques, the Eisner algorithm and maximum spanning tree (MST) algorithm, are adopted to parse an optimal discourse dependency tree based on the arcfactored model and the large-margin learning techniques. Experiments show that our discourse dependency parsers achieve a competitive performance on text-level discourse parsing.", "title": "" }, { "docid": "c3b691cd3671011278ecd30563b27245", "text": "We formalize weighted dependency parsing as searching for maximum spanning trees (MSTs) in directed graphs. Using this representation, the parsing algorithm of Eisner (1996) is sufficient for searching over all projective trees in O(n3) time. More surprisingly, the representation is extended naturally to non-projective parsing using Chu-Liu-Edmonds (Chu and Liu, 1965; Edmonds, 1967) MST algorithm, yielding anO(n2) parsing algorithm. We evaluate these methods on the Prague Dependency Treebank using online large-margin learning techniques (Crammer et al., 2003; McDonald et al., 2005) and show that MST parsing increases efficiency and accuracy for languages with non-projective dependencies.", "title": "" } ]
[ { "docid": "94f39416ba9918e664fb1cd48732e3ae", "text": "In this paper, a nanostructured biosensor is developed to detect glucose in tear by using fluorescence resonance energy transfer (FRET) quenching mechanism. The designed FRET pair, including the donor, CdSe/ZnS quantum dots (QDs), and the acceptor, dextran-binding malachite green (MG-dextran), was conjugated to concanavalin A (Con A), an enzyme with specific affinity to glucose. In the presence of glucose, the quenched emission of QDs through the FRET mechanism is restored by displacing the dextran from Con A. To have a dual-modulation sensor for convenient and accurate detection, the nanostructured FRET sensors were assembled onto a patterned ZnO nanorod array deposited on the synthetic silicone hydrogel. Consequently, the concentration of glucose detected by the patterned sensor can be converted to fluorescence spectra with high signal-to-noise ratio and calibrated image pixel value. The photoluminescence intensity of the patterned FRET sensor increases linearly with increasing concentration of glucose from 0.03mmol/L to 3mmol/L, which covers the range of tear glucose levels for both diabetics and healthy subjects. Meanwhile, the calibrated values of pixel intensities of the fluorescence images captured by a handhold fluorescence microscope increases with increasing glucose. Four male Sprague-Dawley rats with different blood glucose concentrations were utilized to demonstrate the quick response of the patterned FRET sensor to 2µL of tear samples.", "title": "" }, { "docid": "d3eeb9e96881dc3bd60433bdf3e89749", "text": "The first € price and the £ and $ price are net prices, subject to local VAT. Prices indicated with * include VAT for books; the €(D) includes 7% for Germany, the €(A) includes 10% for Austria. Prices indicated with ** include VAT for electronic products; 19% for Germany, 20% for Austria. All prices exclusive of carriage charges. Prices and other details are subject to change without notice. All errors and omissions excepted. M. Bushnell, V.D. Agrawal Essentials of Electronic Testing for Digital, Memory and MixedSignal VLSI Circuits", "title": "" }, { "docid": "a2346fbd1aa496bb2924bdeec7282be5", "text": "During a survey carried out in August 2013 along all coastal areas of north-eastern Tunisia (governorships of Bizerte, Ariana, Tunis, Ben Arous, Nabeul, Sousse), eucalyptus trees were found to be highly infested by the invasive pest Glycaspis brimblecombei Moore, 1964, also known as red gum lerp psyllid. This insect, native to the Australian region and secondarily dispersed also in the Americas, Mauritius, Madagascar and South Africa, very recently started to invade the Mediterranean region and in almost 5 years has spread to the Iberian Peninsula, Italy, Greece and Morocco. Its presence in Tunisia (which is recorded here for the first time) most probably dates back to summer 2012, since typical necrotic spots caused by the lerp of the psyllid had already been noted on leaves during spring 2013. 
No presence of its main parasitoid – Psyllaephagus bliteus Riek – nor of any other natural enemy, was noted up to now during our survey in Tunisia.", "title": "" }, { "docid": "0545516ad4d986b15f8dc179f1b7b3c0", "text": "Reinforced concrete detection system design is based on the principle of electromagnetic detection to STM32 chip as the core, the design of the various components of the system modules, built a sophisticated detection system, in STM32 chip as the core, it design the various components of the system modules, and built a sophisticated detection system. Through mutual cooperation between the various modules, the system can achieve accurate measurements reinforced the position and thickness of the protective layer. The software part of the system's design, the system can be detected timely data processing and storage.", "title": "" }, { "docid": "0687e28b42ca1acff99dc4917b920127", "text": "Advanced Synchronization Facility (ASF) is an AMD64 hardware extension for lock-free data structures and transactional memory. It provides a speculative region that atomically executes speculative accesses in the region. Five new instructions are added to demarcate the region, use speculative accesses selectively, and control the speculative hardware context. Programmers can use speculative regions to build flexible multi-word atomic primitives with no additional software support by relying on the minimum guarantee of available ASF hardware resources for lock-free programming. Transactional programs with high-level TM language constructs can either be compiled directly to the ASF code or be linked to software TM systems that use ASF to accelerate transactional execution. In this paper we develop an out-of-order hardware design to implement ASF on a future AMD processor and evaluate it with an in-house simulator. The experimental results show that the combined use of the L1 cache and the LS unit is very helpful for the performance robustness of ASF-based lock free data structures, and that the selective use of speculative accesses enables transactional programs to scale with limited ASF hardware resources.", "title": "" }, { "docid": "dc2752eeee3e4ccf0bf6d912ef72b5a8", "text": "It takes more than water to restore a wetland. Now, scientists are documenting how landscape setting, habitat type, hydrological regime, soil properties, topography, nutrient supplies, disturbance regimes, invasive species, seed banks and declining biodiversity can constrain the restoration process. Although many outcomes can be explained post hoc, we have little ability to predict the path that sites will follow when restored in alternative ways, and no insurance that specific targets will be met. To become predictive, bolder approaches are now being developed, which rely more on field experimentation at multiple spatial and temporal scales, and in many restoration contexts.", "title": "" }, { "docid": "59a32ec5b88436eca75d8fa9aa75951b", "text": "A visual-relational knowledge graph (KG) is a multi-relational graph whose entities are associated with images. We introduce ImageGraph, a KG with 1,330 relation types, 14,870 entities, and 829,931 images. Visual-relational KGs lead to novel probabilistic query types where images are treated as first-class citizens. Both the prediction of relations between unseen images and multi-relational image retrieval can be formulated as query types in a visual-relational KG. 
We approach the problem of answering such queries with a novel combination of deep convolutional networks and models for learning knowledge graph embeddings. The resulting models can answer queries such as “How are these two unseen images related to each other?” We also explore a zero-shot learning scenario where an image of an entirely new entity is linked with multiple relations to entities of an existing KG. The multi-relational grounding of unseen entity images into a knowledge graph serves as the description of such an entity. We conduct experiments to demonstrate that the proposed deep architectures in combination with KG embedding objectives can answer the visual-relational queries efficiently and accurately.", "title": "" }, { "docid": "7c0b45642b8f296c62fe60d6b734d205", "text": "In this paper we present an algorithm to determine the location of contact points to obtain force closure grasps on three-dimensional objects. The shape of the object is assumed to be given by a triangle mesh - a format widely used in CAD software. Our algorithm can handle an arbitrary number of contact points and does not require any prior information about their initial locations. Through an iterative process, contact point locations are updated aiming at improving a commonly used grasp quality metric. The process is global in the sense that during the process the whole surface of the object can be explored, and contact point locations can cross sharp edges that usually represent a problem for optimization algorithms relying on smooth surface representations. Extensive simulation results illustrate the performance of the proposed method, outlining strengths and directions for further research.", "title": "" }, { "docid": "b2199b7be543f0f287e0cbdb7a477843", "text": "We developed a pneumatically powered orthosis for the human ankle joint. The orthosis consisted of a carbon fiber shell, hinge joint, and two artificial pneumatic muscles. One artificial pneumatic muscle provided plantar flexion torque and the second one provided dorsiflexion torque. Computer software adjusted air pressure in each artificial muscle independently so that artificial muscle force was proportional to rectified low-pass-filtered electromyography (EMG) amplitude (i.e., proportional myoelectric control). Tibialis anterior EMG activated the artificial dorsiflexor and soleus EMG activated the artificial plantar flexor. We collected joint kinematic and artificial muscle force data as one healthy participant walked on a treadmill with the orthosis. Peak plantar flexor torque provided by the orthosis was 70 Nm, and peak dorsiflexor torque provided by the orthosis was 38 Nm. The orthosis could be useful for basic science studies on human locomotion or possibly for gait rehabilitation after neurological injury.", "title": "" }, { "docid": "2f6ed4c2988391cc4ad95fe742994a1d", "text": "The negative effect of increasing atmospheric nitrogen (N) pollution on grassland biodiversity is now incontrovertible. However, the recent introduction of cleaner technologies in the UK has led to reductions in the emissions of nitrogen oxides, with concomitant decreases in N deposition. The degree to which grassland biodiversity can be expected to ‘bounce back’ in response to these improvements in air quality is uncertain, with a suggestion that long-term chronic N addition may lead to an alternative low biodiversity state. 
Here we present evidence from the 160-year-old Park Grass Experiment at Rothamsted Research, UK, that shows a positive response of biodiversity to reducing N addition from either atmospheric pollution or fertilizers. The proportion of legumes, species richness and diversity increased across the experiment between 1991 and 2012 as both wet and dry N deposition declined. Plots that stopped receiving inorganic N fertilizer in 1989 recovered much of the diversity that had been lost, especially if limed. There was no evidence that chronic N addition has resulted in an alternative low biodiversity state on the Park Grass plots, except where there has been extreme acidification, although it is likely that the recovery of plant communities has been facilitated by the twice-yearly mowing and removal of biomass. This may also explain why a comparable response of plant communities to reduced N inputs has yet to be observed in the wider landscape.", "title": "" }, { "docid": "21edf22bbe51ce6a6d429fee59985fc5", "text": "This paper details filtering subsystem for a tetra-vision based pedestrian detection system. The complete system is based on the use of both visible and far infrared cameras; in an initial phase it produces a list of areas of attention in the images which can contain pedestrians. This list is furtherly refined using symmetry-based assumptions. Then, this results is fed to a number of independent validators that evaluate the presence of human shapes inside the areas of attention. Histogram of oriented gradients and Support Vector Machines are used as a filter and demonstrated to be able to successfully classify up to 91% of pedestrians in the areas of attention.", "title": "" }, { "docid": "821eb8e80ce24e001aa6389a805a817b", "text": "Temperature-dependent dielectric properties (dielectric constant and dielectric loss factor) and thermal properties (thermal conductivity and specific heat capacity) of whey protein gel and mashed potato were measured from -20°C to 100°C. A dielectric properties measurement system and a multipoint temperature calibration protocol were developed. The system consists of an impedance analyzer, a high-temperature coaxial cable, a high-temperature coaxial probe, a micro-climatic chamber, and a metal sample holder. Calibrations at two temperatures (25°C and 85°C) were sufficient to accurately measure the dielectric properties of foods from frozen to hot temperatures. Dielectric constant and dielectric loss factor both rapidly increased from -20°C to 0°C. Thereafter, dielectric constant linearly decreased from 0°C to 100°C, while dielectric loss factor decreased first and then linearly increased. The thermal conductivity values of whey protein gel and mashed potato decreased with increasing temperature in the frozen range and did not change considerably after thawing. The latent heat of fusion values of whey protein gel and mashed potato were 219.1 and 186.8 kJ·kg, respectively. The temperature-dependent material properties can be used in microwave heat transfer models for improving heating performance of foods in domestic microwave ovens.", "title": "" }, { "docid": "872d589cd879dee7d88185851b9546ab", "text": "Considering few treatments are available to slow or stop neurodegenerative disorders, such as Alzheimer’s disease and related dementias (ADRD), modifying lifestyle factors to prevent disease onset are recommended. 
The Voice, Activity, and Location Monitoring system for Alzheimer’s disease (VALMA) is a novel ambulatory sensor system designed to capture natural behaviours across multiple domains to profile lifestyle risk factors related to ADRD. Objective measures of physical activity and sleep are provided by lower limb accelerometry. Audio and GPS location records provide verbal and mobility activity, respectively. Based on a familiar smartphone package, data collection with the system has proven to be feasible in community-dwelling older adults. Objective assessments of everyday activity will impact diagnosis of disease and design of exercise, sleep, and social interventions to prevent and/or slow disease progression.", "title": "" }, { "docid": "f4c3cd5706957ea3a27a6fd8285ae422", "text": "With the growth of mobile devices and applications, the number of malicious software, or malware, is rapidly increasing in recent years, which calls for the development of advanced and effective malware detection approaches. Traditional methods such as signature based ones cannot defend users from an increasing number of new types of malware or rapid malware behavior changes. In this paper, we propose a new Android malware detection approach based on deep learning and static analysis. Instead of using Application Programming Interfaces (APIs) only, we further analyze the source code of Android applications and create their higher-level graphical semantics, which makes it harder for attackers to evade detection. In particular, we use a call graph from method invocations in an Android application to represent the application, and further analyze method attributes to form a structured Program Representation Graph (PRG) with node attributes. Then, we use a graph convolutional network (GCN) to yield a graph representation of the application by embedding the entire graph into a dense vector, and classify whether it is a malware or not. To efficiently train such a graph convolutional network, we propose a batch training scheme that allows multiple heterogeneous graphs to be input as a batch. To the best of our knowledge, this is the first work to use graph representation learning for malware detection. We conduct extensive experiments from real-world sample collections and demonstrate that our developed system outperforms multiple other existing malware detection techniques.", "title": "" }, { "docid": "0b56a411692b4c0c051ef318d996511f", "text": "The pathophysiology of perinatal brain injury is multifactorial and involves hypoxia-ischemia (HI) and inflammation. N-methyl-d-aspartate receptors (NMDAR) are present on neurons and glia in immature rodents, and NMDAR antagonists are protective in HI models. To enhance clinical translation of rodent data, we examined protein expression of 6 NMDAR subunits in postmortem human brains without injury from 20 postconceptional weeks through adulthood and in cases of periventricular leukomalacia (PVL). We hypothesized that the developing brain is intrinsically vulnerable to excitotoxicity via maturation-specific NMDAR levels and subunit composition. In normal white matter, NR1 and NR2B levels were highest in the preterm period compared with adult. In gray matter, NR2A and NR3A expression were highest near term. NR2A was significantly elevated in PVL white matter, with reduced NR1 and NR3A in gray matter compared with uninjured controls. 
These data suggest increased NMDAR-mediated vulnerability during early brain development due to an overall upregulation of individual receptors subunits, in particular, the presence of highly calcium permeable NR2B-containing and magnesium-insensitive NR3A NMDARs. These data improve understanding of molecular diversity and heterogeneity of NMDAR subunit expression in human brain development and supports an intrinsic prenatal vulnerability to glutamate-mediated injury; validating NMDAR subunit-specific targeted therapies for PVL.", "title": "" }, { "docid": "0a0038a5c68f0d93287dcece9581e570", "text": "We use Multi-layer Perceptron and propose a hybrid model of fundamental and technical analysis by utilizing stock prices (from 2012–06 to 2017–12) and financial ratios of Technology companies listed on Nasdaq. Our model uses data discretization and feature selection preprocesses. The best results are obtained through topology optimizations using a three-hidden layer MLP. We examine the predictability of our hybrid model through a training/test split and cross-validation. It is found that the hybrid model successfully predicts the future stock movements. Our model results in the greatest average directional accuracy (65.87%) compared to the results obtained from the fundamental and technical analysis in isolation. The numerical results provide enough evidence to conclude that the market is not perfectly efficient.", "title": "" }, { "docid": "0c162c4f83294c4f701eabbd69f171f7", "text": "This paper aims to explore how the principles of a well-known Web 2.0 service, the world¿s largest social music service \"Last.fm\" (www.last.fm), can be applied to research, which potential it could have in the world of research (e.g. an open and interdisciplinary database, usage-based reputation metrics, and collaborative filtering) and which challenges such a model would face in academia. A real-world application of these principles, \"Mendeley\" (www.mendeley.com), will be demoed at the IEEE e-Science Conference 2008.", "title": "" }, { "docid": "3b223e17f557e1e1416bea70fe0d7e9b", "text": "Comments left by readers on Web documents contain valuable information that can be utilized in different information retrieval tasks including document search, visualization, and summarization. In this paper, we study the problem of comments-oriented document summarization and aim to summarize a Web document (e.g., a blog post) by considering not only its content, but also the comments left by its readers. We identify three relations (namely, topic, quotation, and mention) by which comments can be linked to one another, and model the relations in three graphs. The importance of each comment is then scored by: (i) graph-based method, where the three graphs are merged into a multi-relation graph; (ii) tensor-based method, where the three graphs are used to construct a 3rd-order tensor. To generate a comments-oriented summary, we extract sentences from the given Web document using either feature-biased approach or uniform-document approach. The former scores sentences to bias keywords derived from comments; while the latter scores sentences uniformly with comments. In our experiments using a set of blog posts with manually labeled sentences, our proposed summarization methods utilizing comments showed significant improvement over those not using comments. 
The methods using feature-biased sentence extraction approach were observed to outperform that using uniform-document approach.", "title": "" }, { "docid": "cab34efb913c222c12ea1aaf07dcd246", "text": "Engineered biological systems have been used to manipulate information, construct materials, process chemicals, produce energy, provide food, and help maintain or enhance human health and our environment. Unfortunately, our ability to quickly and reliably engineer biological systems that behave as expected remains quite limited. Foundational technologies that make routine the engineering of biology are needed. Vibrant, open research communities and strategic leadership are necessary to ensure that the development and application of biological technologies remains overwhelmingly constructive.", "title": "" }, { "docid": "4b38634071544a186ca1092801d89aa6", "text": "Pain is underdetected and undertreated in people with dementia. We aimed to investigate the prevalence of pain in people with dementia admitted to general hospitals and explore the association between pain and behavioural and psychiatric symptoms of dementia (BPSD). We conducted a longitudinal cohort study of 230 people, aged above 70, with dementia and unplanned medical admissions to 2 UK hospitals. Participants were assessed at baseline and every 4 days for self-reported pain (yes/no question and FACES scale) and observed pain (Pain Assessment in Advanced Dementia scale [PAINAD]) at movement and at rest, for agitation (Cohen-Mansfield Agitating Inventory [CMAI]) and BPSD (Behavioural Pathology in Alzheimer Disease Scale [BEHAVE-AD]). On admission, 27% of participants self-reported pain rising to 39% on at least 1 occasion during admission. Half of them were able to complete the FACES scale, this proportion decreasing with more severe dementia. Using the PAINAD, 19% had pain at rest and 57% had pain on movement on at least 1 occasion (in 16%, this was persistent throughout the admission). In controlled analyses, pain was not associated with CMAI scores but was strongly associated with total BEHAVE-AD scores, both when pain was assessed on movement (β = 0.20, 95% confidence interval [CI] = 0.07-0.32, P = 0.002) and at rest (β = 0.41, 95% CI = 0.14-0.69, P = 0.003). The association was the strongest for aggression and anxiety. Pain was common in people with dementia admitted to the acute hospital and associated with BPSD. Improved pain management may reduce distressing behaviours and improve the quality of hospital care for people with dementia.", "title": "" } ]
scidocsrr
cdc786df106729e30ef68e49af8e2b6c
The Goal Construct in Social Psychology
[ { "docid": "511db40bbc4d24ca8d09b5343aa8d91e", "text": "Increased risk taking may explain the link between bad moods and self-defeating behavior. In Study 1, personal recollections of self-defeating actions implicated bad moods and resultant risky decisions. In Study 2, embarrassment increased the preference for a long-shot (high-risk, high-payoff) lottery over a low-risk, low-payoff one. Anger had a similar effect in Study 3. Study 4 replicated this and showed that the effect could be eliminated by making participants analyze the lotteries rationally, suggesting that bad moods foster risk taking by impairing self-regulation instead of by altering subjective utilities. Studies 5 and 6 showed that the risky tendencies are limited to unpleasant moods accompanied by high arousal; neither sadness nor neutral arousal resulted in destructive risk taking.", "title": "" }, { "docid": "58eebe0e55f038fea268b6a7a6960939", "text": "The classic answer to what makes a decision good concerns outcomes. A good decision has high outcome benefits (it is worthwhile) and low outcome costs (it is worth it). I propose that, independent of outcomes or value from worth, people experience a regulatory fit when they use goal pursuit means that fit their regulatory orientation, and this regulatory fit increases the value of what they are doing. The following postulates of this value from fit proposal are examined: (a) People will be more inclined toward goal means that have higher regulatory fit, (b) people's motivation during goal pursuit will be stronger when regulatory fit is higher, (c) people's (prospective) feelings about a choice they might make will be more positive for a desirable choice and more negative for an undesirable choice when regulatory fit is higher, (d) people's (retrospective) evaluations of past decisions or goal pursuits will be more positive when regulatory fit was higher, and (e) people will assign higher value to an object that was chosen with higher regulatory fit. Studies testing each of these postulates support the value-from-fit proposal. How value from fit can enhance or diminish the value of goal pursuits and the quality of life itself is discussed.", "title": "" } ]
[ { "docid": "062f58c5edcebee25ba4e389944dba93", "text": "To increase the probability of destroying a maneuvering target (e.g. ballistic missile), a framework of multi-missiles interception is presented in this paper. Each intercepting missile is equipped with an IR Image seeker which can provide excellent stealth ability during its course of tracking the ballistic missile. Such intelligent ranging system integrates the Interacting Multiple Model (IMM) technique and the concept of reachable set to find the optimal interception results by minimizing the energy of pursuing the maneuvering target. The proposed guidance law of every missile interceptor is designed based on pursuit and evasion game theory while considering the motion of the target in 3-D space such that the distance between the missiles and the target is minimized. Finally, extensive computer simulations have been conducted to validate the performance of the proposed system.", "title": "" }, { "docid": "9b1e1e91b8aacd1ed5d1aee823de7fd3", "text": "—This paper presents a novel adaptive algorithm to detect the center of pupil in frontal view faces. This algorithm, at first, employs the viola-Jones face detector to find the approximate location of face in an image. The knowledge of the face structure is exploited to detect the eye region. The histogram of the detected region is calculated and its CDF is employed to extract the eyelids and iris region in an adaptive way. The center of this region is considered as the pupil center. The experimental results show ninety one percent's accuracy in detecting pupil center.", "title": "" }, { "docid": "101d36f875c1bdee99f14208fe016a5f", "text": "We are investigating automatic generation of a review (or survey) article in a specific subject domain. In a research paper, there are passages where the author describes the essence of a cited paper and the differences between the current paper and the cited paper (we call them citing areas). These passages can be considered as a kind of summary of the cited paper from the current author’s viewpoint. We can know the state of the art in a specific subject domain from the collection of citing areas. Further, if these citing areas are properly classified and organized, they can act as a kind of a review article. In our previous research, we proposed the automatic extraction of citing areas. Then, with the information in the citing areas, we automatically identified the types of citation relationships that indicate the reasons for citation (we call them citation types). Citation types offer a useful clue for organizing citing areas. In addition, to support writing a review article, it is necessary to take account of the contents of the papers together with the citation links and citation types. In this paper, we propose several methods for classifying papers automatically. We found that our proposed methods BCCT-C, the bibliographic coupling considering only type C citations, which pointed out the problems or gaps in related works, are more effective than others. We also implemented a prototype system to support writing a review article, which is based on our proposed method.", "title": "" }, { "docid": "d23d93fa41c98c0eafc98594b1a51aa0", "text": "Water stress caused by water scarcity has a negative impact on the wine industry. Several strategies have been implemented for optimizing water application in vineyards. 
In this regard, midday stem water potential (SWP) and thermal infrared (TIR) imaging for crop water stress index (CWSI) have been used to assess plant water stress on a vine-by-vine basis without considering the spatial variability. Unmanned Aerial Vehicle (UAV)-borne TIR images are used to assess the canopy temperature variability within vineyards that can be related to the vine water status. Nevertheless, when aerial TIR images are captured over canopy, internal shadow canopy pixels cannot be detected, leading to mixed information that negatively impacts the relationship between CWSI and SWP. This study proposes a methodology for automatic coregistration of thermal and multispectral images (ranging between 490 and 900 nm) obtained from a UAV to remove shadow canopy pixels using a modified scale invariant feature transformation (SIFT) computer vision algorithm and Kmeans++ clustering. Our results indicate that our proposed methodology improves the relationship between CWSI and SWP when shadow canopy pixels are removed from a drip-irrigated Cabernet Sauvignon vineyard. In particular, the coefficient of determination (R²) increased from 0.64 to 0.77. In addition, values of the root mean square error (RMSE) and standard error (SE) decreased from 0.2 to 0.1 MPa and 0.24 to 0.16 MPa, respectively. Finally, this study shows that the negative effect of shadow canopy pixels was higher in those vines with water stress compared with well-watered vines.", "title": "" }, { "docid": "b752e7513d4acbd0a0cd8991022f093e", "text": "One common strategy for dealing with large, complex models is to partition them into pieces that are easier to handle. While decomposition into convex components results in pieces that are easy to process, such decompositions can be costly to construct and often result in representations with an unmanageable number of components. In this paper, we propose an alternative partitioning strategy that decomposes a given polyhedron into “approximately convex” pieces. For many applications, the approximately convex components of this decomposition provide similar benefits as convex components, while the resulting decomposition is both significantly smaller and can be computed more efficiently. Indeed, for many models, an approximate convex decomposition can more accurately represent the important structural features of the model by providing a mechanism for ignoring insignificant features, such as wrinkles and other surface texture. We propose a simple algorithm to compute approximate convex decompositions of polyhedra of arbitrary genus to within a user specified tolerance. This algorithm measures the significance of the model’s features and resolves them in order of priority. As a by product, it also produces an elegant hierarchical representation of the model. We illustrate its utility in constructing an approximate skeleton of the model that results in significant performance gains over skeletons based on an exact convex decomposition. This research supported in part by NSF CAREER Award CCR-9624315, NSF Grants IIS-9619850, ACI-9872126, EIA-9975018, EIA-0103742, EIA-9805823, ACI-0113971, CCR-0113974, EIA-9810937, EIA-0079874, and by the Texas Higher Education Coordinating Board grant ARP-036327-017. Figure 1: Each component is approximately convex (concavity less than 10 by our measure). 
There are a total of 17 components.", "title": "" }, { "docid": "d18c77b3d741e1a7ed10588f6a3e75c0", "text": "Given only a few image-text pairs, humans can learn to detect semantic concepts and describe the content. For machine learning algorithms, they usually require a lot of data to train a deep neural network to solve the problem. However, it is challenging for the existing systems to generalize well to the few-shot multi-modal scenario, because the learner should understand not only images and texts but also their relationships from only a few examples. In this paper, we tackle two multi-modal problems, i.e., image captioning and visual question answering (VQA), in the few-shot setting.\n We propose Fast Parameter Adaptation for Image-Text Modeling (FPAIT) that learns to learn jointly understanding image and text data by a few examples. In practice, FPAIT has two benefits. (1) Fast learning ability. FPAIT learns proper initial parameters for the joint image-text learner from a large number of different tasks. When a new task comes, FPAIT can use a small number of gradient steps to achieve a good performance. (2) Robust to few examples. In few-shot tasks, the small training data will introduce large biases in Convolutional Neural Networks (CNN) and damage the learner's performance. FPAIT leverages dynamic linear transformations to alleviate the side effects of the small training set. In this way, FPAIT flexibly normalizes the features and thus reduces the biases during training. Quantitatively, FPAIT achieves superior performance on both few-shot image captioning and VQA benchmarks.", "title": "" }, { "docid": "25c59d905fc75d82b9c7ee1e8a17291e", "text": "The Path Ranking Algorithm (Lao and Cohen, 2010) is a general technique for performing link prediction in a graph. PRA has mainly been used for knowledge base completion (Lao et al., 2011; Gardner et al., 2013; Gardner et al., 2014), though the technique is applicable to any kind of link prediction task. To learn a prediction model for a particular edge type in a graph, PRA finds sequences of edge types (or paths) that frequently connect nodes that are instances of the edge type being predicted. PRA then uses those path types as features in a logistic regression model to infer missing edges in the graph. In this class project, we performed three separate experiments relating to different aspects of PRA: improving the efficiency of the algorithm, exploring the use of multiple languages in a knowledge base completion task, and using PRA-style features in sentencelevel prediction models. The first experiment looks at improving the efficiency and performance of link prediction in graphs by removing unnecessary steps from PRA. We introduce a simple technique that extracts features from the subgraph centered around a pair of nodes in the graph, and show that this method is an order of magnitude faster than PRA while giving significantly better performance. Additionally, this new model is more expressive than PRA, as it can handle arbitrary features extracted from the subgraphs, instead of only the relation sequences connecting the node pair. The new feature types we experimented with did not generally lead to better predictions, though further feature engineering may yield additional performance improvements. The second experiment we did with PRA extends recent work that performs knowledge base completion using a large parsed English corpus in conjunction with random walks over a knowledge base (Gardner et al., 2013; Gardner et al., 2014). 
This prior work showed significant performance gains when using the corpus along with the knowledge base, and even further gains by using abstract representations of the textual relations extracted from the corpus. In this experiment, we attempt to extend these results to a multilingual setting, with textual relations extracted from 10 different languages. We discuss the challenges that arise when dealing with data in languages for which parsers and entity linkers are not readily available, and show that previous techniques for obtaining abstract relation representations do not work in this setting. The final experiment takes a step towards a longstanding goal in artificial intelligence research: using a large collection of background knowledge to improve natural language understanding. We present a new technique for incorporating information from a knowledge base into sentence-level prediction tasks, and demonstrate its usefulness in one task in particular: relation extraction. We show that adding PRAstyle features generated from Freebase to an off-theshelf relation extraction model significantly improves its performance. This simple and general technique also outperforms prior work that learns knowledge base embeddings to improve prediction performance on the same task. In the remainder of this paper, we first give a brief introduction to the path ranking algorithm. Then we discuss each experiment in turn, with each section introducing the new methods, describing related work, and presenting experimental results.", "title": "" }, { "docid": "210a1dda2fc4390a5b458528b176341e", "text": "Many classic methods have shown non-local self-similarity in natural images to be an effective prior for image restoration. However, it remains unclear and challenging to make use of this intrinsic property via deep networks. In this paper, we propose a non-local recurrent network (NLRN) as the first attempt to incorporate non-local operations into a recurrent neural network (RNN) for image restoration. The main contributions of this work are: (1) Unlike existing methods that measure self-similarity in an isolated manner, the proposed non-local module can be flexibly integrated into existing deep networks for end-to-end training to capture deep feature correlation between each location and its neighborhood. (2) We fully employ the RNN structure for its parameter efficiency and allow deep feature correlation to be propagated along adjacent recurrent states. This new design boosts robustness against inaccurate correlation estimation due to severely degraded images. (3) We show that it is essential to maintain a confined neighborhood for computing deep feature correlation given degraded images. This is in contrast to existing practice [43] that deploys the whole image. Extensive experiments on both image denoising and super-resolution tasks are conducted. Thanks to the recurrent non-local operations and correlation propagation, the proposed NLRN achieves superior results to state-of-the-art methods with many fewer parameters. The code is available at https://github.com/Ding-Liu/NLRN.", "title": "" }, { "docid": "c5dee985cbfd6c22beca6e2dad895efa", "text": "Recently, convolutional neural networks (CNNs) have been used as a powerful tool to solve many problems of machine learning and computer vision. In this paper, we aim to provide insight on the property of convolutional neural networks, as well as a generic method to improve the performance of many CNN architectures. 
Specifically, we first examine existing CNN models and observe an intriguing property that the filters in the lower layers form pairs (i.e., filters with opposite phase). Inspired by our observation, we propose a novel, simple yet effective activation scheme called concatenated ReLU (CRelu) and theoretically analyze its reconstruction property in CNNs. We integrate CRelu into several state-of-the-art CNN architectures and demonstrate improvement in their recognition performance on CIFAR-10/100 and ImageNet datasets with fewer trainable parameters. Our results suggest that better understanding of the properties of CNNs can lead to significant performance improvement with a simple modification.", "title": "" }, { "docid": "58d406523b2951f2bf5514a775cd6c68", "text": "Donnai-Barrow syndrome [Faciooculoacousticorenal (FOAR) syndrome; DBS/FOAR] is a rare autosomal recessive disorder resulting from mutations in the LRP2 gene located on chromosome 2q31.1. We report a unique DBS/FOAR patient homozygous for a 4-bp LRP2 deletion secondary to paternal uniparental isodisomy for chromosome 2. The propositus inherited the mutation from his heterozygous carrier father, whereas the mother carried only wild-type LRP2 alleles. This is the first case of DBS/FOAR resulting from uniparental disomy (UPD) and the fourth published case of any paternal UPD 2 ascertained through unmasking of an autosomal recessive disorder. The absence of clinical symptoms above and beyond the classical phenotype in this and the other disorders suggests that paternal chromosome 2 is unlikely to contain imprinted genes notably affecting either growth or development. This report highlights the importance of parental genotyping in order to give accurate genetic counseling for autosomal recessive disorders.", "title": "" }, { "docid": "1b0ebf54bc1d534affc758ced7aef8de", "text": "We report our study of a silica-water interface using reactive molecular dynamics. This first-of-its-kind simulation achieves length and time scales required to investigate the detailed chemistry of the system. Our molecular dynamics approach is based on the ReaxFF force field of van Duin et al. [J. Phys. Chem. A 107, 3803 (2003)]. The specific ReaxFF implementation (SERIALREAX) and force fields are first validated on structural properties of pure silica and water systems. Chemical reactions between reactive water and dangling bonds on a freshly cut silica surface are analyzed by studying changing chemical composition at the interface. In our simulations, reactions involving silanol groups reach chemical equilibrium in approximately 250 ps. It is observed that water molecules penetrate a silica film through a proton-transfer process we call \"hydrogen hopping,\" which is similar to the Grotthuss mechanism. In this process, hydrogen atoms pass through the film by associating and dissociating with oxygen atoms within bulk silica, as opposed to diffusion of intact water molecules. The effective diffusion constant for this process, taken to be that of hydrogen atoms within silica, is calculated to be 1.68 x 10(-6) cm(2)/s. Polarization of water molecules in proximity of the silica surface is also observed. The subsequent alignment of dipoles leads to an electric potential difference of approximately 10.5 V between the silica slab and water.", "title": "" }, { "docid": "8f7f8bdcd23a50baf5fa77312a083218", "text": "Twenty five researchers from eight institutions and a variety of disciplines, viz. 
computer science, information security, knowledge management, law enforcement, psychology, organization science and system dynamics, found each other February 2004 in the “System Dynamics Modelling for Information Security: An Invitational Group Modeling Workshop” at Software Engineering Institute, Carnegie Mellon University. The exercise produced preliminary system dynamics models of insider and outsider cyber attacks that motivated five institutions, viz. Syracuse University, TECNUN at University of Navarra, CERT/CC at Carnegie Mellon University, University at Albany and Agder University College, to launch an interdisciplinary research proposal (Improving Organizational Security and Survivability by Suppression of Dynamic Triggers). This paper discusses the preliminary system dynamic maps of the insider cyber-threat and describes the main ideas behind the research proposal.", "title": "" }, { "docid": "63e052eb1d816aaff21ec2fa07b1c064", "text": "Diverse input data modalities can provide complementary cues for several tasks, usually leading to more robust algorithms and better performance. However, while a (training) dataset could be accurately designed to include a variety of sensory inputs, it is often the case that not all modalities could be available in real life (testing) scenarios, where a model has to be deployed. This raises the challenge of how to learn robust representations leveraging multimodal data in the training stage, while considering limitations at test time, such as noisy or missing modalities. This paper presents a new approach for multimodal video action recognition, developed within the unified frameworks of distillation and privileged information, named generalized distillation. Particularly, we consider the case of learning representations from depth and RGB videos, while relying on RGB data only at test time. We propose a new approach to train an hallucination network that learns to distill depth features through multiplicative connections of spatiotemporal representations, leveraging soft labels and hard labels, as well as distance between feature maps. We report state-of-the-art results on video action classification on the largest multimodal dataset available for this task, the NTU RGB+D. Code available at https://github.com/ncgarcia/ modality-distillation", "title": "" }, { "docid": "f53d8be1ec89cb8a323388496d45dcd0", "text": "While Processing-in-Memory has been investigated for decades, it has not been embraced commercially. A number of emerging technologies have renewed interest in this topic. In particular, the emergence of 3D stacking and the imminent release of Micron's Hybrid Memory Cube device have made it more practical to move computation near memory. However, the literature is missing a detailed analysis of a killer application that can leverage a Near Data Computing (NDC) architecture. This paper focuses on in-memory MapReduce workloads that are commercially important and are especially suitable for NDC because of their embarrassing parallelism and largely localized memory accesses. The NDC architecture incorporates several simple processing cores on a separate, non-memory die in a 3D-stacked memory package; these cores can perform Map operations with efficient memory access and without hitting the bandwidth wall. This paper describes and evaluates a number of key elements necessary in realizing efficient NDC operation: (i) low-EPI cores, (ii) long daisy chains of memory devices, (iii) the dynamic activation of cores and SerDes links. 
Compared to a baseline that is heavily optimized for MapReduce execution, the NDC design yields up to 15X reduction in execution time and 18X reduction in system energy.", "title": "" }, { "docid": "bceaded3710f8d6501aa1118d191aaaa", "text": "The human gut harbors a large and complex community of beneficial microbes that remain stable over long periods. This stability is considered critical for good health but is poorly understood. Here we develop a body of ecological theory to help us understand microbiome stability. Although cooperating networks of microbes can be efficient, we find that they are often unstable. Counterintuitively, this finding indicates that hosts can benefit from microbial competition when this competition dampens cooperative networks and increases stability. More generally, stability is promoted by limiting positive feedbacks and weakening ecological interactions. We have analyzed host mechanisms for maintaining stability—including immune suppression, spatial structuring, and feeding of community members—and support our key predictions with recent data.", "title": "" }, { "docid": "de54192eb6dc99512ff4c6b6461c9dcb", "text": "Maximum variance unfolding (MVU) is an effective heuristic for dimensionality reduction. It produces a low-dimensional representation of the data by maximizing the variance of their embeddings while preserving the local distances of the original data. We show that MVU also optimizes a statistical dependence measure which aims to retain the identity of individual observations under the distancepreserving constraints. This general view allows us to design “colored” variants of MVU, which produce low-dimensional representations for a given task, e.g. subject to class labels or other side information.", "title": "" }, { "docid": "53df34f620ed1073a373484e31045d69", "text": "Agricultural crop production depends on various factors such as biology, climate, economy and geography. Several factors have different impacts on agriculture, which can be quantified using appropriate statistical methodologies. Applying such methodologies and techniques on historical yield of crops, it is possible to obtain information or knowledge which can be helpful to farmers and government organizations for making better decisions and policies which lead to increased production. In this paper, our focus is on application of data mining techniques to extract knowledge from the agricultural data to estimate crop yield for major cereal crops in major districts of Bangladesh.", "title": "" }, { "docid": "b32cd3e2763400dfc96c61e489673a6b", "text": "This paper presents a hybrid cascaded multilevel inverter for electric vehicles (EV) / hybrid electric vehicles (HEV) and utility interface applications. The inverter consists of a standard 3-leg inverter (one leg for each phase) and H-bridge in series with each inverter leg. It can use only a single DC power source to supply a standard 3-leg inverter along with three full H-bridges supplied by capacitors or batteries. Both fundamental frequency and high switching frequency PWM methods are used for the hybrid multilevel inverter. An experimental 5 kW prototype inverter is built and tested. The above two switching control methods are validated and compared experimentally.", "title": "" }, { "docid": "79844bc05388cc1436bb5388e88f6daa", "text": "The growing number of Unmanned Aerial Vehicles (UAVs) is considerable in the last decades. 
Many flight test scenarios, including single and multi-vehicle formation flights, are demonstrated using different control algorithms with different test platforms. In this paper, we present a brief literature review on the development and key issues of current researches in the field of Fault-Tolerant Control (FTC) applied to UAVs. It consists of various intelligent or hierarchical control architectures for a single vehicle or a group of UAVs in order to provide potential solutions for tolerance to the faults, failures or damages in relevant to UAV components during flight. Among various UAV test-bed structures, a sample of every class of UAVs, including single-rotor, quadrotor, and fixed-wing types, are selected and briefly illustrated. Also, a short description of terms, definitions, and classifications of fault-tolerant control systems (FTCS) is presented before the main contents of review.", "title": "" }, { "docid": "c4d610eb523833a2ded2b0090d6c0337", "text": "In this paper, I argue that animal domestication, speciesism, and other modern human-animal interactions in North America are possible because of and through the erasure of Indigenous bodies and the emptying of Indigenous lands for settler-colonial expansion. That is, we cannot address animal oppression or talk about animal liberation without naming and subsequently dismantling settler colonialism and white supremacy as political machinations that require the simultaneous exploitation and/or erasure of animal and Indigenous bodies. I begin by re-framing animality as a politics of space to suggest that animal bodies are made intelligible in the settler imagination on stolen, colonized, and re-settled Indigenous lands. Thinking through Andrea Smith’s logics of white supremacy, I then re-center anthropocentrism as a racialized and speciesist site of settler coloniality to re-orient decolonial thought toward animality. To critique the ways in which Indigenous bodies and epistemologies are at stake in neoliberal re-figurings of animals as settler citizens, I reject the colonial politics of recognition developed in Sue Donaldson and Will Kymlicka’s recent monograph, Zoopolis: A Political Theory of Animal Rights (Oxford University Press 2011) because it militarizes settler-colonial infrastructures of subjecthood and governmentality. I then propose a decolonized animal ethic that finds legitimacy in Indigenous cosmologies to argue that decolonization can only be reified through a totalizing disruption of those power apparatuses (i.e., settler colonialism, anthropocentrism, white supremacy, and neoliberal pluralism) that lend the settler state sovereignty, normalcy, and futurity insofar as animality is a settler-colonial particularity.", "title": "" } ]
scidocsrr
25d272830d67bbb69950bae76e5bb27f
Neural Network Methods for Natural Language Processing
[ { "docid": "8caaea6ffb668c019977809773a6d8c5", "text": "In the past several years, a number of different language modeling improvements over simple trigram models have been found, including caching, higher-order n-grams, skipping, interpolated Kneser–Ney smoothing, and clustering. We present explorations of variations on, or of the limits of, each of these techniques, including showing that sentence mixture models may have more potential. While all of these techniques have been studied separately, they have rarely been studied in combination. We compare a combination of all techniques together to a Katz smoothed trigram model with no count cutoffs. We achieve perplexity reductions between 38 and 50% (1 bit of entropy), depending on training data size, as well as a word error rate reduction of 8 .9%. Our perplexity reductions are perhaps the highest reported compared to a fair baseline. c © 2001 Academic Press", "title": "" } ]
[ { "docid": "ab83a1395132b843ff63642b4f8841be", "text": "A `region' is an important concept in interpreting 3D point cloud data since regions may correspond to objects in a scene. To correctly interpret 3D point cloud data, we need to partition the dataset into regions that correspond to objects or parts of an object. In this paper, we present a region growing approach that combines global (topological) and local (color, surface normal) information to segment 3D point cloud data. Using ideas from persistent homology theory, our algorithm grows a simplicial complex representation of the point cloud dataset. At each step in the growth process we compute the zeroth homology group of the complex, which corresponds to the number of connected components, and use color and surface normal statistics to build regions. Lastly, we extract out the segmented regions of the dataset. We show that this method provides a stable segmentation of point cloud data in the presence of noise and poorly sampled data, thus providing advantages over contemporary region-based segmentation techniques.", "title": "" }, { "docid": "7e02da9e8587435716db99396c0fbbc7", "text": "To examine thrombus formation in a living mouse, new technologies involving intravital videomicroscopy have been applied to the analysis of vascular windows to directly visualize arterioles and venules. After vessel wall injury in the microcirculation, thrombus development can be imaged in real time. These systems have been used to explore the role of platelets, blood coagulation proteins, endothelium, and the vessel wall during thrombus formation. The study of biochemistry and cell biology in a living animal offers new understanding of physiology and pathology in complex biologic systems.", "title": "" }, { "docid": "ab50f458d919ba3ac3548205418eea62", "text": "Department of Microbiology, School of Life Sciences, Bharathidasan University, Tiruchirappali 620 024, Tamilnadu, India. Department of Medical Biotechnology, Sri Ramachandra University, Porur, Chennai 600 116, Tamilnadu, India. CAS Marine Biology, Annamalai University, Parangipettai 608 502, Tamilnadu, India. Department of Zoology, DDE, Annamalai University, Annamalai Nagar 608 002, Tamilnadu, India Asian Pacific Journal of Tropical Disease (2012)S291-S295", "title": "" }, { "docid": "adac9cbc59aea46821aaebad3bcc1772", "text": "Multidetector computed tomography (MDCT) has emerged as an effective imaging technique to augment forensic autopsy. Postmortem change and decomposition are always present at autopsy and on postmortem MDCT because they begin to occur immediately upon death. Consequently, postmortem change and decomposition on postmortem MDCT should be recognized and not mistaken for a pathologic process or injury. Livor mortis increases the attenuation of vasculature and dependent tissues on MDCT. It may also produce a hematocrit effect with fluid levels in the large caliber blood vessels and cardiac chambers from dependent layering erythrocytes. Rigor mortis and algor mortis have no specific MDCT features. In contrast, decomposition through autolysis, putrefaction, and insect and animal predation produce dramatic alterations in the appearance of the body on MDCT. Autolysis alters the attenuation of organs. The most dramatic autolytic changes on MDCT are seen in the brain where cerebral sulci and ventricles are effaced and gray-white matter differentiation is lost almost immediately after death. 
Putrefaction produces a pattern of gas that begins with intravascular gas and proceeds to gaseous distension of all anatomic spaces, organs, and soft tissues. Knowledge of the spectrum of postmortem change and decomposition is an important component of postmortem MDCT interpretation.", "title": "" }, { "docid": "09925a9676d78a0a7f44570988ae20c7", "text": "Cocultures of two human cell lines, Caco-2 and HT29-MTX mucus-producing cells, have been incorporated into an in vitro digestion/cell culture model used to predict iron bioavailability. A range of different foods were subjected to in vitro digestion, and iron bioavailability from digests was assessed with Caco-2, Caco-2 overlaid with porcine mucin, HT29-MTX or cocultures of Caco-2 and HT29-MTX at varying ratios. It was found that increasing the ratio of HT29-MTX cells decreased the amount of ferritin formed and resulted in an overall decline in the ability of the model to detect differences in iron bioavailability. At the physiologically relevant ratios of 90% Caco-2/10% HT29-MTX and 75% Caco-2/25% HT29-MTX, however, a mucus layer completely covered the cell monolayer and the in vitro digestion model was nearly as responsive to changes in sample iron bioavailability as pure Caco-2 cultures. The in vitro digestion/Caco-2 cell culture model correlates well with human iron bioavailability studies, but, as mucus appears to play a role in iron absorption, the addition of a physiologically realistic mucus layer and goblet-type cells to this model may give more accurate iron bioavailability predictions.", "title": "" }, { "docid": "29e07bf313daaa3f6bf1d67224f6e4b6", "text": "An overview of the high-frequency reflectometer technology deployed in Anritsu’s VectorStar Vector Network Analyzer (VNA) family is given, leading to a detailed description of the architecture used to extend the frequency range of VectorStar into the high millimeter waves. It is shown that this technology results in miniature frequency-extension modules that provide unique capabilities such as direct connection to wafer probes, dense multi-port measurements, test-port power leveling, enhanced raw directivity, and reduced measurement complexity when compared with existing solutions. These capabilities, combined with the frequency-scalable nature of the reflectometers provide users with a unique and compelling solution for their current and future high-frequency measurement needs.", "title": "" }, { "docid": "eb22a8448b82f6915850fe4d60440b3b", "text": "In story-based games or other interactive systems, a drama manager (DM) is an omniscient agent that acts to bring about a particular sequence of plot points for the player to experience. Traditionally, the DM's narrative evaluation criteria are solely derived from a human designer. We present a DM that learns a model of the player's storytelling preferences and automatically recommends a narrative experience that is predicted to optimize the player's experience while conforming to the human designer's storytelling intentions. Our DM is also capable of manipulating the space of narrative trajectories such that the player is more likely to make choices that result in the recommended experience. Our DM uses a novel algorithm, called prefix-based collaborative filtering (PBCF), that solves the sequential recommendation problem to find a sequence of plot points that maximizes the player's rating of his or her experience. We evaluate our DM in an interactive storytelling environment based on choose-your-own-adventure novels. 
Our experiments show that our algorithms can improve the player's experience over the designer's storytelling intentions alone and can deliver more personalized experiences than other interactive narrative systems while preserving players' agency.", "title": "" }, { "docid": "f6a5f4280a8352157164d6abc1259a45", "text": "A new robust lane marking detection algorithm for monocular vision is proposed. It is designed for the urban roads with disturbances and with the weak lane markings. The primary contribution of the paper is that it supplies a robust adaptive method of image segmentation, which employs jointly prior knowledge, statistical information and the special geometrical features of lane markings in the bird's-eye view. This method can eliminate many disturbances while keep points of lane markings effectively. Road classification can help us extract more accurate and simple characteristics of lane markings, so the second contribution of the paper is that it uses the row information of image to classify road conditions into three kinds and uses different strategies to complete lane marking detection. The experimental results have shown the high performance of our algorithm in various road scenes.", "title": "" }, { "docid": "3dec58b53bbf19a43fdb81d09e2614cb", "text": "This study is part of a doctoral thesis on the topic of Hyperfiction: Past, Present and Future of Storytelling through Hypertext. It explores in depth the impact of transmedia storytelling and the role of hypertext in the realm of the currently popular social media phenomenon Pokémon GO. Storytelling is a powerful method to engage and unite people. Moreover, the technology progress adds a whole new angle to the method, with hypertext and cross-platform sharing that enhance the traditional storytelling so much that transmedia storytelling gives unlimited opportunities to affect the everyday life of people across the globe. This research aims at examining the transmedia storytelling approach in Pokémon GO, and explaining how that contributed to its establishment as a massive worldwide hit in less than a week. The social engagement is investigated in all major media platforms, including traditional and online media channels. Observation and content analyses are reported in this paper to form the conclusion that transmedia storytelling with the input of hypertext has a promising future as a method of establishing a productive and rewarding communication strategy. Keywords—Communication, hypertext, Pokémon GO, storytelling, transmedia.", "title": "" }, { "docid": "b73526f1fb0abb4373421994dbd07822", "text": "in our country around 2.78% of peoples are not able to speak (dumb). Their communications with others are only using the motion of their hands and expressions. We proposed a new technique called artificial speaking mouth for dumb people. It will be very helpful to them for conveying their thoughts to others. Some peoples are easily able to get the information from their motions. The remaining is not able to understand their way of conveying the message. In order to overcome the complexity the artificial mouth is introduced for the dumb peoples. This system is based on the motion sensor. According to dumb people, for every motion they have a meaning. That message is kept in a database. Likewise all templates are kept in the database. In the real time the template database is fed into a microcontroller and the motion sensor is fixed in their hand. 
For every action the motion sensors get accelerated and give the signal to the microcontroller. The microcontroller matches the motion with the database and produces the speech signal. The output of the system is using the speaker. By properly updating the database the dumb will speak like a normal person using the artificial mouth. The system also includes a text to speech conversion (TTS) block that interprets the matched gestures.", "title": "" }, { "docid": "4d70f4c4bd83e2ee531071ef99cac317", "text": "Image features such as step edges, lines and Mach bands all give rise to points where the Fourier components of the image are maximally in phase. The use of phase congruency for marking features has signiicant advantages over gradient based methods. It is a dimension-less quantity that is invariant to changes in image brightness or contrast, hence it provides an absolute measure of the signiicance of feature points. This allows the use of universal threshold values that can be applied over wide classes of images. This paper presents a new way of calculating phase congruency through the use of wavelets. The existing theory that has been developed for 1D signals is extended to allow the calculation of phase congruency in 2D images. It is shown that for good localization it is important to consider the spread of frequencies present at a point of phase congruency. An eeective method for identifying, and compensating for, the level of noise in an image is presented. Finally, it is argued that high-pass ltering should be used to obtain image information at diierent scales. With this approach the choice of scale only aaects the relative signiicance of features without degrading their localization. Abstract Image features such as step edges, lines and Mach bands all give rise to points where the Fourier components of the image are maximally in phase. The use of phase congruency for marking features has signiicant advantages over gradient based methods. It is a dimensionless quantity that is invariant to changes in image brightness or contrast, hence it provides an absolute measure of the signiicance of feature points. This allows the use of universal threshold values that can be applied over wide classes of images. This paper presents a new way of calculating phase congruency through the use of wavelets. The existing theory that has been developed for 1D signals is extended to allow the calculation of phase congruency in 2D images. It is shown that for good localization it is important to consider the spread of frequencies present at a point of phase congruency. An eeective method for identifying, and compensating for, the level of noise in an image is presented. Finally, it is argued that high-pass ltering should be used to obtain image information at diierent scales. With this approach the choice of scale only aaects the relative signiicance of features without degrading their localization.", "title": "" }, { "docid": "7550ec8917588a6adb629e3d1beabd76", "text": "This paper describes the algorithm for deriving the total column ozone from spectral radiances and irradiances measured by the Ozone Monitoring Instrument (OMI) on the Earth Observing System Aura satellite. The algorithm is based on the differential optical absorption spectroscopy technique. The main characteristics of the algorithm as well as an error analysis are described. The algorithm has been successfully applied to the first available OMI data. 
First comparisons with ground-based instruments are very encouraging and clearly show the potential of the method.", "title": "" }, { "docid": "ce34bb39b5048f80e849ddf7a476d89d", "text": "We propose a method to find the community structure in complex networks based on an extremal optimization of the value of modularity. The method outperforms the optimal modularity found by the existing algorithms in the literature giving a better understanding of the community structure. We present the results of the algorithm for computer-simulated and real networks and compare them with other approaches. The efficiency and accuracy of the method make it feasible to be used for the accurate identification of community structure in large complex networks.", "title": "" }, { "docid": "08aa9d795464d444095bbb73c067c2a9", "text": "Next-generation sequencing (NGS) is a rapidly evolving set of technologies that can be used to determine the sequence of an individual's genome [1] by calling genetic variants present in an individual using billions of short, errorful sequence reads [2]. Despite more than a decade of effort and thousands of dedicated researchers, the hand-crafted and parameterized statistical models used for variant calling still produce thousands of errors and missed variants in each genome [3,4]. Here we show that a deep convolutional neural network [5] can call genetic variation in aligned next-generation sequencing read data by learning statistical relationships (likelihoods) between images of read pileups around putative variant sites and ground-truth genotype calls. This approach, called DeepVariant, outperforms existing tools, even winning the \"highest performance\" award for SNPs in an FDA-administered variant calling challenge. The learned model generalizes across genome builds and even to other species, allowing non-human sequencing projects to benefit from the wealth of human ground truth data. We further show that, unlike existing tools which perform well on only a specific technology, DeepVariant can learn to call variants in a variety of sequencing technologies and experimental designs, from deep whole genomes from 10X Genomics to Ion Ampliseq exomes. DeepVariant represents a significant step from expert-driven statistical modeling towards more automatic deep learning approaches for developing software to interpret biological instrumentation data. Main Text Calling genetic variants from NGS data has proven challenging because NGS reads are not only errorful (with rates from ~0.1-10%) but result from a complex error process that depends on properties of the instrument, preceding data processing tools, and the genome sequence itself. State-of-the-art variant callers use a variety of statistical techniques to model these error processes and thereby accurately identify differences between the reads and the reference genome caused by real genetic variants and those arising from errors in the reads. For example, the widely-used GATK uses logistic regression to model base errors, hidden Markov models to compute read likelihoods, and naive Bayes classification to identify variants, which are then filtered to remove likely false positives using a Gaussian mixture model
with hand-crafted features capturing common error modes [6]. These techniques allow the GATK to achieve high but still imperfect accuracy on the Illumina sequencing platform. Generalizing these models to other sequencing technologies has proven difficult due to the need for manual retuning or extending these statistical models (see e.g. Ion Torrent [8,9]), a major problem in an area with such rapid technological progress [1]. Here we describe a variant caller for NGS data that replaces the assortment of statistical modeling components with a single, deep learning model. Deep learning is a revolutionary machine learning technique applicable to a variety of domains, including image classification [10], translation, gaming, and the life sciences [14–17]. This toolchain, which we call DeepVariant (Figure 1), begins by finding candidate SNPs and indels in reads aligned to the reference genome with high sensitivity but low specificity. The deep learning model, using the Inception-v2 architecture, emits probabilities for each of the three diploid genotypes at a locus using a pileup image of the reference and read data around each candidate variant (Figure 1). The model is trained using labeled true genotypes, after which it is frozen and can then be applied to novel sites or samples. Throughout the following experiments, DeepVariant was trained on an independent set of samples or variants from those being evaluated. This deep learning model has no specialized knowledge about genomics or next-generation sequencing, and yet can learn to call genetic variants more accurately than state-of-the-art methods. When applied to the Platinum Genomes Project NA12878 data [18], DeepVariant produces a callset with better performance than the GATK when evaluated on the held-out chromosomes of the Genome in a Bottle ground truth set (Figure 2A). For further validation, we sequenced 35 replicates of NA12878 using a standard whole-genome sequencing protocol and called variants on 27 replicates using a GATK best-practices pipeline and DeepVariant using a model trained on the other eight replicates (see methods). Not only does DeepVariant produce more accurate results but it does so with greater consistency across a variety of quality metrics (Figure 2B). To further confirm the performance of DeepVariant, we submitted variant calls for a blinded sample, NA24385, to the Food and Drug Administration-sponsored variant calling Truth Challenge in May 2016 and won the "highest performance" award for SNPs by an independent team using a different evaluation methodology. Like many variant calling algorithms, GATK relies on a model that assumes read errors are independent. Though long-recognized as an invalid assumption [2], the true likelihood function that models multiple reads simultaneously is unknown [6,19,20]. Because DeepVariant presents an image of all of the reads relevant for a putative variant together, the convolutional neural network (CNN) is able to account for the complex dependence among the reads by virtue of being a universal approximator [21]. This manifests itself as a tight concordance between the estimated probability of error from the likelihood function and the observed error rate, as seen in Figure 2C where DeepVariant's CNN is well calibrated, strikingly more so than the GATK. 
That the CNN has approximated this true, but unknown, inter-dependent likelihood function is the essential technical advance enabling us to replace the hand-crafted statistical models in other approaches with a single deep learning model and still achieve such high performance in variant calling. We further explored how well DeepVariant’s CNN generalizes beyond its training data. First, a model trained with read data aligned to human genome build GRCh37 and applied to reads aligned to GRCh38 has similar performance (overall F1 = 99.45%) to one trained on GRCh38 and then applied to GRCh38 (overall F1 = 99.53%), thereby demonstrating that a model learned from one version of the human genome reference can be applied to other versions with effectively no loss in accuracy (Table S1). Second, models learned using human reads and ground truth data achieve high accuracy when applied to a mouse dataset [22] (F1 = 98.29%), out-performing training on the mouse data itself (F1 = 97.84%, Table S4). This last experiment is especially demanding as not only do the species differ but nearly all of the sequencing parameters do as well: 50x 2x148bp from an Illumina TruSeq prep sequenced on a HiSeq 2500 for the human sample and 27x 2x100bp reads from a custom sequencing preparation run on an Illumina Genome Analyzer II for mouse. Thus, DeepVariant is robust to changes in sequencing depth, preparation protocol, instrument type, genome build, and even species. The practical benefits of this capability are substantial, as DeepVariant enables resequencing projects in non-human species, which often have no ground truth data to guide their efforts, to leverage the large and growing ground truth data in humans. To further assess its capabilities, we trained DeepVariant to call variants in eight datasets from Genome in a Bottle [24] that span a variety of sequencing instruments and protocols, including whole genome and exome sequencing technologies, with read lengths from fifty to many thousands of basepairs (Table 1 and S6). We used the already processed BAM files to introduce additional variability as these BAMs differ in their alignment and cleaning steps. The results of this experiment all exhibit a characteristic pattern: the candidate variants have the highest sensitivity but a low PPV (mean 57.6%), which varies significantly by dataset. After retraining, all of the callsets achieve high PPVs (mean of 99.3%) while largely preserving the candidate callset sensitivity (mean loss of 2.3%). The high PPVs and low loss of sensitivity indicate that DeepVariant can learn a model that captures the technology-specific error processes in sufficient detail to separate real variation from false positives with high fidelity for many different sequencing technologies. As we have already shown above that DeepVariant performs well on Illumina WGS data, we analyze here the behavior of DeepVariant on two non-Illumina WGS datasets and two exome datasets from Illumina and Ion Torrent. The SOLID and Pacific Biosciences (PacBio) WGS datasets have high error rates in the candidate callsets. 
SOLID (13.9% PPV for SNPs, 96.2% for indels, and 14.3% overall) has many SNP artifacts from mapping the short, color-space reads. The PacBio dataset is the opposite, with many false indels (79.8% PPV for SNPs, 1.4% for indels, and 22.1% overall) due to this technology's high indel error rate. Training DeepVariant to call variants in an exome is likely to be particularly challenging. Exomes have far fewer variants (~20-30k) than found in a whole-genome (~4-5M) [26]. T", "title": "" }, { "docid": "26a6ba8cba43ddfd3cac0c90750bf4ad", "text": "Mobile applications usually need to be provided for more than one operating system. Developing native apps separately for each platform is a laborious and expensive undertaking. Hence, cross-platform approaches have emerged, most of them based on Web technologies. While these enable developers to use a single code base for all platforms, resulting apps lack a native look & feel. This, however, is often desired by users and businesses. Furthermore, they have a low abstraction level. We propose MD2, an approach for model-driven cross-platform development of apps. With MD2, developers specify an app in a high-level (domain-specific) language designed for describing business apps succinctly. From this model, purely native apps for Android and iOS are automatically generated. MD2 was developed in close cooperation with industry partners and provides means to develop data-driven apps with a native look and feel. Apps can access the device hardware and interact with remote servers.", "title": "" }, { "docid": "ca384725ef293d63e700d0a31fd8e7dd", "text": "Attaching next-generation non-volatile memories (NVMs) to the main memory bus provides low-latency, byte-addressable access to persistent data that should significantly improve performance for a wide range of storage-intensive workloads. We present an analysis of storage application performance with non-volatile main memory (NVMM) using a hardware NVMM emulator that allows fine-grain tuning of NVMM performance parameters. Our evaluation results show that NVMM improves storage application performance significantly over flash-based SSDs and HDDs. We also compare the performance of applications running on realistic NVMM with the performance of the same applications running on idealized NVMM with the same performance as DRAM. We find that although NVMM is projected to have higher latency and lower bandwidth than DRAM, these differences have only a modest impact on application performance. A much larger drag on NVMM performance is the cost of ensuring data resides safely in the NVMM (rather than the volatile caches) so that applications can make strong guarantees about persistence and consistency. In response, we propose an optimized approach to flushing data from CPU caches that minimizes this cost. Our evaluation shows that this technique significantly improves performance for applications that require strict durability and consistency guarantees over large regions of memory.", "title": "" }, { "docid": "b18261d40726ad4b4c950f86ad19293a", "text": "The role mining problem has received considerable attention recently. Among the many solutions proposed, the Boolean matrix decomposition (BMD) formulation has stood out, which essentially discovers roles by decomposing the binary matrix representing user-to-permission assignment (UPA) into two matrices: user-to-role assignment (UA) and permission-to-role assignment (PA). 
However, supporting certain embedded constraints, such as separation of duty (SoD) and exceptions, is critical to the role mining process. Otherwise, the mined roles may not capture the inherent constraints of the access control policies of the organization. None of the previously proposed role mining solutions, including BMD, take into account these underlying constraints while mining. In this paper, we extend the BMD so that it reflects such embedded constraints by proposing to allow negative permissions in roles or negative role assignments for users. Specifically, by allowing negative permissions in roles, we are often able to use less roles to reconstruct the same given user-permission assignments. Moreover, from the resultant roles we can discover underlying constraints such as separation of duty constraints. This feature is not supported by any existing role mining approaches. Hence, we call the role mining problem with negative authorizations the constraint-aware role mining problem (CRM). We also explore other interesting variants of the CRM, which may occur in real situations. To enable CRM and its variants, we propose a novel approach, extended Boolean matrix decomposition (EBMD), which addresses the ineffectiveness of BMD in its ability of capturing underlying constraints. We analyze the computational complexity for each of CRM variants and present heuristics for problems that are proven to be NP-hard.", "title": "" }, { "docid": "d602cafe18d720f024da1b36c9283ba5", "text": "Associations between materialism and peer relations are likely to exist in elementary school children but have not been studied previously. The first two studies introduce a new Perceived Peer Group Pressures (PPGP) Scale suitable for this age group, demonstrating that perceived pressure regarding peer culture (norms for behavioral, attitudinal, and material characteristics) can be reliably measured and that it is connected to children's responses to hypothetical peer pressure vignettes. Studies 3 and 4 evaluate the main theoretical model of associations between peer relations and materialism. Study 3 supports the hypothesis that peer rejection is related to higher perceived peer culture pressure, which in turn is associated with greater materialism. Study 4 confirms that the endorsement of social motives for materialism mediates the relationship between perceived peer pressure and materialism.", "title": "" }, { "docid": "2ef92113a901df268261be56f5110cfa", "text": "This paper studies the problem of finding a priori shortest paths to guarantee a given likelihood of arriving on-time in a stochastic network. Such ‘‘reliable” paths help travelers better plan their trips to prepare for the risk of running late in the face of stochastic travel times. Optimal solutions to the problem can be obtained from local-reliable paths, which are a set of non-dominated paths under first-order stochastic dominance. We show that Bellman’s principle of optimality can be applied to construct local-reliable paths. Acyclicity of local-reliable paths is established and used for proving finite convergence of solution procedures. The connection between the a priori path problem and the corresponding adaptive routing problem is also revealed. A label-correcting algorithm is proposed and its complexity is analyzed. A pseudo-polynomial approximation is proposed based on extreme-dominance. An extension that allows travel time distribution functions to vary over time is also discussed. 
We show that the time-dependent problem is decomposable with respect to arrival times and therefore can be solved as easily as its static counterpart. Numerical results are provided using typical transportation networks.", "title": "" }, { "docid": "91713d85bdccb2c06d7c50365bd7022c", "text": "A 1 Mbit MRAM, a nonvolatile memory that uses magnetic tunnel junction (MTJ) storage elements, has been characterized for total ionizing dose (TID) and single event latchup (SEL). Our results indicate that these devices show no single event latchup up to an effective LET of 84 MeV-cm2/mg (where our testing ended) and no bit failures to a TID of 75 krad (Si).", "title": "" } ]
scidocsrr
6334b24b7cde7d9cf52a7c597fcd83bd
Dynamic Graph Convolutional Networks
[ { "docid": "05a4ec72afcf9b724979802b22091fd4", "text": "Convolutional neural networks (CNNs) have greatly improved state-of-the-art performances in a number of fields, notably computer vision and natural language processing. In this work, we are interested in generalizing the formulation of CNNs from low-dimensional regular Euclidean domains, where images (2D), videos (3D) and audios (1D) are represented, to high-dimensional irregular domains such as social networks or biological networks represented by graphs. This paper introduces a formulation of CNNs on graphs in the context of spectral graph theory. We borrow the fundamental tools from the emerging field of signal processing on graphs, which provides the necessary mathematical background and efficient numerical schemes to design localized graph filters efficient to learn and evaluate. As a matter of fact, we introduce the first technique that offers the same computational complexity than standard CNNs, while being universal to any graph structure. Numerical experiments on MNIST and 20NEWS demonstrate the ability of this novel deep learning system to learn local, stationary, and compositional features on graphs, as long as the graph is well-constructed.", "title": "" } ]
[ { "docid": "029687097e06ed2d0132ca2fce393129", "text": "The V-band systems have been widely used in the aerospace industry for securing spacecraft inside the launch vehicle payload fairing. Separation is initiated by firing pyro-devices to rapidly release the tension bands. A significant shock transient is expected as a result of the band separation. The shock environment is defined with the assumption that the shock events due to the band separation are associated with the rapid release of the strain energy from the preload tension of the restraining band.", "title": "" }, { "docid": "9bbb8ff8e8d498709ee68c6797b00588", "text": "Studies often report that bilingual participants possess a smaller vocabulary in the language of testing than monolinguals, especially in research with children. However, each study is based on a small sample so it is difficult to determine whether the vocabulary difference is due to sampling error. We report the results of an analysis of 1,738 children between 3 and 10 years old and demonstrate a consistent difference in receptive vocabulary between the two groups. Two preliminary analyses suggest that this difference does not change with different language pairs and is largely confined to words relevant to a home context rather than a school context.", "title": "" }, { "docid": "f3abf5a6c20b6fff4970e1e63c0e836b", "text": "We demonstrate a physically-based technique for predicting the drape of a wide variety of woven fabrics. The approach exploits a theoretical model that explicitly represents the microstructure of woven cloth with interacting particles, rather than utilizing a continuum approximation. By testing a cloth sample in a Kawabata fabric testing device, we obtain data that is used to tune the model's energy functions, so that it reproduces the draping behavior of the original material. Photographs, comparing the drape of actual cloth with visualizations of simulation results, show that we are able to reliably model the unique large-scale draping characteristics of distinctly different fabric types.", "title": "" }, { "docid": "427838c8fb3c97a12350a61ea10db350", "text": "A novel PAM4 receiver with adaptive threshold voltage, adaptive decision feedback equalizer and fixed linear equalizer has been presented. The proposed techniques enable threshold voltage to be adjusted automatically depending on data rate, signal swing and loss of channel. Consequently the receiver can be used in various situations without being manually calibrated. Adaptive decision feedback equalizer for PAM4 signaling is proposed, incorporated with sign-sign least mean square algorithm. Simulation results across lossy channel show proper convergence of threshold voltage and decision feedback equalizer values with the proposed receiver.", "title": "" }, { "docid": "5637bed8be75d7e79a2c2adb95d4c28e", "text": "BACKGROUND\nLimited evidence exists to show that adding a third agent to platinum-doublet chemotherapy improves efficacy in the first-line advanced non-small-cell lung cancer (NSCLC) setting. The anti-PD-1 antibody pembrolizumab has shown efficacy as monotherapy in patients with advanced NSCLC and has a non-overlapping toxicity profile with chemotherapy. We assessed whether the addition of pembrolizumab to platinum-doublet chemotherapy improves efficacy in patients with advanced non-squamous NSCLC.\n\n\nMETHODS\nIn this randomised, open-label, phase 2 cohort of a multicohort study (KEYNOTE-021), patients were enrolled at 26 medical centres in the USA and Taiwan. 
Patients with chemotherapy-naive, stage IIIB or IV, non-squamous NSCLC without targetable EGFR or ALK genetic aberrations were randomly assigned (1:1) in blocks of four stratified by PD-L1 tumour proportion score (<1% vs ≥1%) using an interactive voice-response system to 4 cycles of pembrolizumab 200 mg plus carboplatin area under curve 5 mg/mL per min and pemetrexed 500 mg/m2 every 3 weeks followed by pembrolizumab for 24 months and indefinite pemetrexed maintenance therapy or to 4 cycles of carboplatin and pemetrexed alone followed by indefinite pemetrexed maintenance therapy. The primary endpoint was the proportion of patients who achieved an objective response, defined as the percentage of patients with radiologically confirmed complete or partial response according to Response Evaluation Criteria in Solid Tumors version 1.1 assessed by masked, independent central review, in the intention-to-treat population, defined as all patients who were allocated to study treatment. Significance threshold was p<0·025 (one sided). Safety was assessed in the as-treated population, defined as all patients who received at least one dose of the assigned study treatment. This trial, which is closed for enrolment but continuing for follow-up, is registered with ClinicalTrials.gov, number NCT02039674.\n\n\nFINDINGS\nBetween Nov 25, 2014, and Jan 25, 2016, 123 patients were enrolled; 60 were randomly assigned to the pembrolizumab plus chemotherapy group and 63 to the chemotherapy alone group. 33 (55%; 95% CI 42-68) of 60 patients in the pembrolizumab plus chemotherapy group achieved an objective response compared with 18 (29%; 18-41) of 63 patients in the chemotherapy alone group (estimated treatment difference 26% [95% CI 9-42%]; p=0·0016). The incidence of grade 3 or worse treatment-related adverse events was similar between groups (23 [39%] of 59 patients in the pembrolizumab plus chemotherapy group and 16 [26%] of 62 in the chemotherapy alone group). The most common grade 3 or worse treatment-related adverse events in the pembrolizumab plus chemotherapy group were anaemia (seven [12%] of 59) and decreased neutrophil count (three [5%]); an additional six events each occurred in two (3%) for acute kidney injury, decreased lymphocyte count, fatigue, neutropenia, and sepsis, and thrombocytopenia. In the chemotherapy alone group, the most common grade 3 or worse events were anaemia (nine [15%] of 62) and decreased neutrophil count, pancytopenia, and thrombocytopenia (two [3%] each). One (2%) of 59 patients in the pembrolizumab plus chemotherapy group experienced treatment-related death because of sepsis compared with two (3%) of 62 patients in the chemotherapy group: one because of sepsis and one because of pancytopenia.\n\n\nINTERPRETATION\nCombination of pembrolizumab, carboplatin, and pemetrexed could be an effective and tolerable first-line treatment option for patients with advanced non-squamous NSCLC. This finding is being further explored in an ongoing international, randomised, double-blind, phase 3 study.\n\n\nFUNDING\nMerck & Co.", "title": "" }, { "docid": "00c78a8e51268322e1ae2009e0221c38", "text": "This survey attempts to provide a comprehensive and structured overview of the existing research for the problem of detecting anomalies in discrete/symbolic sequences. The objective is to provide a global understanding of the sequence anomaly detection problem and how existing techniques relate to each other. 
The key contribution of this survey is the classification of the existing research into three distinct categories, based on the problem formulation that they are trying to solve. These problem formulations are: 1) identifying anomalous sequences with respect to a database of normal sequences; 2) identifying an anomalous subsequence within a long sequence; and 3) identifying a pattern in a sequence whose frequency of occurrence is anomalous. We show how each of these problem formulations is characteristically distinct from each other and discuss their relevance in various application domains. We review techniques from many disparate and disconnected application domains that address each of these formulations. Within each problem formulation, we group techniques into categories based on the nature of the underlying algorithm. For each category, we provide a basic anomaly detection technique, and show how the existing techniques are variants of the basic technique. This approach shows how different techniques within a category are related or different from each other. Our categorization reveals new variants and combinations that have not been investigated before for anomaly detection. We also provide a discussion of relative strengths and weaknesses of different techniques. We show how techniques developed for one problem formulation can be adapted to solve a different formulation, thereby providing several novel adaptations to solve the different problem formulations. We also highlight the applicability of the techniques that handle discrete sequences to other related areas such as online anomaly detection and time series anomaly detection.", "title": "" }, { "docid": "f213bc5b5a16b381262aefe842babc59", "text": "Optogenetic methodology enables direct targeting of specific neural circuit elements for inhibition or excitation while spanning timescales from the acute (milliseconds) to the chronic (many days or more). Although the impact of this temporal versatility and cellular specificity has been greater for basic science than clinical research, it is natural to ask whether the dynamic patterns of neural circuit activity discovered to be causal in adaptive or maladaptive behaviors could become targets for treatment of neuropsychiatric diseases. Here, we consider the landscape of ideas related to therapeutic targeting of circuit dynamics. Specifically, we highlight optical, ultrasonic, and magnetic concepts for the targeted control of neural activity, preclinical/clinical discovery opportunities, and recently reported optogenetically guided clinical outcomes.", "title": "" }, { "docid": "4731a95b14335a84f27993666b192bba", "text": "Blockchain has been applied to study data privacy and network security recently. In this paper, we propose a punishment scheme based on the action record on the blockchain to suppress the attack motivation of the edge servers and the mobile devices in the edge network. The interactions between a mobile device and an edge server are formulated as a blockchain security game, in which the mobile device sends a request to the server to obtain real-time service or launches attacks against the server for illegal security gains, and the server chooses to perform the request from the device or attack it. 
The Nash equilibria (NEs) of the game are derived and the conditions that each NE exists are provided to disclose how the punishment scheme impacts the adversary behaviors of the mobile device and the edge server.", "title": "" }, { "docid": "c5bc0cd14aa51c24a00107422fc8ca10", "text": "This paper proposes a new high-voltage Pulse Generator (PG), fed from low voltage dc supply Vs. This input supply voltage is utilized to charge two arms of N series-connected modular multilevel converter sub-module capacitors sequentially through a resistive-inductive branch, such that each arm is charged to NVS. With a step-up nano-crystalline transformer of n turns ratio, the proposed PG is able to generate bipolar rectangular pulses of peak ±nNVs, at high repetition rates. However, equal voltage-second area of consecutive pulse pair polarities should be assured to avoid transformer saturation. Not only symmetrical pulses can be generated, but also asymmetrical pulses with equal voltage-second areas are possible. The proposed topology is tested via simulations and a scaled-down experimentation, which establish the viability of the topology for water treatment applications.", "title": "" }, { "docid": "3a466fd05c021b8bd48600246086aaa2", "text": "Recent empirical work has examined the extent to which international trade fosters international “spillovers” of technological information. FDI is an alternate, potentially equally important channel for the mediation of such knowledge spillovers. I introduce a framework for measuring international knowledge spillovers at the firm level, and I use this framework to directly test the hypothesis that FDI is a channel of knowledge spillovers for Japanese multinationals undertaking direct investments in the United States. Using an original firm-level panel data set on Japanese firms’ FDI and innovative activity, I find evidence that FDI increases the flow of knowledge spillovers both from and to the investing Japanese firms. ∗ This paper is a revision of Branstetter (2000a). I would like to thank Natasha Hsieh, Masami Imai,Yoko Kusaka, Grace Lin, Kentaro Minato, Kaoru Nabeshima, and Yoshiaki Ogura for excellent research assistance. I also thank Paul Almeida, Jonathan Eaton, Bronwyn Hall, Takatoshi Ito, Adam Jaffe, Wolfgang Keller, Yoshiaki Nakamura, James Rauch, Mariko Sakakibara, Ryuhei Wakasugi, two anonymous referees, and seminar participants at UC-Davis, UC-Berkeley, Boston University, UC-Boulder, Brandeis University, Columbia University, Cornell University, Northwestern University, UC-San Diego, the World Bank, the University of Michigan, the Research Institute of Economy, Trade, and Industry, and the NBER for valuable comments. Funding was provided by a University of California Faculty Research Grant, a grant from the Japan Foundation Center for Global Partnership, and the NBER Project on Industrial Technology and Productivity. Note that parts of this paper borrow from Branstetter (2000b) and from Branstetter and Nakamura (2003). I am solely responsible for any errors. ** Lee Branstetter, Columbia Business School, Uris Hall 815, 3022 Broadway, New York, NY 10027; TEL 212-854-2722; FAX 212-854-9895; E-mail lgb2001@columbia.edu", "title": "" }, { "docid": "8df49a873585755ec3a23a314846e851", "text": "We propose a reparameterization of LSTM that brings the benefits of batch normalization to recurrent neural networks. 
Whereas previous works only apply batch normalization to the input-to-hidden transformation of RNNs, we demonstrate that it is both possible and beneficial to batch-normalize the hidden-to-hidden transition, thereby reducing internal covariate shift between time steps. We evaluate our proposal on various sequential problems such as sequence classification, language modeling and question answering. Our empirical results show that our batch-normalized LSTM consistently leads to faster convergence and improved generalization.", "title": "" }, { "docid": "dbe84ebcf821995c6d7eb64fcbde5381", "text": "Researchers occasionally have to work with an extremely small sample size, defined herein as N ≤ 5. Some methodologists have cautioned against using the t-test when the sample size is extremely small, whereas others have suggested that using the t-test is feasible in such a case. The present simulation study estimated the Type I error rate and statistical power of the oneand two-sample ttests for normally distributed populations and for various distortions such as unequal sample sizes, unequal variances, the combination of unequal sample sizes and unequal variances, and a lognormal population distribution. Ns per group were varied between 2 and 5. Results show that the t-test provides Type I error rates close to the 5% nominal value in most of the cases, and that acceptable power (i.e., 80%) is reached only if the effect size is very large. This study also investigated the behavior of the Welch test and a rank-transformation prior to conducting the t-test (t-testR). Compared to the regular t-test, the Welch test tends to reduce statistical power and the t-testR yields false positive rates that deviate from 5%. This study further shows that a paired t-test is feasible with extremely small Ns if the within-pair correlation is high. It is concluded that there are no principal objections to using a t-test with Ns as small as 2. A final cautionary note is made on the credibility of research findings when sample sizes are small.", "title": "" }, { "docid": "cb0803dfd3763199519ff3f4427e1298", "text": "Motion deblurring is a long standing problem in computer vision and image processing. In most previous approaches, the blurred image is modeled as the convolution of a latent intensity image with a blur kernel. However, for images captured by a real camera, the blur convolution should be applied to scene irradiance instead of image intensity and the blurred results need to be mapped back to image intensity via the camera’s response function (CRF). In this paper, we present a comprehensive study to analyze the effects of CRFs on motion deblurring. We prove that the intensity-based model closely approximates the irradiance model at low frequency regions. However, at high frequency regions such as edges, the intensity-based approximation introduces large errors and directly applying deconvolution on the intensity image will produce strong ringing artifacts even if the blur kernel is invertible. Based on the approximation error analysis, we further develop a dualimage based solution that captures a pair of sharp/blurred images for both CRF estimation and motion deblurring. Experiments on synthetic and real images validate our theories and demonstrate the robustness and accuracy of our approach.", "title": "" }, { "docid": "02a12d06ff649c6b959facfd913c417b", "text": "Many people who are mobility impaired are, for a variety of reasons, incapable of using an ordinary wheelchair. 
In some instances, a power wheelchair also cannot be used, usually because of the difficulty the person has in controlling it (often due to additional disabilities). This paper describes two lowcost robotic wheelchair prototypes that assist the operator of the chair in avoiding obstacles, going to pre-designated places, and maneuvering through doorways and other narrow or crowded areas. These systems can be interfaced to a variety of input devices, and can give the operator as much or as little moment by moment control of the chair as they wish. This paper describes both systems, the evolution from one system to another, and the lessons learned.", "title": "" }, { "docid": "62319a41108f8662f6237a3935ffa8c6", "text": "This interpretive study examined how the marriage renewal ritual reflects the social construction of marriage in the United States. Two culturally prominent ideologies of marriage were interwoven in our interviews of 25 married persons who had renewed their marriage vows: (a) a dominant ideology of community and (b) a more muted ideology of individualism. The ideology of community was evidenced by a construction of marriage featuring themes of public accountability, social embeddedness, and permanence. By contrast, the ideology of individualism constructed marriage around themes of love, choice, and individual growth. Most interpersonal communication scholars approach the study of marriage in one of two ways: (a) marriage as context, or (b) marriage as outcome. In contrast, in the present study we adopt an alternative way to envision marriage: marriage as cultural performance. We frame this study using two complementary theoretical perspectives: social constructionism and ritual performance theory. In particular, we examine how the cultural performance of marriage renewal rituals reflects the social construction of marriage in the United States. In an interpretive analysis of interviews with marital partners who had recently renewed their marriage vows, we examine the extent to which the two most prominent ideological perspectives on marriage—individualism and community—organize the meaning of marriage for our participants. B AXTE R AND B RAITHWAITE , SOUTHE RN C OM M UNICA TION J OURNA L 6 7 (2 0 0 2 ) 2 The Socially Contested Construction of Marriage Communication scholars interested in face-to-face interaction tend to adopt one of two general approaches to the study of marriage, what Whitchurch and Dickson (1999) have called the interpersonal communication approach and the family communication approach. The family communication approach, with which the present study is aligned, views communication as constitutive of the family. That is, through their communicative practices, parties construct their social reality of who their family is and the meanings that organize it. From this constitutive, or social constructionist perspective, social reality is an ongoing process of producing and reproducing meanings and social patterns through the interchanges among people (Berger & Luckmann, 1966; Burr, 1995; Gergen, 1994). From a family communication perspective, marriage is thus an ongoing discursive accomplishment. It is achieved through a myriad of interaction practices, including but not limited to, private exchanges between husbands and wives, exchanges between the couple and their extended kinship and friendship networks, public and private rituals such as weddings and anniversaries, and public discourse by politicians and others surrounding family values. 
Whitchurch and Dickson (1999) argued that, by contrast, the interpersonal communication approach views marriage as an independent or a dependent variable whose functioning in the cause-and-effect world of human behavior can be determined. For example, interpersonal communication scholars often frame marriage as an antecedent contextual variable in examining how various communicative phenomena are enacted in married couples compared with nonmarried couples, or in the premarital compared with postmarital stages of relationship development. Interpersonal communication scholars often also consider marriage as a dependent variable in examining which causal variables lead courtship pairs to marry or keep married couples from breaking up, such as the extent to which such communication phenomena as conflict or disclosive openness during courtship predict whether a couple will wed. Advocates of a constitutive or social constructionist perspective argue that the discursive production and reproduction of the social order is far from the univocal, consensually based model that scholars once envisioned (Baxter & Montgomery, 1996). Instead, the social world is a cross-current of multiple, often competing, conflictual perspectives. The social order is wrought from multivocal negotiations in which different interests, ideologies, and beliefs interact on an ongoing basis. The process of “social ordering” is not a monologic conversation of seamless coherence and consensus; rather, it is a pluralistic cacophony of discursive renderings, a multiplicity of negotiations in which different lived experiences and different systems of meaning are at stake (Billig, Condor, Edwards, Gane, Middleton, & Radley, 1988; Shotter, 1993). As Bakhtin (1981) expressed: “Every concrete utterance . . . serves as a point where centrifugal as well as centripetal forces are brought to bear. The processes of centralization and decentralization, of unification and disunification, intersect in the utterance” (p. 272). Thus, interaction events are enacted dialogically, with multiple “voices,” or perspectives, competing for discursive dominance or privilege as the hegemonic, centripetal center of a given cultural conversation in the moment. Social life is a collection of dialogues between centripetal and centrifugal groups, beliefs, ideologies, and perspectives. B AXTE R AND B RAITHWAITE , SOUTHE RN C OM M UNICA TION J OURNA L 6 7 (2 0 0 2 ) 3 In modern American society, the institution of marriage is subject to endless negotiation by those who enact and discuss it. Existing research suggests that marriage is a contested terrain whose boundary is disputed by scholars and laypersons alike. One belief is that marriage is essentially the isolated domain of the two married spouses, a private haven separate from the obligations and constraints of the broader social order. The other belief is that marriage is a social institution that is embedded practically and morally in the broader society. Bellah and his colleagues (Bellah, Madsen, Sullivan, Swidler, & Tipton, 1985) have argued that this “boundary dispute” surrounding marriage reflects an omnipresent ideological tension in the American society that can be traced to precolonial times—a tension between the cultural strands of utilitarian/expressive individualism and moral/ social community. 
The marriage of utilitarian/expressive individualism emphasizes freedom from societal traditions and obligations, privileging instead its private existence in fulfilling the emotional and psychological needs of the two spouses. Marriage, according to this ideology, is not conceived as a binding obligation; rather, it is viewed as existing only as the expression of the choices of the free selves who constitute the union. Marriage is built on love for partner, expressive openness between partners, self-development, and self-gratification. It is a psychological contract negotiated between self-fulfilled individuals acting in their own self-interests. Should marriage cease to be gratifying to the selves in it, it should naturally end. Bellah et al. (1985) argue that this conception of marriage dominates the discursive landscape of modern American society, occupying, in Bakhtin’s (1981) terms, the centripetal center. By contrast, the moral/social community view of marriage emphasizes its existence as a social institution with obligations to uphold traditional values of life-long commitment and duty, and to cohere with other social institutions in maintaining the existing moral and social order. According to this second ideology, marriage is anchored by social obligation—expectations, duties, and accountabilities to others. In this way, marriage is grounded in its ties to the larger society and is not simply a private haven for emotional gratification and intimacy for the two spouses. Bellah et al. (1985) argue that this view of marriage, although clearly distinguishable in the discursive landscape of modern American society, occupies the centrifugal margin rather than the hegemonic center in modern social constructions of marriage in the United States. These two cultural ideologies of marriage also are readily identifiable in existing social scientific research on marital communication (Allan, 1993). The “private haven” ideology is the one that dominates existing research on communication in marriage (Milardo & Wellman, 1992). In this sort of research on marital communication, scholars draw a clear boundary demarcation around the spousal unit and proceed to understand how marriage works by directing their empirical gaze inward to the psychological characteristics of the two married persons and the interactions that take place within this dyad (Duck, 1993). By contrast, other more sociologically oriented scholars who study communication in marriage emphasize that the marital relationship is different from its nonmarital counterparts of romantic and cohabiting couples precisely because of its status as an institutionalized social unit (e.g., McCall, McCall, Denzin, Suttles, & Kurth, 1970). Scholars who adopt the latter view direct their empirical gaze outside marital dyads to examine how marriage is B AXTE R AND B RAITHWAITE , SOUTHE RN C OM M UNICA TION J OURNA L 6 7 (2 0 0 2 ) 4 enacted in the presence of societal influences, such as legitimization and acceptance of a pair by their kinship and friendship networks, and societal barriers to marital dissolution (e.g., Milardo, 1988). A third approach to the study of marriage is identifiable in the growing number of dialogically oriented scholars interested in communication in personal relationships who are pointing to the status of marriage as simultaneously a private culture of two as well as an institutionalized element of the broader social order (e.g., Brown, Airman, Werner, 1992; Montgomery, 1992). 
According to Shotter (1993) and Bellah et al. (1985), couples face this dilemma of double accountability on an ongoing basis. Although the ideology of utilitarian/expressive individualism is given dominance, “most Americans are, in fact, caught between ideals of freedom and obligation” (Bellah et al., p. 102). For example, Shotter", "title": "" }, { "docid": "34bf7fb014f5b511943526c28407cb4b", "text": "Mobile devices can be maliciously exploited to violate the privacy of people. In most attack scenarios, the adversary takes the local or remote control of the mobile device, by leveraging a vulnerability of the system, hence sending back the collected information to some remote web service. In this paper, we consider a different adversary, who does not interact actively with the mobile device, but he is able to eavesdrop the network traffic of the device from the network side (e.g., controlling a Wi-Fi access point). The fact that the network traffic is often encrypted makes the attack even more challenging. In this paper, we investigate to what extent such an external attacker can identify the specific actions that a user is performing on her mobile apps. We design a system that achieves this goal using advanced machine learning techniques. We built a complete implementation of this system, and we also run a thorough set of experiments, which show that our attack can achieve accuracy and precision higher than 95%, for most of the considered actions. We compared our solution with the three state-of-the-art algorithms, and confirming that our system outperforms all these direct competitors.", "title": "" }, { "docid": "c84a0f630b4fb2e547451d904e1c63a5", "text": "Deep neural network training spends most of the computation on examples that are properly handled, and could be ignored. We propose to mitigate this phenomenon with a principled importance sampling scheme that focuses computation on “informative” examples, and reduces the variance of the stochastic gradients during training. Our contribution is twofold: first, we derive a tractable upper bound to the persample gradient norm, and second we derive an estimator of the variance reduction achieved with importance sampling, which enables us to switch it on when it will result in an actual speedup. The resulting scheme can be used by changing a few lines of code in a standard SGD procedure, and we demonstrate experimentally, on image classification, CNN fine-tuning, and RNN training, that for a fixed wall-clock time budget, it provides a reduction of the train losses of up to an order of magnitude and a relative improvement of test errors between 5% and 17%.", "title": "" }, { "docid": "eec49660659bb9b60173ababb3a8435f", "text": "Control-Flow Integrity (CFI) is a defense which prevents control-flow hijacking attacks. While recent research has shown that coarse-grained CFI does not stop attacks, fine-grained CFI is believed to be secure. We argue that assessing the effectiveness of practical CFI implementations is non-trivial and that common evaluation metrics fail to do so. We then evaluate fullyprecise static CFI — the most restrictive CFI policy that does not break functionality — and reveal limitations in its security. Using a generalization of non-control-data attacks which we call Control-Flow Bending (CFB), we show how an attacker can leverage a memory corruption vulnerability to achieve Turing-complete computation on memory using just calls to the standard library. 
We use this attack technique to evaluate fully-precise static CFI on six real binaries and show that in five out of six cases, powerful attacks are still possible. Our results suggest that CFI may not be a reliable defense against memory corruption vulnerabilities. We further evaluate shadow stacks in combination with CFI and find that their presence for security is necessary: deploying shadow stacks removes arbitrary code execution capabilities of attackers in three of six cases.", "title": "" }, { "docid": "c1e177c1d79001a46e8e47c51c78efcc", "text": "This paper will present details of a system that allows for an evolutionary introduction of depth perception into the existing 2D digital TV framework. The work is part of the European Information Society Technologies (IST) project “Advanced Three-Dimensional Television System Technologies” (ATTEST), an activity, where industries, research centers and universities have joined forces to design a backwards-compatible, flexible and modular broadcast 3D-TV system [1]. In contrast to former proposals, which often relied on the basic concept of “stereoscopic” video, this new idea is based on a more flexible joint transmission of monoscopic video and associated per-pixel depth information. From this data representation, one or more “virtual” views of the 3D scene can then be synthesized in real-time at the receiver side by means of so-called depthimage-based rendering (DIBR) techniques. This paper (a) highlights the advantages of this new approach on 3DTV and (b) develops an efficient algorithm for the generation of “virtual” 3D views that can be reproduced on any stereoscopicor autostereoscopic 3D-TV display.", "title": "" }, { "docid": "ba4600c9c8e4c1bfcec9fa8fcde0f05c", "text": "While things (i.e., technologies) play a crucial role in creating and shaping meaningful, positive experiences, their true value lies only in the resulting experiences. It is about what we can do and experience with a thing, about the stories unfolding through using a technology, not about its styling, material, or impressive list of features. This paper explores the notion of \"experiences\" further: from the link between experiences, well-being, and people's developing post-materialistic stance to the challenges of the experience market and the experience-driven design of technology.", "title": "" } ]
scidocsrr
392ef3975bc406ffcc06aa161c292839
High-Efficiency Current-Regulated Charge Pump for a White LED Driver
[ { "docid": "6a850977378e1d371003174b511b833e", "text": "DC to DC converters which are realized as a switched capacitor type are very common in display and lighting low power solutions. This applications requires a lot of different supply voltage rails with different demand on the output power. E.g. a thin-film-transistor (TFT) LCD panel requires 24V - 35V at 10mA - 50mA to switch pixel transistors. A current-mode regulated charge pump generates this rail from a 12V -18V input supply. The selected charge pump topology uses a simple \"gear box\" controlled by a single feedback amplifier to select between doubler and tripler modes of operation. This feedback amplifier simultaneously regulates output voltage and selects the operating mode. The new topology allows less circuit complexity compare to common \"gear box\" solutions. Therefore this converter system with flexible conversion ratios to support different VIN/VOUT ratios and load currents is the ideal solution to be most efficient. Also the advantages of current-mode control where one transistor in the output stage operates as a regulated current source on the low voltage rail which can theoretically support any power stage topology will be demonstrated. The proposed charge pump is realized in a BICMOS process with HV extension. Finally the measurement results especially regarding mode transitions will be shown.", "title": "" } ]
[ { "docid": "46fb68fc33453605c14e36d378c5e23e", "text": "This article may be used for research, teaching and private study purposes. Any substantial or systematic reproduction, redistribution , reselling , loan or sub-licensing, systematic supply or distribution in any form to anyone is expressly forbidden. The publisher does not give any warranty express or implied or make any representation that the contents will be complete or accurate or up to date. The accuracy of any instructions, formulae and drug doses should be independently verified with primary sources. The publisher shall not be liable for any loss, actions, claims, proceedings, demand or costs or damages whatsoever or howsoever caused arising directly or indirectly in connection with or arising out of the use of this material. Meaning in life is thought to be important to well-being throughout the human life span. We assessed the structure, levels, and correlates of the presence of meaning in life, and the search for meaning, within four life stage groups: emerging adulthood, young adulthood, middle-age adulthood, and older adulthood. Results from a sample of Internet users (N ¼ 8756) demonstrated the structural invariance of the meaning measure used across life stages. Those at later life stages generally reported a greater presence of meaning in their lives, whereas those at earlier life stages reported higher levels of searching for meaning. Correlations revealed that the presence of meaning has similar relations to well-being across life stages, whereas searching for meaning is more strongly associated with well-being deficits at later life stages. Introduction Meaning in life has enjoyed a renaissance of interest in recent years, and is considered to be an important component of broader well-being (e. Perceptions of meaning in life are thought to be related to the development of a coherent sense of one's identity (Heine, Proulx, & Vohs, 2006), and the process of creating a sense of meaning theoretically begins in adolescence, continuing throughout life (Fry, 1998). Meaning creation should then be linked to individual development, and is likely to unfold in conjunction with other processes, such as the development of identity, relationships, and goals. Previous research has revealed that people experience different levels of the presence of meaning at different ages (e.g., Ryff & Essex, 1992), although these findings have been inconsistent, and inquiries have generally focused on limited age ranges (e.g., Pinquart, 2002). The present study aimed to integrate research on dimensions of meaning in life across the life span by providing an analysis …", "title": "" }, { "docid": "1a6ece40fa87e787f218902eba9b89f7", "text": "Learning a similarity function between pairs of objects is at the core of learning to rank approaches. In information retrieval tasks we typically deal with query-document pairs, in question answering -- question-answer pairs. However, before learning can take place, such pairs needs to be mapped from the original space of symbolic words into some feature space encoding various aspects of their relatedness, e.g. lexical, syntactic and semantic. Feature engineering is often a laborious task and may require external knowledge sources that are not always available or difficult to obtain. 
Recently, deep learning approaches have gained a lot of attention from the research community and industry for their ability to automatically learn optimal feature representation for a given task, while claiming state-of-the-art performance in many tasks in computer vision, speech recognition and natural language processing. In this paper, we present a convolutional neural network architecture for reranking pairs of short texts, where we learn the optimal representation of text pairs and a similarity function to relate them in a supervised way from the available training data. Our network takes only words in the input, thus requiring minimal preprocessing. In particular, we consider the task of reranking short text pairs where elements of the pair are sentences. We test our deep learning system on two popular retrieval tasks from TREC: Question Answering and Microblog Retrieval. Our model demonstrates strong performance on the first task, beating previous state-of-the-art systems by about 3% absolute points in both MAP and MRR, and shows comparable results on tweet reranking, while enjoying the benefits of no manual feature engineering and no additional syntactic parsers.", "title": "" }, { "docid": "4eb205978a12b780dc26909bee0eebaa", "text": "This paper introduces CPE, the CIRCE Plugin for Eclipse. The CPE adds to the open-source development environment Eclipse the ability to write and analyse software requirements written in natural language. Models of the software described by the requirements can be examined on-line during the requirements writing process. Initial UML models and skeleton Java code can be generated from the requirements, and imported into Eclipse for further editing and analysis.", "title": "" }, { "docid": "21daaa29b6ff00af028f3f794b0f04b7", "text": "In recent years, we have been experiencing the mushrooming and increased use of web tools enabling Internet users to both create and distribute content (multimedia information). These tools, referred to as Web 2.0 technologies or applications, can be considered the tools of mass collaboration, since they empower Internet users to actively participate and simultaneously collaborate with other Internet users for producing, consuming and diffusing the information and knowledge being distributed through the Internet. In other words, Web 2.0 tools do nothing more than realising and exploiting the full potential of the genuine concept and role of the Internet (i.e. the network of the networks that is created and exists for its users). The content and information generated by users of Web 2.0 technologies are having a tremendous impact not only on the profile, expectations and decision making behaviour of Internet users, but also on the e-business models that businesses need to develop and/or adapt. The tourism industry is not an exception from such developments. On the contrary, as information is the lifeblood of the tourism industry, the use and diffusion of Web 2.0 technologies have a substantial impact on both tourism demand and supply. Indeed, many new types of tourism cyber-intermediaries have been created that are nowadays challenging the e-business model of existing cyberintermediaries that only a few years ago were threatening the existence of intermediaries! 
In this vein, the purpose of this article is to analyse the major applications of Web 2.0 technologies in the tourism and hospitality industry by presenting their impact on both demand and supply.", "title": "" }, { "docid": "af6b26efef62f3017a0eccc5d2ae3c33", "text": "Universal, intelligent, and multifunctional devices controlling power distribution and measurement will become the enabling technology of the Smart Grid ICT. In this paper, we report on a novel automation architecture which supports distributed multiagent intelligence, interoperability, and configurability and enables efficient simulation of distributed automation systems. The solution is based on the combination of IEC 61850 object-based modeling and interoperable communication with IEC 61499 function block executable specification. Using the developed simulation environment, we demonstrate the possibility of multiagent control to achieve self-healing grid through collaborative fault location and power restoration.", "title": "" }, { "docid": "df5c2e6b4f2137bb078264629cbc7c40", "text": "Pregnant women in malarious areas may experience a variety of adverse consequences from malaria infection including maternal anemia, placental accumulation of parasites, low birth weight (LBW) from prematurity and intrauterine growth retardation (IUGR), fetal parasite exposure and congenital infection, and infant mortality (IM) linked to preterm-LBW and IUGR-LBW. We reviewed studies between 1985 and 2000 and summarized the malaria population attributable risk (PAR) that accounts for both the prevalence of the risk factors in the population and the magnitude of the associated risk for anemia, LBW, and IM. Consequences from anemia and human immunodeficiency virus infection in these studies were also considered. Population attributable risks were substantial: malaria was associated with anemia (PAR range = 3-15%), LBW (8-14%), preterm-LBW (8-36%), IUGR-LBW (13-70%), and IM (3-8%). Human immunodeficiency virus was associated with anemia (PAR range = 12-14%), LBW (11-38%), and direct transmission in 20-40% of newborns, with direct mortality consequences. Maternal anemia was associated with LBW (PAR range = 7-18%), and fetal anemia was associated with increased IM (PAR not available). We estimate that each year 75,000 to 200,000 infant deaths are associated with malaria infection in pregnancy. The failure to apply known effective antimalarial interventions through antenatal programs continues to contribute substantially to infant deaths globally.", "title": "" }, { "docid": "fd0dccac0689390e77a0cc1fb14e5a34", "text": "Chromatin remodeling is a complex process shaping the nucleosome landscape, thereby regulating the accessibility of transcription factors to regulatory regions of target genes and ultimately managing gene expression. The SWI/SNF (switch/sucrose nonfermentable) complex remodels the nucleosome landscape in an ATP-dependent manner and is divided into the two major subclasses Brahma-associated factor (BAF) and Polybromo Brahma-associated factor (PBAF) complex. Somatic mutations in subunits of the SWI/SNF complex have been associated with different cancers, while germline mutations have been associated with autism spectrum disorder and the neurodevelopmental disorders Coffin–Siris (CSS) and Nicolaides–Baraitser syndromes (NCBRS). CSS is characterized by intellectual disability (ID), coarsening of the face and hypoplasia or absence of the fifth finger- and/or toenails. 
So far, variants in five of the SWI/SNF subunit-encoding genes ARID1B, SMARCA4, SMARCB1, ARID1A, and SMARCE1 as well as variants in the transcription factor-encoding gene SOX11 have been identified in CSS-affected individuals. ARID2 is a member of the PBAF subcomplex, which until recently had not been linked to any neurodevelopmental phenotypes. In 2015, mutations in the ARID2 gene were associated with intellectual disability. In this study, we report on two individuals with private de novo ARID2 frameshift mutations. Both individuals present with a CSS-like phenotype including ID, coarsening of facial features, other recognizable facial dysmorphisms and hypoplasia of the fifth toenails. Hence, this study identifies mutations in the ARID2 gene as a novel and rare cause for a CSS-like phenotype and enlarges the list of CSS-like genes.", "title": "" }, { "docid": "7b947253c48c50917cb3f116f0dc6b64", "text": "Ebola outbreaks occur on a frequent basis, with the 2014-2015 outbreak in West Africa being the largest one ever recorded. This outbreak has resulted in over 11,000 deaths in four African countries and has received international attention and intervention. Although there are currently no approved therapies or vaccines, many promising candidates are undergoing clinical trials, and several have had success in promoting recovery from Ebola. However, these prophylactics and therapeutics have been designed and tested only against the same species of Ebola virus as the one causing the current outbreak. Future outbreaks involving other species would require reformulation and possibly redevelopment. Therefore, a broad-spectrum alternative is highly desirable. We have found that a flavonoid derivative called quercetin 3-β-O-d-glucoside (Q3G) has the ability to protect mice from Ebola even when given as little as 30 min prior to infection. Furthermore, we have demonstrated that this compound targets the early steps of viral entry. Most promisingly, antiviral activity against two distinct species of Ebola virus was seen. This study serves as a proof of principle that Q3G has potential as a prophylactic against Ebola virus infection.", "title": "" }, { "docid": "4702fceea318c326856cc2a7ae553e1f", "text": "The Institute of Medicine identified “timeliness” as one of six key “aims for improvement” in its most recent report on quality. Yet patient delays remain prevalent, resulting in dissatisfaction, adverse clinical consequences, and often, higher costs. This tutorial describes several areas in which patients routinely experience significant and potentially dangerous delays and presents operations research (OR) models that have been developed to help reduce these delays, often at little or no cost. I also describe the difficulties in developing and implementing models as well as the factors that increase the likelihood of success. Finally, I discuss the opportunities, large and small, for using OR methodologies to significantly impact practices and policies that will affect timely access to healthcare.", "title": "" }, { "docid": "13afc7b4786ee13c6b0bfb1292f50153", "text": "Heavy metals are discharged into water from various industries. They can be toxic or carcinogenic in nature and can cause severe problems for humans and aquatic ecosystems. Thus, the removal of heavy metals fromwastewater is a serious problem. The adsorption process is widely used for the removal of heavy metals from wastewater because of its low cost, availability and eco-friendly nature. 
Both commercial adsorbents and bioadsorbents are used for the removal of heavy metals from wastewater, with high removal capacity. This review article aims to compile scattered information on the different adsorbents that are used for heavy metal removal and to provide information on the commercially available and natural bioadsorbents used for removal of chromium, cadmium and copper, in particular. This is an Open Access article distributed under the terms of the Creative Commons Attribution Licence (CC BY-NC-ND 4.0), which permits copying and redistribution for non-commercial purposes with no derivatives, provided the original work is properly cited (http://creativecommons.org/licenses/by-nc-nd/4.0/). doi: 10.2166/wrd.2016.104 Renu Madhu Agarwal (corresponding author) K. Singh Department of Chemical Engineering, Malaviya National Institute of Technology, JLN Marg, Jaipur 302017, India E-mail: madhunaresh@gmail.com", "title": "" }, { "docid": "6ccb3015999dc06095033366712580b4", "text": "This paper deals with the problem of taking random samples over the surface of a 3D mesh describing and evaluating efficient algorithms for generating different distributions. We discuss first the problem of generating a Monte Carlo distribution in an efficient and practical way avoiding common pitfalls. Then, we propose Constrained Poisson-disk sampling, a new Poisson-disk sampling scheme for polygonal meshes which can be easily tweaked in order to generate customized set of points such as importance sampling or distributions with generic geometric constraints. In particular, two algorithms based on this approach are presented. An in-depth analysis of the frequency characterization and performance of the proposed algorithms are also presented and discussed.", "title": "" }, { "docid": "04fd45380cc99b4b650318c0df7627a6", "text": "Research and development of recommender systems has been a vibrant field for over a decade, having produced proven methods for “preference-aware” computing. Recommenders use community opinion histories to help users identify interesting items from a considerably large search space (e.g., inventory from Amazon [7], movies from Netflix [9]). Personalization, recommendation, and the “human side” of data-centric applications are even becoming important topics in the data management community [3]. A popular recommendation method used heavily in practice is collaborative filtering, consisting of two phases: (1) An offline model-building phase that uses community opinions of items (e.g., movie ratings, “Diggs” [6]) to build a model storing meaningful correlations between users and items. (2) An on-demand recommendation phase that uses the model to produce a set of recommended items when requested from a user or application. To be effective, recommender systems must evolve with their content. In current update-intensive systems (e.g., social networks, online news sites), the restriction that a model be generated offline is a significant drawback, as it hinders the system's ability to evolve quickly. For instance, new users enter the system changing the collective opinions over items, or the system adds new items quickly (e.g., news posts, Facebook postings), which widens the recommendation pool. These updates affect the recommender model, that in turn affect the system's recommendation quality in terms of providing accurate answers to recommender queries. In such systems, a completely real-time recommendation process is paramount.
Unfortunately, most traditional state-of-the-art recommenders are \"hand-built\", implemented as custom software not built for a real-time recommendation process [1]. Further, for some", "title": "" }, { "docid": "c6bdd8d88dd2f878ddc6f2e8be39aa78", "text": "A wide variety of non-photorealistic rendering techniques make use of random variation in the placement or appearance of primitives. In order to avoid the \"shower-door\" effect, this random variation should move with the objects in the scene. Here we present coherent noise tailored to this purpose. We compute the coherent noise with a specialized filter that uses the depth and velocity fields of a source sequence. The computation is fast and suitable for interactive applications like games.", "title": "" }, { "docid": "4636d13d277b3bc33485f72375f0a30f", "text": "The human papillomavirus (HPV) has an affinity for squamous cells of stratified keratinized epithelium, thus affecting the lower genital, nasal, and oral tracts. In the oral cavity, HPV is associated with pathoses such as the verruca vulgaris (common wart), squamous cell papilloma, condyloma acuminatum (venereal wart), and focal epithelial hyperplasia (Heck disease). Among the treatments available for these lesions are cryotherapy, electrosurgery, surgical removal, laser therapy, and trichloroacetic acid (TCA). The objective of this research was to determine the behavior of HPV-associated oral pathoses treated with TCA. A prospective cohort study was performed in 20 patients who attended a dental consultation at 2 universities in Cartagena, Colombia. Among the patients, 65% were diagnosed as having focal epithelial hyperplasia, 20% as having verrucae vulgares, and 15% as having condylomata acuminata. Application of TCA to HPV-associated oral lesions proved to be a useful nonsurgical alternative treatment, as the resolution of the lesions was achieved atraumatically in a span of 45 days with 3 applications of 30-60 seconds each.", "title": "" }, { "docid": "1450c2025de3ea31271c9d6c56be016f", "text": "The vast increase in clinical data has the potential to bring about large improvements in clinical quality and other aspects of healthcare delivery. However, such benefits do not come without cost. The analysis of such large datasets, particularly where the data may have to be merged from several sources and may be noisy and incomplete, is a challenging task. Furthermore, the introduction of clinical changes is a cyclical task, meaning that the processes under examination operate in an environment that is not static. We suggest that traditional methods of analysis are unsuitable for the task, and identify complexity theory and machine learning as areas that have the potential to facilitate the examination of clinical quality. By its nature the field of complex adaptive systems deals with environments that change because of the interactions that have occurred in the past. We draw parallels between health informatics and bioinformatics, which has already started to successfully use machine learning methods.", "title": "" }, { "docid": "7b45559be60b099de0bcf109c9a539b7", "text": "The split-heel technique has distinct advantages over the conventional medial or lateral approach in the operative debridement of extensive and predominantly plantar chronic calcaneal osteomyelitis in children above 5 years of age.
We report three cases (age 5.5-11 years old) of chronic calcaneal osteomyelitis in children treated using the split-heel approach with 3-10 years follow-up showing excellent functional and cosmetic results.", "title": "" }, { "docid": "00d14c0c07d04c9bd6995ff0ee065ab9", "text": "The pathways for olfactory learning in the fruitfly Drosophila have been extensively investigated, with mounting evidence that that the mushroom body is the site of the olfactory associative memory trace (Heisenberg, Nature 4:266–275, 2003; Gerber et al., Curr Opin Neurobiol 14:737–744, 2004). Heisenberg’s description of the mushroom body as an associative learning device is a testable hypothesis that relates the mushroom body’s function to its neural structure and input and output pathways. Here, we formalise a relatively complete computational model of the network interactions in the neural circuitry of the insect antennal lobe and mushroom body, to investigate their role in olfactory learning, and specifically, how this might support learning of complex (non-elemental; Giurfa, Curr Opin Neuroethol 13:726–735, 2003) discriminations involving compound stimuli. We find that the circuit is able to learn all tested non-elemental paradigms. This does not crucially depend on the number of Kenyon cells but rather on the connection strength of projection neurons to Kenyon cells, such that the Kenyon cells require a certain number of coincident inputs to fire. As a consequence, the encoding in the mushroom body resembles a unique cue or configural representation of compound stimuli (Pearce, Psychol Rev 101:587–607, 1994). Learning of some conditions, particularly negative patterning, is strongly affected by the assumption of normalisation effects occurring at the level of the antennal lobe. Surprisingly, the learning capacity of this circuit, which is a simplification of the actual circuitry in the fly, seems to be greater than the capacity expressed by the fly in shock-odour association experiments (Young et al. 2010).", "title": "" }, { "docid": "70ea4bbe03f2f733ff995dc4e8fea920", "text": "The spread of malicious or accidental misinformation in social media, especially in time-sensitive situations, such as real-world emergencies, can have harmful effects on individuals and society. In this work, we developed models for automated verification of rumors (unverified information) that propagate through Twitter. To predict the veracity of rumors, we identified salient features of rumors by examining three aspects of information spread: linguistic style used to express rumors, characteristics of people involved in propagating information, and network propagation dynamics. The predicted veracity of a time series of these features extracted from a rumor (a collection of tweets) is generated using Hidden Markov Models. The verification algorithm was trained and tested on 209 rumors representing 938,806 tweets collected from real-world events, including the 2013 Boston Marathon bombings, the 2014 Ferguson unrest, and the 2014 Ebola epidemic, and many other rumors about various real-world events reported on popular websites that document public rumors. The algorithm was able to correctly predict the veracity of 75% of the rumors faster than any other public source, including journalists and law enforcement officials. 
The ability to track rumors and predict their outcomes may have practical applications for news consumers, financial markets, journalists, and emergency services, and more generally to help minimize the impact of false information on Twitter.", "title": "" }, { "docid": "e2e99eca77da211cac64ab69931ed1f4", "text": "Cross-site scripting (XSS) and SQL injection errors are two prominent examples of taint-based vulnerabilities that have been responsible for a large number of security breaches in recent years. This paper presents QED, a goal-directed model-checking system that automatically generates attacks exploiting taint-based vulnerabilities in large Java web applications. This is the first time where model checking has been used successfully on real-life Java programs to create attack sequences that consist of multiple HTTP requests. QED accepts any Java web application that is written to the standard servlet specification. The analyst specifies the vulnerability of interest in a specification that looks like a Java code fragment, along with a range of values for form parameters. QED then generates a goal-directed analysis from the specification to perform session-aware tests, optimizes to eliminate inputs that are not of interest, and feeds the remainder to a model checker. The checker will systematically explore the remaining state space and report example attacks if the vulnerability specification is matched. QED provides better results than traditional analyses because it does not generate any false positive warnings. It proves the existence of errors by providing an example attack and a program trace showing how the code is compromised. Past experience suggests this is important because it makes it easy for the application maintainer to recognize the errors and to make the necessary fixes. In addition, for a class of applications, QED can guarantee that it has found all the potential bugs in the program. We have run QED over 3 Java web applications totaling 130,000 lines of code. We found 10 SQL injections and 13 cross-site scripting errors.", "title": "" }, { "docid": "c0f138d3bf0626100e0d1d702da90eac", "text": "Building a theory on extant species, as Ackermann et al. do, is a useful contribution to the field of language evolution. Here, I add another living model that might be of interest: human language ontogeny in the first year of life. A better knowledge of this phase might help in understanding two more topics among the \"several building blocks of a comprehensive theory of the evolution of spoken language\" indicated in their conclusion by Ackermann et al., that is, the foundation of the co-evolution of linguistic motor skills with the auditory skills underlying speech perception, and the possible phylogenetic interactions of protospeech production with referential capabilities.", "title": "" } ]
scidocsrr
159e822fac559e28c91a5a0ce72d155e
Beyond Turing: Intelligent Agents Centered on the User
[ { "docid": "489c19077caa00680764b1c352e2146b", "text": "In this paper, we describe a system that reacts to both possible system breakdowns and low user engagement with a set of conversational strategies. These general strategies reduce the number of inappropriate responses and produce better user engagement. We also found that a system that reacts to both possible system breakdowns and low user engagement is rated by both experts and non-experts as having better overall user engagement compared to a system that only reacts to possible system breakdowns. We argue that for non-task-oriented systems we should optimize on both system response appropriateness and user engagement. We also found that apart from making the system response appropriate, funny and provocative responses can also lead to better user engagement. On the other hand, short appropriate responses, such as “Yes” or “No” can lead to decreased user engagement. We will use these findings to further improve our system.", "title": "" } ]
[ { "docid": "665fcc17971dc34ed6f89340e3b7bfe2", "text": "Central to the development of computer vision systems is the collection and use of annotated images spanning our visual world. Annotations may include information about the identity, spatial extent, and viewpoint of the objects present in a depicted scene. Such a database is useful for the training and evaluation of computer vision systems. Motivated by the availability of images on the Internet, we introduced a web-based annotation tool that allows online users to label objects and their spatial extent in images. To date, we have collected over 400 000 annotations that span a variety of different scene and object classes. In this paper, we show the contents of the database, its growth over time, and statistics of its usage. In addition, we explore and survey applications of the database in the areas of computer vision and computer graphics. Particularly, we show how to extract the real-world 3-D coordinates of images in a variety of scenes using only the user-provided object annotations. The output 3-D information is comparable to the quality produced by a laser range scanner. We also characterize the space of the images in the database by analyzing 1) statistics of the co-occurrence of large objects in the images and 2) the spatial layout of the labeled images.", "title": "" }, { "docid": "7d14fa7ae531f22512fa46621e72200c", "text": "In recent years we have seen a fast change in the networking industry: leading by the Software Defined Networking (SDN) paradigm that separates the control plane from the data plane to enable programmability and centralized control of the network infrastructure, the SDN design not only simplifies the network management but also accelerates the innovation speed of deploying advanced network applications. Meanwhile, the landscape of the wireless and mobile industry is changing dramatically as well. Given the advance of wireless technologies such as 4G and WiFi offering a pervasive Internet access, the traffic growth from the smartphone-alike devices has placed an increasing strain on the mobile network infrastructure and infringed the profit. Since the demand is increasing together with the growth of mobile users, the incumbent legacy infrastructure is already calling for an upgrade to overcome its existing limitations in terms of network management and security. In this paper, we advocate that the way forward is to integrate SDN and fully utilize its feature to solve the problem. As the security issue has raise serious concern in the networking community recently, we focus on the security aspect and investigate how to enhance the security with SDN for the wireless mobile networks. Crown Copyright 2014 Published by Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "6cbdb95791cc214a1b977e92e69904bb", "text": "We study reinforcement learning of chat-bots with recurrent neural network architectures when the rewards are noisy and expensive to obtain. For instance, a chat-bot used in automated customer service support can be scored by quality assurance agents, but this process can be expensive, time consuming and noisy. Previous reinforcement learning work for natural language processing uses onpolicy updates and/or is designed for on-line learning settings. We demonstrate empirically that such strategies are not appropriate for this setting and develop an off-policy batch policy gradient method (BPG). 
We demonstrate the efficacy of our method via a series of synthetic experiments and an Amazon Mechanical Turk experiment on a restaurant recommendations dataset.", "title": "" }, { "docid": "099ced7b083a6610305587a17392cb5d", "text": "In activity recognition, one major challenge is how to reduce the labeling effort one needs to make when recognizing a new set of activities. In this paper, we analyze the possibility of transferring knowledge from the available labeled data on a set of existing activities in one domain to help recognize the activities in another different but related domain. We found that such a knowledge transfer process is possible, provided that the recognized activities from the two domains are related in some way. We develop a bridge between the activities in two domains by learning a similarity function via Web search, under the condition that the sensor readings are from the same feature space. Based on the learned similarity measure, our algorithm interprets the data from the source domain as ‘‘pseudo training data’’ in the target domain with different confidence levels, which are in turn fed into supervised learning algorithms for training the classifier. We show that after using this transfer learning approach, the performance of activity recognition in the new domain is increased several fold as compared to when no knowledge transfer is done. Our algorithm is evaluated on several real-world datasets to demonstrate its effectiveness. In the experiments, our algorithm could achieve a 60% accuracy most of the time with no or very few training data in the target domain, which easily outperforms the supervised learning methods. © 2010 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "42db6bf64c23ae2a052adaee6586ac6b", "text": "Brain Computer Interfaces (BCI) is an area of research that is rapidly growing in the neuroscience and bioengineering fields. One popular approach to the generation of a BCI system consist in the recognition by a computer of the patterns of electrical activity on the scalp gathered from a series of electrodes. One of the problems related to the use of surface EEG is the blurring effect due to the smearing of the skull on the transmission of the potential distribution from the cerebral cortex toward the scalp electrodes. This happens since the skull has a very low electric conductivity when compared with the scalp or the brain ones. The blurring effect makes the EEG data gathered from the scalp electrodes rather correlated, a problem not observed in the cortical EEG data recorded from the invasive implants in monkeys and man. Such correlation makes problematic the work of the classifiers, since the features extracted from the different scalp electrodes tends to be rather similar and this correlation is hard to be disentangled with blind methods like Principal Component Analysis. In the last decade, high-resolution EEG technologies have been developed to enhance the spatial information content of EEG activity. Furthermore, since the ultimate goal of any EEG recording is to provide useful information about the brain activity, a body of mathematical techniques, known as inverse procedures, has been developed to estimate the cortical activity from the raw EEG recordings. Examples of these inverse procedures are the dipole localization, the distributed source and the cortical imaging techniques. 
Inverse procedures could use linear and non linear techniques to localize putative cortical sources from EEG data, by using mathematical models of the head as volume conductor. More recently, it has been suggested that with the use of the modern high resolution EEG technologies it could be possible to estimate the cortical activity associated to the mental imagery of the upper limbs movements in humans better than with the scalp electrodes. In this presentation we will review main achievements in the field of the Brain Computer Interfaces and we will demonstrate how it is possible run a BCI system able to drive and control several electronic and robotic devices in a house environment. In particular, we first describe a BCI system used on a group of normal subjects in which the technology of the estimation of the cortical activity is illustrated. Then, we used the BCI system for the command of several electronic devices within a three-room environment employed for the neurorehabilitation.", "title": "" }, { "docid": "e47ec55000621d81f665f7d01a1a8553", "text": "Plant pest recognition and detection is vital for food security, quality of life and a stable agricultural economy. This research demonstrates the combination of the k-means clustering algorithm and the correspondence filter to achieve pest detection and recognition. The detection of the dataset is achieved by partitioning the data space into Voronoi cells, which tends to find clusters of comparable spatial extents, thereby separating the objects (pests) from the background (pest habitat). The detection is established by extracting the variant distinctive attributes between the pest and its habitat (leaf, stem) and using the correspondence filter to identify the plant pests to obtain correlation peak values for different datasets. This work further establishes that the recognition probability from the pest image is directly proportional to the height of the output signal and inversely proportional to the viewing angles, which further confirmed that the recognition of plant pests is a function of their position and viewing angle. It is encouraging to note that the correspondence filter can achieve rotational invariance of pests up to angles of 360 degrees, which proves the effectiveness of the algorithm for the detection and recognition of plant pests.", "title": "" }, { "docid": "dffe88b5b659033e7dfd8d7dd3ee28b6", "text": "We describe a novel approach for automatically predicting the hidden demographic properties of social media users. Building on prior work in common-sense knowledge acquisition from third-person text, we first learn the distinguishing attributes of certain classes of people. For example, we learn that people in the Female class tend to have maiden names and engagement rings. We then show that this knowledge can be used in the analysis of first-person communication; knowledge of distinguishing attributes allows us to both classify users and to bootstrap new training examples. Our novel approach enables substantial improvements on the widely-studied task of user gender prediction, obtaining a 20% relative error reduction over the current state-of-the-art.", "title": "" }, { "docid": "cddd8adea2d507d937db4052627136fd", "text": "For the reception of Satellite Digital Audio Radio Services (SDARS) and Global Positioning Systems (GPS) transmitted via satellite an invisible antenna combination embedded in the roof of a car is presented.
Without changing the surface of the vehicle the antenna combination can be completely embedded in a metal cavity and covered by a thick dielectric part of the roof. The measurement results show a high efficiency and a large bandwidth which exceeds the necessary bandwidth significantly for both services. The antenna combination offers a radiation pattern which is tailored to the reception of SDARS signals transmitted via highly-elliptical-orbit (HEO) satellites, geostationary earth orbit (GEO) satellites and terrestrial repeaters and for GPS signals transmitted via medium earth orbit (MEO) satellites. Although the antennas are mounted in such a small mounting volume, the antennas are decoupled optimally.", "title": "" }, { "docid": "4f29effabf3e7c166b29eec240ac556a", "text": "The training algorithm of classical twin support vector regression (TSVR) can be attributed to the solution of a pair of quadratic programming problems (QPPs) with inequality constraints in the dual space. However, this solution is affected by time and memory constraints when dealing with large datasets. In this paper, we present a least squares version for TSVR in the primal space, termed primal least squares TSVR (PLSTSVR). By introducing the least squares method, the inequality constraints of TSVR are transformed into equality constraints. Furthermore, we attempt to directly solve the two QPPs with equality constraints in the primal space instead of the dual space; thus, we need only to solve two systems of linear equations instead of two QPPs. Experimental results on artificial and benchmark datasets show that PLSTSVR has comparable accuracy to TSVR but with considerably less computational time. We further investigate its validity in predicting the opening price of stock.", "title": "" }, { "docid": "ffaa8edb1fccf68e6b7c066fb994510a", "text": "A fast and precise determination of the DOA (direction of arrival) for immediate object classification becomes increasingly important for future automotive radar generations. Hereby, the elevation angle of an object is considered as a key parameter especially in complex urban environments. An antenna concept allowing the determination of object angles in azimuth and elevation is proposed and discussed in this contribution. This antenna concept consisting of a linear patch array and a cylindrical dielectric lens is implemented into a radar sensor and characterized in terms of angular accuracy and ambiguities using correlation algorithms and the CRLB (Cramer Rao Lower Bound).", "title": "" }, { "docid": "d8dd68593fd7bd4bdc868634deb9661a", "text": "We present a low-cost IoT based system able to monitor acoustic, olfactory, visual and thermal comfort levels. The system is provided with different ambient sensors, computing, control and connectivity features. The integration of the device with a smartwatch makes it possible the analysis of the personal comfort parameters.", "title": "" }, { "docid": "2aefddf5e19601c8338f852811cebdee", "text": "This paper presents a system that allows online building of 3D wireframe models through a combination of user interaction and automated methods from a handheld camera-mouse. Crucially, the model being built is used to concurrently compute camera pose, permitting extendable tracking while enabling the user to edit the model interactively. 
In contrast to other model building methods that are either off-line and/or automated but computationally intensive, the aim here is to have a system that has low computational requirements and that enables the user to define what is relevant (and what is not) at the time the model is being built. OutlinAR hardware is also developed which simply consists of the combination of a camera with a wide field of view lens and a wheeled computer mouse.", "title": "" }, { "docid": "ed0f4616a36a2dffb6120bccd7539d0c", "text": "Many accounts of decision making and reinforcement learning posit the existence of two distinct systems that control choice: a fast, automatic system and a slow, deliberative system. Recent research formalizes this distinction by mapping these systems to \"model-free\" and \"model-based\" strategies in reinforcement learning. Model-free strategies are computationally cheap, but sometimes inaccurate, because action values can be accessed by inspecting a look-up table constructed through trial-and-error. In contrast, model-based strategies compute action values through planning in a causal model of the environment, which is more accurate but also more cognitively demanding. It is assumed that this trade-off between accuracy and computational demand plays an important role in the arbitration between the two strategies, but we show that the hallmark task for dissociating model-free and model-based strategies, as well as several related variants, do not embody such a trade-off. We describe five factors that reduce the effectiveness of the model-based strategy on these tasks by reducing its accuracy in estimating reward outcomes and decreasing the importance of its choices. Based on these observations, we describe a version of the task that formally and empirically obtains an accuracy-demand trade-off between model-free and model-based strategies. Moreover, we show that human participants spontaneously increase their reliance on model-based control on this task, compared to the original paradigm. Our novel task and our computational analyses may prove important in subsequent empirical investigations of how humans balance accuracy and demand.", "title": "" }, { "docid": "8dd51b6119394c9a08c196a3731cbaec", "text": "Autonomous agents optimize the reward function we give them. What they don’t know is how hard it is for us to design a reward function that actually captures what we want. When designing the reward, we might think of some specific training scenarios, and make sure that the reward will lead to the right behavior in those scenarios. Inevitably, agents encounter new scenarios (e.g., new types of terrain) where optimizing that same reward may lead to undesired behavior. Our insight is that reward functions are merely observations about what the designer actually wants, and that they should be interpreted in the context in which they were designed. We introduce inverse reward design (IRD) as the problem of inferring the true objective based on the designed reward and the training MDP. We introduce approximate methods for solving IRD problems, and use their solution to plan risk-averse behavior in test MDPs. Empirical results suggest that this approach can help alleviate negative side effects of misspecified reward functions and mitigate reward hacking.", "title": "" }, { "docid": "9bcba1b3d4e63c026d1bd16bfd2c8d7b", "text": "Developmental robotics is an emerging field located at the intersection of robotics, cognitive science and developmental sciences. 
This paper elucidates the main reasons and key motivations behind the convergence of fields with seemingly disparate interests, and shows why developmental robotics might prove to be beneficial for all fields involved. The methodology advocated is synthetic and two-pronged: on the one hand, it employs robots to instantiate models originating from developmental sciences; on the other hand, it aims to develop better robotic systems by exploiting insights gained from studies on ontogenetic development. This paper gives a survey of the relevant research issues and points to some future research directions. 1. Introduction Developmental robotics is an emergent area of research at the intersection of robotics and developmental sciences—in particular developmental psychology and developmental neuroscience. It constitutes an interdisciplinary and two-pronged approach to robotics, which on one side employs robots to instantiate and investigate models originating from developmental sciences, and on the other side seeks to design better robotic systems by applying insights gained from studies on ontogenetic development.", "title": "" }, { "docid": "ae5fef3ebb145761efe9bca44a9cc154", "text": "Social media has become an integral part of people’s lives. People share their daily activities, experiences, interests, and opinions on social networking websites, opening the floodgates of information that can be analyzed by marketers as well as consumers. However, low barriers to publication and easy-to-use interactive interfaces have contributed to various information quality (IQ) problems in the social media that has made obtaining timely, accurate and relevant information a challenge. Approaches such as data mining and machine learning have only begun to address these challenges. Social media has its own distinct characteristics that warrant specialized approaches. In this paper, we study the unique characteristics of social media and address how existing methods fall short in mitigating the IQ issues it faces. Despite being extensively studied, IQ theories have yet to be embraced in tackling IQ challenges in social media. We redefine social media challenges as IQ challenges. We propose an IQ and Total Data Quality Management (TDQM) approach to the Social media challenges. We map the IQ dimensions, social media categories, social media challenges, and IQ tools in order to bridge the gap between the IQ framework and its application in addressing IQ challenges in social media.", "title": "" }, { "docid": "c2baa873bc2850b14b3868cdd164019f", "text": "It is expensive to obtain labeled real-world visual data for use in training of supervised algorithms. Therefore, it is valuable to leverage existing databases of labeled data. However, the data in the source databases is often obtained under conditions that differ from those in the new task. Transfer learning provides techniques for transferring learned knowledge from a source domain to a target domain by finding a mapping between them. In this paper, we discuss a method for projecting both source and target data to a generalized subspace where each target sample can be represented by some combination of source samples. By employing a low-rank constraint during this transfer, the structure of source and target domains are preserved. This approach has three benefits. First, good alignment between the domains is ensured through the use of only relevant data in some subspace of the source domain in reconstructing the data in the target domain. 
Second, the discriminative power of the source domain is naturally passed on to the target domain. Third, noisy information will be filtered out during knowledge transfer. Extensive experiments on synthetic data, and important computer vision problems such as face recognition application and visual domain adaptation for object recognition demonstrate the superiority of the proposed approach over the existing, well-established methods.", "title": "" }, { "docid": "d29cca7c16b0e5b43c85e1a8701d735f", "text": "The sparse matrix solver by LU factorization is a serious bottleneck in Simulation Program with Integrated Circuit Emphasis (SPICE)-based circuit simulators. The state-of-the-art Graphics Processing Units (GPU) have numerous cores sharing the same memory, provide attractive memory bandwidth and compute capability, and support massive thread-level parallelism, so GPUs can potentially accelerate the sparse solver in circuit simulators. In this paper, an efficient GPU-based sparse solver for circuit problems is proposed. We develop a hybrid parallel LU factorization approach combining task-level and data-level parallelism on GPUs. Work partitioning, number of active thread groups, and memory access patterns are optimized based on the GPU architecture. Experiments show that the proposed LU factorization approach on NVIDIA GTX580 attains an average speedup of 7.02× (geometric mean) compared with sequential PARDISO, and 1.55× compared with 16-threaded PARDISO. We also investigate bottlenecks of the proposed approach by a parametric performance model. The performance of the sparse LU factorization on GPUs is constrained by the global memory bandwidth, so the performance can be further improved by future GPUs with larger memory bandwidth.", "title": "" }, { "docid": "bc37250f9421f6657252ce286703e85c", "text": "This paper introduces a method for producing high quality hand motion using a small number of markers. The proposed \"handover\" animation technique constructs joint angle trajectories with the help of a reference database. Utilizing principle component analysis (PCA) applied to the database, the system automatically determines the sparse marker set to record. Further, to produce hand animation, PCA is used along with a locally weighted regression (LWR) model to reconstruct joint angles. The resulting animation is a full-resolution hand which reflects the original motion without the need for capturing a full marker set. Comparing the technique to other methods reveals improvement over the state of the art in terms of the marker set selection. In addition, the results highlight the ability to generalize the motion synthesized, both by extending the use of a single reference database to new motions, and from distinct reference datasets, over a variety of freehand motions.", "title": "" }, { "docid": "b01e3b03cd418b9748de7546ef7a9ca2", "text": "We describe a lightweight protocol for oblivious evaluation of a pseudorandom function (OPRF) in the presence of semihonest adversaries. In an OPRF protocol a receiver has an input r; the sender gets output s and the receiver gets output F(s; r), where F is a pseudorandom function and s is a random seed. Our protocol uses a novel adaptation of 1-out-of-2 OT-extension protocols, and is particularly efficient when used to generate a large batch of OPRF instances. The cost to realize m OPRF instances is roughly the cost to realize 3:5m instances of standard 1-out-of-2 OTs (using state-of-the-art OT extension). 
We explore in detail our protocol's application to semihonest secure private set intersection (PSI). The fastest state-of-the-art PSI protocol (Pinkas et al., Usenix 2015) is based on efficient OT extension. We observe that our OPRF can be used to remove their PSI protocol's dependence on the bit-length of the parties' items. We implemented both PSI protocol variants and found ours to be 3.1–3.6× faster than Pinkas et al. for PSI of 128-bit strings and sufficiently large sets. Concretely, ours requires only 3.8 seconds to securely compute the intersection of 2^20-size sets, regardless of the bitlength of the items. For very large sets, our protocol is only 4.3× slower than the insecure naive hashing approach for PSI.", "title": "" } ]
scidocsrr
7a86d6c3ac3b65150c85a4df5aea3fc8
Time Series Classification Using Multi-Channels Deep Convolutional Neural Networks
[ { "docid": "510a43227819728a77ff0c7fa06fa2d0", "text": "The ubiquity of time series data across almost all human endeavors has produced a great interest in time series data mining in the last decade. While there is a plethora of classification algorithms that can be applied to time series, all of the current empirical evidence suggests that simple nearest neighbor classification is exceptionally difficult to beat. The choice of distance measure used by the nearest neighbor algorithm depends on the invariances required by the domain. For example, motion capture data typically requires invariance to warping. In this work we make a surprising claim. There is an invariance that the community has missed, complexity invariance. Intuitively, the problem is that in many domains the different classes may have different complexities, and pairs of complex objects, even those which subjectively may seem very similar to the human eye, tend to be further apart under current distance measures than pairs of simple objects. This fact introduces errors in nearest neighbor classification, where complex objects are incorrectly assigned to a simpler class. We introduce the first complexity-invariant distance measure for time series, and show that it generally produces significant improvements in classification accuracy. We further show that this improvement does not compromise efficiency, since we can lower bound the measure and use a modification of triangular inequality, thus making use of most existing indexing and data mining algorithms. We evaluate our ideas with the largest and most comprehensive set of time series classification experiments ever attempted, and show that complexity-invariant distance measures can produce improvements in accuracy in the vast majority of cases.", "title": "" } ]
[ { "docid": "aea93496d1ff9638af76150c2cfaaa1a", "text": "This study pursues the optimization of the brain responses to small reversing patterns in a Steady-State Visual Evoked Potentials (SSVEP) paradigm, which could be used to maximize the efficiency of applications such as Brain-Computer Interfaces (BCI). We investigated the SSVEP frequency response for 32 frequencies (5-84 Hz), and the time dynamics of the brain response at 8, 14 and 28 Hz, to aid the definition of the optimal neurophysiological parameters and to outline the onset-delay and other limitations of SSVEP stimuli in applications such as our previously described four-command BCI system. Our results showed that the 5.6-15.3 Hz pattern reversal stimulation evoked the strongest responses, peaking at 12 Hz, and exhibiting weaker local maxima at 28 and 42 Hz. After stimulation onset, the long-term SSVEP response was highly non-stationary and the dynamics, including the first peak, was frequency-dependent. The evaluation of the performance of a frequency-optimized eight-command BCI system with dynamic neurofeedback showed a mean success rate of 98%, and a time delay of 3.4s. Robust BCI performance was achieved by all subjects even when using numerous small patterns clustered very close to each other and moving rapidly in 2D space. These results emphasize the need for SSVEP applications to optimize not only the analysis algorithms but also the stimuli in order to maximize the brain responses they rely on.", "title": "" }, { "docid": "d42f5fdbcaf8933dc97b377a801ef3e0", "text": "Bodyweight supported treadmill training has become a prominent gait rehabilitation method in leading rehabilitation centers. This type of locomotor training has many functional benefits but the labor costs are considerable. To reduce therapist effort, several groups have developed large robotic devices for assisting treadmill stepping. A complementary approach that has not been adequately explored is to use powered lower limb orthoses for locomotor training. Recent advances in robotic technology have made lightweight powered orthoses feasible and practical. An advantage to using powered orthoses as rehabilitation aids is they allow practice starting, turning, stopping, and avoiding obstacles during overground walking.", "title": "" }, { "docid": "730084162281f2645c1a978cc4ad4074", "text": "IMPORTANCE\nThe appropriate clinical setting for the application of sentinel lymph node biopsy (SLNB) in the management of cutaneous squamous cell carcinoma (cSCC) is not well characterized. Numerous case reports and case series examine SLNB findings in patients who were considered to have high-risk cSCC, but no randomized clinical trials have been performed.\n\n\nOBJECTIVE\nTo analyze which stages in the American Joint Committee on Cancer (AJCC) criteria and a recently proposed alternative staging system are most closely associated with positive SLNB findings in nonanogenital cSCC.\n\n\nDESIGN, SETTING, AND PARTICIPANTS\nMedical literature review and case data extraction from private and institutional practices to identify patients with nonanogenital cSCC who underwent SLNB. Patients were eligible if sufficient tumor characteristics were available to classify tumors according to AJCC staging criteria and a proposed alternative staging system. 
One hundred thirty patients had sufficient data for AJCC staging, whereas 117 had sufficient data for the alternative system.\n\n\nEXPOSURE\nNonanogenital cSCC and SLNB.\n\n\nMAIN OUTCOMES AND MEASURES\nPositive SLNB findings by cSCC stage, quantified as the number and percentage of positive nodes.\n\n\nRESULTS\nA positive SLN was identified in 12.3% of all patients. All cSCCs with positive SLNs were greater than 2 cm in diameter. The AJCC criteria identifed positive SLNB findings in 0 of 9 T1 lesions (0%), 13 of 116 T2 lesions (11.2%), and 3 of 5 T4 lesions (60.0%). No T3 lesions were identified. The alternative staging system identified positive SNLB findings in 0 of 9 T1 lesions (0%), 6 of 85 T2a lesions (7.1%), 5 of 17 T2b lesions (29.4%), and 3 of 6 T3 lesions (50.0%). Rates of positive SLNB findings in patients with T2b lesions were statistically higher than those with T2a lesions (P = .02, Fisher exact test) in the alternative staging system.\n\n\nCONCLUSIONS AND RELEVANCE\nOur findings suggest that most cSCCs associated with positive SLNB findings occur in T2 lesions (in both staging systems) that are greater than 2 cm in diameter. The alternative staging system appears to more precisely delineate high-risk lesions in the T2b category that may warrant consideration of SLNB. Future prospective studies are necessary to validate the relationship between tumor stage and positive SLNB findings and to identify the optimal staging system.", "title": "" }, { "docid": "56acc9fd9d211a4c644398f40492392d", "text": "Internet of Things (IoT) is a concept that envisions all objects around us as part of internet. IoT coverage is very wide and include variety of objects like smart phones, tablets, digital cameras, sensors, etc. Once all these devices are connected with each other, they enable more and more smart processes and services that support our basic needs, economies, environment and health. Such enormous number of devices connected to internet provides many kinds of services and produce huge amount of data and information. Cloud computing is a model for on-demand access to a shared pool of configurable resources (e.g. compute, networks, servers, storage, applications, services, and software) that can be easily provisioned as Infrastructure (IaaS), software and applications (SaaS). Cloud based platforms help to connect to the things (IaaS) around us so that we can access anything at any time and any place in a user friendly manner using customized portals and in built applications (SaaS). Hence, cloud acts as a front end to access Internet of Things. Applications that interact with devices like sensors have special requirements of massive storage to storage big data, huge computation power to enable the real time processing of the data, and high speed network to stream audio or video. In this paper, we describe how Internet of Things and Cloud computing can work together can address the Big Data issues. We also illustrate about Sensing as a service on cloud using few applications like Augmented Reality, Agriculture and Environment monitoring. Finally, we also propose a prototype model for providing sensing as a service on cloud.", "title": "" }, { "docid": "25a56d4fb311baca45a56eef92576cc4", "text": "All bank marketing campaigns are dependent on customers&apos; huge electronic data. The size of these data sources is impossible for a human analyst to come up with interesting information that will help in the decision-making process. 
Data mining models are completely helping in the performance of these campaigns. This paper introduces analysis and applications of the most important techniques in data mining; multilayer perception neural network (MLPNN), tree augmented Naïve Bayes (TAN) known as Bayesian networks, Nominal regression or logistic regression (LR), and Ross Quinlan new decision tree model (C5. 0). The objective is to examine the performance of MLPNN, TAN, LR and C5. 0 techniques on a real-world data of bank deposit subscription. The purpose is increasing the campaign effectiveness by identifying the main characteristics that affect a success (the deposit subscribed by the client) based on MLPNN, TAN, LR and C5. 0. The experimental results demonstrate, with higher accuracies, the success of these models in predicting the best campaign contact with the clients for subscribing deposit. The performances are calculated by three statistical measures; classification accuracy, sensitivity, and specificity.", "title": "" }, { "docid": "54d54094acea1900e183144d32b1910f", "text": "A large body of work has been devoted to address corporate-scale privacy concerns related to social networks. Most of this work focuses on how to share social networks owned by organizations without revealing the identities or the sensitive relationships of the users involved. Not much attention has been given to the privacy risk of users posed by their daily information-sharing activities.\n In this article, we approach the privacy issues raised in online social networks from the individual users’ viewpoint: we propose a framework to compute the privacy score of a user. This score indicates the user’s potential risk caused by his or her participation in the network. Our definition of privacy score satisfies the following intuitive properties: the more sensitive information a user discloses, the higher his or her privacy risk. Also, the more visible the disclosed information becomes in the network, the higher the privacy risk. We develop mathematical models to estimate both sensitivity and visibility of the information. We apply our methods to synthetic and real-world data and demonstrate their efficacy and practical utility.", "title": "" }, { "docid": "ab793edc212dc2a537dbcb4ac9736f9f", "text": "Much of the abusive supervision research has focused on the supervisor– subordinate dyad when examining the effects of abusive supervision on employee outcomes. Using data from a large multisource field study, we extend this research by testing a trickle-down model of abusive supervision across 3 hierarchical levels (i.e., managers, supervisors, and employees). Drawing on social learning theory and social information processing theory, we find general support for the study hypotheses. Specifically, we find that abusive manager behavior is positively related to abusive supervisor behavior, which in turn is positively related to work group interpersonal deviance. In addition, hostile climate moderates the relationship between abusive supervisor behavior and work group interpersonal deviance such that the relationship is stronger when hostile climate is high. 
The results provide support for our trickle-down model in that abusive manager behavior was not only related to abusive supervisor behavior but was also associated with employees’ behavior 2 hierarchical levels below the manager.", "title": "" }, { "docid": "957e7e88aa5056a7dd512fcd56ce71f2", "text": "This paper investigates macroeconomic determinants of the unemployment for India, China and Pakistan for the period 1980 to 2009. The investigation was conducted through co integration, granger causality and regression analysis. The variables selected for the study are unemployment, inflation, gross domestic product, exchange rate and the increasing rate of population. The results of regression analysis showed significant impact of all the variables for all three countries. GDP of Pakistan showed positive relation with the unemployment rate and the reason of that is the poverty level and underutilization of foreign investment. The result of granger causality showed that bidirectional causality does not exist between any of the variable for all three countries. Co integration result explored that long term relationship do exist among the variables for all the models. It is recommended that distribution of income needs to be improved for Pakistan in order to have positive impact of growth on the employment rate.", "title": "" }, { "docid": "653f7e6f8aac3464eeac88a5c2f21f2e", "text": "The decentralized electronic currency system Bitcoin gives the possibility to execute transactions via direct communication between users, without the need to resort to third parties entrusted with legitimizing the concerned monetary value. In its current state of development a recent, fast-changing, volatile and highly mediatized technology the discourses that unfold within spaces of information and discussion related to Bitcoin can be analysed in light of their ability to produce at once the representations of value, the practices according to which it is transformed and evolves, and the devices allowing for its implementation. The literature on the system is a testament to how the Bitcoin debates do not merely spread, communicate and diffuse representation of this currency, but are closely intertwined with the practice of the money itself. By focusing its attention on a specific corpus, that of expert discourse, the article shows how, introducing and discussing a specific device, dynamic or operation as being in some way related to trust, this expert knowledge contributes to the very definition and shaping of this trust within the Bitcoin system ultimately contributing to perform the shared definition of its value as a currency.", "title": "" }, { "docid": "5632d79f37b4bc774cd3bdf7f1cd5c71", "text": "Switching devices based on wide band gap materials as SiC offer a significant performance improvement on the switch level compared to Si devices. A well known example are SiC diodes employed e.g. in PFC converters. In this paper, the impact on the system level performance, i.e. efficiency/power density, of a PFC and of a DC-DC converter resulting with the new SiC devices is evaluated based on analytical optimisation procedures and prototype systems. There, normally-on JFETs by SiCED and normally-off JFETs by SemiSouth are considered.", "title": "" }, { "docid": "fb37da1dc9d95501e08d0a29623acdab", "text": "This study evaluates various evolutionary search methods to direct neural controller evolution in company with policy (behavior) transfer across increasingly complex collective robotic (RoboCup keep-away) tasks. 
Robot behaviors are first evolved in a source task and then transferred for further evolution to more complex target tasks. Evolutionary search methods tested include objective-based search (fitness function), behavioral and genotypic diversity maintenance, and hybrids of such diversity maintenance and objective-based search. Evolved behavior quality is evaluated according to effectiveness and efficiency. Effectiveness is the average task performance of transferred and evolved behaviors, where task performance is the average time the ball is controlled by a keeper team. Efficiency is the average number of generations taken for the fittest evolved behaviors to reach a minimum task performance threshold given policy transfer. Results indicate that policy transfer coupled with hybridized evolution (behavioral diversity maintenance and objective-based search) addresses the bootstrapping problem for increasingly complex keep-away tasks. That is, this hybrid method (coupled with policy transfer) evolves behaviors that could not otherwise be evolved. Also, this hybrid evolutionary search was demonstrated as consistently evolving topologically simple neural controllers that elicited high-quality behaviors.", "title": "" }, { "docid": "e0fe79c4df207826ae9946031884a603", "text": "Document image processing is a crucial process in office automation and begins at the ‘OCR’ phase with difficulties in document ‘analysis’ and ‘understanding’. This paper presents a hybrid and comprehensive approach to document structure analysis. Hybrid in the sense that it makes use of layout (geometrical) as well as textual features of a given document. These features are the base for potential conditions which in turn are used to express fuzzy matched rules of an underlying rule base. Rules can be formulated based on features which might be observed within one specific layout object. However, rules can also express dependencies between different layout objects. In addition to its rule driven analysis, which allows an easy adaptation to specific domains with their specific logical objects, the system contains domain-independent markup algorithms for common objects (e.g., lists).", "title": "" }, { "docid": "a900a7b1b6eff406fa42906ec5a31597", "text": "From wearables to smart appliances, the Internet of Things (IoT) is developing at a rapid pace. The challenge is to find the best fitting solution within a range of different technologies that all may be appropriate at the first sight to realize a specific embedded device. A single tool for measuring power consumption of various wireless technologies and low power modes helps to optimize the development process of modern IoT systems. In this paper, we present an accurate but still cost-effective measurement solution for tracking the highly dynamic power consumption of wireless embedded systems. We extended the conventional measurement of a single shunt resistor's voltage drop by using a dual shunt resistor stage with an automatic switch-over between two stages, which leads to a large dynamic measurement range from μA up to several hundreds mA. To demonstrate the usability of our simple-to-use power measurement system different use cases are presented. Using two independent current measurement channels allows to evaluate the timing relation of proprietary RF communication. 
Furthermore a forecast is given on the expected battery lifetime of a Wifi-based data acquisition system using measurement results of the presented tool.", "title": "" }, { "docid": "f3d474cb9b6fe14b575c7002d7bd5581", "text": "PURPOSE\nTo investigate the efficacy of the projection onto convex sets (POCS) algorithm at Gd-EOB-DTPA-enhanced hepatobiliary-phase MRI.\n\n\nMETHODS\nIn phantom study, we scanned a phantom and obtained images by conventional means (P1 images), by partial-Fourier image reconstruction (PF, P2 images) and by PF with the POCS algorithm (P3 images). Then we acquired and compared subtraction images (P2-P1 images and P3-P1 images). In clinical study, 55 consecutive patients underwent Gd-EOB-DTPA (EOB)-enhanced 3D hepatobiliary-phase MRI on a 1.5T scanner. Images were obtained using conventional method (C1 images), PF (C2 images), and PF with POCS (C3 images). The acquisition time was 17-, 14-, and 14 s for protocols C1, C2 and C3, respectively. Two radiologists assigned grades for hepatic vessel sharpness and we compared the visual grading among the 3 protocols. And one radiologist compared signal-to-noise-ratio (SNR) of the hepatic parenchyma.\n\n\nRESULTS\nIn phantom study, there was no difference in signal intensity on a peripheral phantom column on P3-P1 images. In clinical study, there was no significant difference between C1 and C3 images (2.62 ± 0.49 vs. 2.58 ± 0.49, p = 0.70) in the score assigned for vessel sharpness nor in SNR (13.3 ± 2.67 vs. 13.1 ± 2.51, p = 0.18).\n\n\nCONCLUSION\nThe POCS algorithm makes it possible to reduce the scan time of hepatobiliary phase (from 17 to 14 s) without reducing SNR and without increasing artifacts.", "title": "" }, { "docid": "bedf74b33efb1c42c252ef6363d26ca0", "text": "BACKGROUND\nAssessment is a crucial and complex process. Thus quality assurance of assessment Methods is essential when assessment is used for the purposes of certification.\n\n\nAIM\nTo evaluate the effect of continuous well-structured process of the question bank revision, enlightened by item analysis, in improving the quality of Multiple-Choice Questions (MCQs).\n\n\nSETTING\nThe Family Medicine (FM) training certification exam for the Egyptian Board accredited for International Membership of the Royal College of General Practitioners (MRCGP[INT]).\n\n\nMETHOD\nThe results of the item analysis of Applied Knowledge Tests (AKTs) of two academic years (2009, 2013) were included in the study. The AKT test consisted of two papers, each of 100 MCQS, and blue printed against the FM training curriculum. A total of 226 candidates sat for the two exams; 102 in 2009 and 124 in 2013.\n\n\nRESULTS\nThere were more MCQs with moderate difficulty and higher discrimination in 2013. Significant improvement was found in the discrimination index (DI) values in 2013 (p < 0.001). and questions with a high facility and classified as easy decreased from 40.5 to 28.7%. The average number of functioning distractors per item increased from 1.99 in 2009 to 2.19 in 2013 (p = 0.015).\n\n\nCONCLUSION\nRevision of well-constructed MCQs on a regular basis, and in a structured manner, improved the quality of the MCQs and consequently improved the validity of the examination.", "title": "" }, { "docid": "640ba15172b56373b3a6bdfe9f5f6cd4", "text": "This work considers the problem of learning cooperative policies in complex, partially observable domains without explicit communication. 
We extend three classes of single-agent deep reinforcement learning algorithms based on policy gradient, temporal-difference error, and actor-critic methods to cooperative multi-agent systems. To effectively scale these algorithms beyond a trivial number of agents, we combine them with a multi-agent variant of curriculum learning. The algorithms are benchmarked on a suite of cooperative control tasks, including tasks with discrete and continuous actions, as well as tasks with dozens of cooperating agents. We report the performance of the algorithms using different neural architectures, training procedures, and reward structures. We show that policy gradient methods tend to outperform both temporal-difference and actor-critic methods and that curriculum learning is vital to scaling reinforcement learning algorithms in complex multiagent domains.", "title": "" }, { "docid": "41c890e5c5925769962713de3f84b948", "text": "In recent years, with the development of 3D technologies, 3D model retrieval has become a hot topic. The key point of 3D model retrieval is to extract robust feature for 3D model representation. In order to improve the effectiveness of method on 3D model retrieval, this paper proposes a feature extraction model based on convolutional neural networks (CNN). First, we extract a set of 2D images from 3D model to represent each 3D object. SIFT detector is utilized to detect interesting points from each 2D image and extract interesting patches to represent local information of each 3D model. X-means is leveraged to generate the CNN filters. Second, a single CNN layer learns low-level features which are then given as inputs to multiple recursive neural networks (RNN) in order to compose higher order features. RNNs can generate the final feature for 2D image representation. Finally, nearest neighbor is used to compute the similarity between different 3D models in order to handle the retrieval problem. Extensive comparison experiments were on the popular ETH and MV-RED 3D model datasets. The results demonstrate the superiority of the proposed method.", "title": "" }, { "docid": "55158927c639ed62b53904b97a0f7a97", "text": "Speech comprehension and production are governed by control processes. We explore their nature and dynamics in bilingual speakers with a focus on speech production. Prior research indicates that individuals increase cognitive control in order to achieve a desired goal. In the adaptive control hypothesis we propose a stronger hypothesis: Language control processes themselves adapt to the recurrent demands placed on them by the interactional context. Adapting a control process means changing a parameter or parameters about the way it works (its neural capacity or efficiency) or the way it works in concert, or in cascade, with other control processes (e.g., its connectedness). We distinguish eight control processes (goal maintenance, conflict monitoring, interference suppression, salient cue detection, selective response inhibition, task disengagement, task engagement, opportunistic planning). We consider the demands on these processes imposed by three interactional contexts (single language, dual language, and dense code-switching). We predict adaptive changes in the neural regions and circuits associated with specific control processes. A dual-language context, for example, is predicted to lead to the adaptation of a circuit mediating a cascade of control processes that circumvents a control dilemma. 
Effective test of the adaptive control hypothesis requires behavioural and neuroimaging work that assesses language control in a range of tasks within the same individual.", "title": "" }, { "docid": "8fa8e875a948aed94b7682b86fcbc171", "text": "Do teams show stable conflict interaction patterns that predict their performance hours, weeks, or even months in advance? Two studies demonstrate that two of the same patterns of emotional interaction dynamics that distinguish functional from dysfunctional marriages also distinguish high from low-performance design teams in the field, up to 6 months in advance, with up to 91% accuracy, and based on just 15 minutes of interaction data: Group Affective Balance, the balance of positive to negative affect during an interaction, and Hostile Affect, the expression of a set of specific negative behaviors were both found as predictors of team performance. The research also contributes a novel method to obtain a representative sample of a team's conflict interaction. Implications for our understanding of design work in teams and for the design of groupware and feedback intervention systems are discussed.", "title": "" }, { "docid": "c0a9e0cb0e3c0ffa6409e5020795f059", "text": "Credit-card-based purchases can be categorized into two types: 1) physical card and 2) virtual card. In a physical-card based purchase, the cardholder presents his card physically to a merchant for making a payment. To carry out fraudulent transactions in this kind of purchase, an attacker has to steal the credit card. If the cardholder does not realize the loss of card, it can lead to a substantial financial loss to the credit card company. In the second kind of purchase, only some important information about a card (card number, expiration date, secure code) is required to make the payment. Such purchases are normally done on the Internet or over the telephone. To commit fraud in these types of purchases, a fraudster simply needs to know the card details. Most of the time, the genuine cardholder is not aware that someone else has seen or stolen his card information. The only way to detect this kind of fraud is to analyze the spending patterns on every card and to figure out any inconsistency with respect to the “usual” spending patterns. Fraud detection based on the analysis of existing purchase data of cardholder is a promising way to reduce the rate of successful credit card frauds. The existing non-data-mining detection system of business rules and scorecards, and known fraud matching have limitations. To address these limitations and combat identity crime in real time, this paper proposes a new multilayered detection system complemented with two additional layers: communal detection (CD) and spike detection (SD). CD finds real social relationships to reduce the suspicion score, and is tamper resistant to synthetic social relationships. It is the whitelist-oriented approach on a fixed set of attributes. SD finds spikes in duplicates to increase the suspicion score, and is probe-resistant for attributes. Key words— communal detection, spike detection, fraud detection, support vector machine", "title": "" } ]
scidocsrr
3352635e92b1f48e5db49953433c3205
Single Image 3D without a Single 3D Image
[ { "docid": "b4ab47d8ec52d7a8e989bfc9d6c0d173", "text": "In this paper, we consider the problem of recovering the spatial layout of indoor scenes from monocular images. The presence of clutter is a major problem for existing single-view 3D reconstruction algorithms, most of which rely on finding the ground-wall boundary. In most rooms, this boundary is partially or entirely occluded. We gain robustness to clutter by modeling the global room space with a parameteric 3D “box” and by iteratively localizing clutter and refitting the box. To fit the box, we introduce a structured learning algorithm that chooses the set of parameters to minimize error, based on global perspective cues. On a dataset of 308 images, we demonstrate the ability of our algorithm to recover spatial layout in cluttered rooms and show several examples of estimated free space.", "title": "" }, { "docid": "92cc028267bc3f8d44d11035a8212948", "text": "The limitations of current state-of-the-art methods for single-view depth estimation and semantic segmentations are closely tied to the property of perspective geometry, that the perceived size of the objects scales inversely with the distance. In this paper, we show that we can use this property to reduce the learning of a pixel-wise depth classifier to a much simpler classifier predicting only the likelihood of a pixel being at an arbitrarily fixed canonical depth. The likelihoods for any other depths can be obtained by applying the same classifier after appropriate image manipulations. Such transformation of the problem to the canonical depth removes the training data bias towards certain depths and the effect of perspective. The approach can be straight-forwardly generalized to multiple semantic classes, improving both depth estimation and semantic segmentation performance by directly targeting the weaknesses of independent approaches. Conditioning the semantic label on the depth provides a way to align the data to their physical scale, allowing to learn a more discriminative classifier. Conditioning depth on the semantic class helps the classifier to distinguish between ambiguities of the otherwise ill-posed problem. We tested our algorithm on the KITTI road scene dataset and NYU2 indoor dataset and obtained obtained results that significantly outperform current state-of-the-art in both single-view depth and semantic segmentation domain.", "title": "" }, { "docid": "2a56585a288405b9adc7d0844980b8bf", "text": "In this paper we propose the first exact solution to the problem of estimating the 3D room layout from a single image. This problem is typically formulated as inference in a Markov random field, where potentials count image features (e.g ., geometric context, orientation maps, lines in accordance with vanishing points) in each face of the layout. We present a novel branch and bound approach which splits the label space in terms of candidate sets of 3D layouts, and efficiently bounds the potentials in these sets by restricting the contribution of each individual face. We employ integral geometry in order to evaluate these bounds in constant time, and as a consequence, we not only obtain the exact solution, but also in less time than approximate inference tools such as message-passing. We demonstrate the effectiveness of our approach in two benchmarks and show that our bounds are tight, and only a few evaluations are necessary.", "title": "" } ]
[ { "docid": "70991373ae71f233b0facd2b5dd1a0d3", "text": "Information communications technology systems are facing an increasing number of cyber security threats, the majority of which are originated by insiders. As insiders reside behind the enterprise-level security defence mechanisms and often have privileged access to the network, detecting and preventing insider threats is a complex and challenging problem. In fact, many schemes and systems have been proposed to address insider threats from different perspectives, such as intent, type of threat, or available audit data source. This survey attempts to line up these works together with only three most common types of insider namely traitor, masquerader, and unintentional perpetrator, while reviewing the countermeasures from a data analytics perspective. Uniquely, this survey takes into account the early stage threats which may lead to a malicious insider rising up. When direct and indirect threats are put on the same page, all the relevant works can be categorised as host, network, or contextual data-based according to audit data source and each work is reviewed for its capability against insider threats, how the information is extracted from the engaged data sources, and what the decision-making algorithm is. The works are also compared and contrasted. Finally, some issues are raised based on the observations from the reviewed works and new research gaps and challenges identified.", "title": "" }, { "docid": "6ce8648c194a73fccf9352a74faa405c", "text": "Recently, social media such as Facebook has been more popular. Receiving information from Facebook and generating or spreading information on Facebook every day has become a general lifestyle. This new information-exchanging platform contains a lot of meaningful messages including users' emotions and preferences. Using messages on Facebook or in general social media to predict the election result and political affiliation has been a trend. In Taiwan, for example, almost every politician tries to have public opinion polls by using social media; almost every politician has his or her own fan page on Facebook, and so do the parties. We make an effort to predict to what party, DPP or KMT, two major parties in Taiwan, a post would be related or affiliated. We design features and models for the prediction, and we evaluate as well as compare them with the data collected from several political fan pages on Facebook. The results show that we can obtain accuracy higher than 90% when the text and interaction features are used with a nearest neighbor classifier.", "title": "" }, { "docid": "213c393635b8a7bb341fd1cc05e23d2d", "text": "Vegetables and fruits are the most important export agricultural products of Thailand. In order to obtain more value-added products, a product quality control is essentially required. Many studies show that quality of agricultural products may be reduced from many causes. One of the most important factors of such quality is plant diseases. Consequently, minimizing plant diseases allows substantially improving quality of the products. This work presents automatic plant disease diagnosis using multiple artificial intelligent techniques. The system can diagnose plant leaf disease without maintaining any expertise once the system is trained. Mainly, the grape leaf disease is focused in this work. The proposed system consists of three main parts: (i) grape leaf color segmentation, (ii) grape leaf disease segmentation, and (iii) analysis & classification of diseases. 
The grape leaf color segmentation is pre-processing module which segments out any irrelevant background information. A self-organizing feature map together with a back-propagation neural network is deployed to recognize colors of grape leaf. This information is used to segment grape leaf pixels within the image. Then the grape leaf disease segmentation is performed using modified self-organizing feature map with genetic algorithms for optimization and support vector machines for classification. Finally, the resulting segmented image is filtered by Gabor wavelet which allows the system to analyze leaf disease color features more efficient. The support vector machines are then again applied to classify types of grape leaf diseases. The system can be able to categorize the image of grape leaf into three classes: scab disease, rust disease and no disease. The proposed system shows desirable results which can be further developed for any agricultural product analysis/inspection system.", "title": "" }, { "docid": "de48b60276b27861d58aaaf501606d69", "text": "Many environmental variables that are important for the development of chironomid larvae (such as water temperature, oxygen availability, and food quantity) are related to water depth, and a statistically strong relationship between chironomid distribution and water depth is therefore expected. This study focuses on the distribution of fossil chironomids in seven shallow lakes and one deep lake from the Plymouth Aquifer (Massachusetts, USA) and aims to assess the influence of water depth on chironomid assemblages within a lake. Multiple samples were taken per lake in order to study the distribution of fossil chironomid head capsules within a lake. Within each lake, the chironomid assemblages are diverse and the changes that are seen in the assemblages are strongly related to changes in water depth. Several thresholds (i.e., where species turnover abruptly changes) are identified in the assemblages, and most lakes show abrupt changes at about 1–2 and 5–7 m water depth. In the deep lake, changes also occur at 9.6 and 15 m depth. The distribution of many individual taxa is significantly correlated to water depth, and we show that the identification of different taxa within the genus Tanytarsus is important because different morphotypes show different responses to water depth. We conclude that the chironomid fauna is sensitive to changes in lake level, indicating that fossil chironomid assemblages can be used as a tool for quantitative reconstruction of lake level changes.", "title": "" }, { "docid": "df6d4e6d74d96b7ab1951cc869caad59", "text": "A broadband commonly fed antenna with dual polarization is proposed in this letter. The main radiator of the antenna is designed as a loop formed by four staircase-like branches. In this structure, the 0° polarization and 90° polarization share the same radiator and reflector. Measurement shows that the proposed antenna obtains a broad impedance bandwidth of 70% (1.5–3.1 GHz) with <inline-formula><tex-math notation=\"LaTeX\">$\\vert {{S}}_{11}\\vert < -{\\text{10 dB}}$</tex-math></inline-formula> and a high port-to-port isolation of 35 dB. The antenna gain within the operating frequency band is between 7.2 and 9.5 dBi, which indicates a stable broadband radiation performance. 
Moreover, a high cross-polarization discrimination of 25 dB is achieved across the whole operating frequency band.", "title": "" }, { "docid": "a90be1b83ad475a50dcb82ae616d4f23", "text": "Historically, lower eyelid blepharoplasty has been a challenging surgery fraught with many potential complications, ranging from ocular irritation to full-blown lower eyelid malposition and a poor cosmetic outcome. The prevention of these complications requires a detailed knowledge of lower eyelid anatomy and a focused examination of the factors that may predispose to poor outcome. A thorough preoperative evaluation of lower eyelid skin, muscle, tone, laxity, fat prominence, tear trough deformity, and eyelid vector are critical for surgical planning. When these factors are analyzed appropriately, a natural and aesthetically pleasing outcome is more likely to occur. I have found that performing lower eyelid blepharoplasty in a bilamellar fashion (transconjunctivally to address fat prominence and transcutaneously for skin excision only), along with integrating contemporary concepts of volume preservation/augmentation, canthal eyelid support, and eyelid vector analysis, has been an integral part of successful surgery. In addition, this approach has significantly increased my confidence in attaining more consistent and reproducible results.", "title": "" }, { "docid": "89cb3d192b0439b7e9022837acd19396", "text": "Computational science has led to exciting new developments, but the nature of the work has exposed limitations in our ability to evaluate published findings. Reproducibility has the potential to serve as a minimum standard for judging scientific claims when full independent replication of a study is not possible.", "title": "" }, { "docid": "f9a3645848af9620d35c2163e3b4cbf9", "text": "Our professional services was released with a hope to function as a complete on-line digital catalogue that gives access to multitude of PDF file e-book collection. You might find many different types of e-publication as well as other literatures from your papers data base. Particular preferred subject areas that distribute on our catalog are popular books, answer key, examination test question and solution, information paper, training information, test sample, end user manual, user manual, support instructions, fix guide, and many others.", "title": "" }, { "docid": "dcdd23d3f87a58ada72e1a30668c799b", "text": "The ultimate goal of this study is to afford enhanced video object detection and tracking by eliminating the limitations which are existing nowadays. Although high performance ratio for video object detection and tracking is achieved in the earlier work it takes more time for computation. Consequently we are in need to propose a novel video object detection and tracking technique so as to minimize the computational complexity. Our proposed technique covers five stages they are preprocessing, segmentation, feature extraction, background subtraction and hole filling. Originally the video clip in the database is split into frames. Then preprocessing is performed so as to get rid of noise, an adaptive median filter is used in this stage to eliminate the noise. The preprocessed image then undergoes segmentation by means of modified region growing algorithm. 
The segmented image is subjected to feature extraction phase so as to extract the multi features from the segmented image and the background image, the feature value thus obtained are compared so as to attain optimal value, consequently a foreground image is attained in this stage. The foreground image is then subjected to morphological operations of erosion and dilation so as to fill the holes and to get the object accurately as these foreground image contains holes and discontinuities. Thus the moving object is tracked in this stage. This method will be employed in MATLAB platform and the outcomes will be studied and compared with the existing techniques so as to reveal the performance of the novel video object detection and tracking technique.", "title": "" }, { "docid": "5f109b71bf1e39030db2594e54718ce5", "text": "Following the hierarchical Bayesian framework for blind deconvolution problems, in this paper, we propose the use of simultaneous autoregressions as prior distributions for both the image and blur, and gamma distributions for the unknown parameters (hyperparameters) of the priors and the image formation noise. We show how the gamma distributions on the unknown hyperparameters can be used to prevent the proposed blind deconvolution method from converging to undesirable image and blur estimates and also how these distributions can be inferred in realistic situations. We apply variational methods to approximate the posterior probability of the unknown image, blur, and hyperparameters and propose two different approximations of the posterior distribution. One of these approximations coincides with a classical blind deconvolution method. The proposed algorithms are tested experimentally and compared with existing blind deconvolution methods", "title": "" }, { "docid": "bf78bfc617dfe5a152ad018dacbd5488", "text": "Identifying and fixing defects is a crucial and expensive part of the software lifecycle. Measuring the quality of bug-fixing patches is a difficult task that affects both functional correctness and the future maintainability of the code base. Recent research interest in automatic patch generation makes a systematic understanding of patch maintainability and understandability even more critical. \n We present a human study involving over 150 participants, 32 real-world defects, and 40 distinct patches. In the study, humans perform tasks that demonstrate their understanding of the control flow, state, and maintainability aspects of code patches. As a baseline we use both human-written patches that were later reverted and also patches that have stood the test of time to ground our results. To address any potential lack of readability with machine-generated patches, we propose a system wherein such patches are augmented with synthesized, human-readable documentation that summarizes their effects and context. Our results show that machine-generated patches are slightly less maintainable than human-written ones, but that trend reverses when machine patches are augmented with our synthesized documentation. Finally, we examine the relationship between code features (such as the ratio of variable uses to assignments) with participants' abilities to complete the study tasks and thus explain a portion of the broad concept of patch quality.", "title": "" }, { "docid": "a3e5608e2f9dcca3e7ee310733bb44f5", "text": "LEGO® presented the following problem at the SCAI’01 conference in February 2001: Given any 3D body, how can it be built from LEGO bricks? 
We apply methods of Evolutionary Algorithms (EA) to solve this optimization problem. For this purpose several specific operators are defined, applied, and their use is compared. In addition, mutation operators with dynamic rate of mutation updated based on their contribution to progress of evolution are proposed. Different population organization strategies are compared. Early results indicate that EA is suitable for solving this hard optimization problem.", "title": "" }, { "docid": "2cb298a8fc8102d61964a884c20e7d78", "text": "In this paper, the concept of data mining was summarized and its significance towards its methodologies was illustrated. The data mining based on Neural Network and Genetic Algorithm is researched in detail and the key technology and ways to achieve the data mining on Neural Network and Genetic Algorithm are also surveyed. This paper also conducts a formal review of the area of rule extraction from ANN and GA.", "title": "" }, { "docid": "e0597a2bc955598ca31209bd6eb82c88", "text": "Lateral skin stretch is a promising technology for haptic display of information between an autonomous or semi-autonomous car and a driver. We present the design of a steering wheel with an embedded lateral skin stretch display and report on the results of tests (N=10) conducted in a driving vehicle in suburban traffic. Results are generally consistent with previous results utilizing skin stretch in stationary applications, but a slightly higher, and particularly a faster rate of stretch application is preferred for accurate detection of direction and approximate magnitude.", "title": "" }, { "docid": "aa6c54a142442ee1de03c57f9afe8972", "text": "Objectives: We present our 3 years experience with alar batten grafts, using a modified technique, for non-iatrogenic nasal valve/alar", "title": "" }, { "docid": "a09d704c018cbdb9e67d6c7cfc127af3", "text": "A review of research on job performance suggests 3 broad components: task, citizenship, and counterproductive performance. This study examined the relative importance of each component to ratings of overall performance by using an experimental policy-capturing design. Managers in 5 jobs read hypothetical profiles describing employees' task, citizenship, and counterproductive performance and provided global ratings of performance. Within-subjects regression analyses indicated that the weights given to the 3 performance components varied across raters. Hierarchical cluster analyses indicated that raters' policies could be grouped into 3 homogeneous clusters: (a) task performance weighted highest, (b) counterproductive performance weighted highest, and (c) equal and large weights given to task and counterproductive performance. Hierarchical linear modeling indicated that demographic variables were not related to raters' weights.", "title": "" }, { "docid": "0fafa2597726dfeb4d35721c478f1038", "text": "Visual saliency models have enjoyed a big leap in performance in recent years, thanks to advances in deep learning and large scale annotated data. Despite enormous effort and huge breakthroughs, however, models still fall short in reaching human-level accuracy. In this work, I explore the landscape of the field emphasizing on new deep saliency models, benchmarks, and datasets. A large number of image and video saliency models are reviewed and compared over two image benchmarks and two large scale video datasets. 
Further, I identify factors that contribute to the gap between models and humans and discuss the remaining issues that need to be addressed to build the next generation of more powerful saliency models. Some specific questions that are addressed include: in what ways current models fail, how to remedy them, what can be learned from cognitive studies of attention, how explicit saliency judgments relate to fixations, how to conduct fair model comparison, and what are the emerging applications of saliency models.", "title": "" }, { "docid": "35756d57b4d322de9326aa0f71b49352", "text": "A 32-Gb/s data-interpolator receiver for electrical chip-to-chip communications is introduced. The receiver front-end samples incoming data by using a blind clock signal, which has a plesiochronous frequency-phase relation with the data. Phase alignment between the data and decision timing is achieved by interpolating the input-signal samples in the analog domain. The receiver has a continuous-time linear equalizer and a two-tap loop unrolled DFE using adjustable-threshold comparators. The receiver occupies 0.24 mm2 and consumes 308.4 mW from a 0.9-V supply when it is implemented with a 28-nm CMOS process.", "title": "" }, { "docid": "3102b35747011acf0a0d7038eca8522b", "text": "Ca2+-induced Ca2+ release is a general mechanism that most cells use to amplify Ca2+ signals. In heart cells, this mechanism is operated between voltage-gated L-type Ca2+ channels (LCCs) in the plasma membrane and Ca2+ release channels, commonly known as ryanodine receptors, in the sarcoplasmic reticulum. The Ca2+ influx through LCCs traverses a cleft of roughly 12 nm formed by the cell surface and the sarcoplasmic reticulum membrane, and activates adjacent ryanodine receptors to release Ca2+ in the form of Ca2+ sparks. Here we determine the kinetics, fidelity and stoichiometry of coupling between LCCs and ryanodine receptors. We show that the local Ca2+ signal produced by a single opening of an LCC, named a ‘Ca2+ sparklet’, can trigger about 4–6 ryanodine receptors to generate a Ca2+ spark. The coupling between LCCs and ryanodine receptors is stochastic, as judged by the exponential distribution of the coupling latency. The fraction of sparklets that successfully triggers a spark is less than unity and declines in a use-dependent manner. This optical analysis of single-channel communication affords a powerful means for elucidating Ca2+-signalling mechanisms at the molecular level.", "title": "" }, { "docid": "708915f99102f80b026b447f858e3778", "text": "One of the main obstacles to broad application of reinforcement learning methods is the parameter sensitivity of our core learning algorithms. In many large-scale applications, online computation and function approximation represent key strategies in scaling up reinforcement learning algorithms. In this setting, we have effective and reasonably well understood algorithms for adapting the learning-rate parameter, online during learning. Such meta-learning approaches can improve robustness of learning and enable specialization to current task, improving learning speed. For temporaldifference learning algorithms which we study here, there is yet another parameter, λ, that similarly impacts learning speed and stability in practice. Unfortunately, unlike the learning-rate parameter, λ parametrizes the objective function that temporal-difference methods optimize. 
Different choices of λ produce different fixed-point solutions, and thus adapting λ online and characterizing the optimization is substantially more complex than adapting the learning-rate parameter. There is no meta-learning method for λ that can achieve (1) incremental updating, (2) compatibility with function approximation, and (3) stability of learning under both on- and off-policy sampling. In this paper we contribute a novel objective function for optimizing λ as a function of state rather than time. We derive a new incremental, linear-complexity λ-adaptation algorithm that does not require offline batch updating or access to a model of the world, and present a suite of experiments illustrating the practicality of our new algorithm in three different settings. Taken together, our contributions represent a concrete step towards black-box application of temporal-difference learning methods in real-world problems.", "title": "" } ]
scidocsrr
27a365471fac18a25a8a35c332caf806
AttnGAN: Fine-Grained Text to Image Generation with Attentional Generative Adversarial Networks
[ { "docid": "dc3495ec93462e68f606246205a8416d", "text": "State-of-the-art methods for zero-shot visual recognition formulate learning as a joint embedding problem of images and side information. In these formulations the current best complement to visual features are attributes: manually-encoded vectors describing shared characteristics among categories. Despite good performance, attributes have limitations: (1) finer-grained recognition requires commensurately more attributes, and (2) attributes do not provide a natural language interface. We propose to overcome these limitations by training neural language models from scratch, i.e. without pre-training and only consuming words and characters. Our proposed models train end-to-end to align with the fine-grained and category-specific content of images. Natural language provides a flexible and compact way of encoding only the salient visual aspects for distinguishing categories. By training on raw text, our model can do inference on raw text as well, providing humans a familiar mode both for annotation and retrieval. Our model achieves strong performance on zero-shot text-based image retrieval and significantly outperforms the attribute-based state-of-the-art for zero-shot classification on the Caltech-UCSD Birds 200-2011 dataset.", "title": "" }, { "docid": "f32e8f005d277652fe691216e96e7fd8", "text": "PixelCNN achieves state-of-the-art results in density estimation for natural images. Although training is fast, inference is costly, requiring one network evaluation per pixel; O(N) for N pixels. This can be sped up by caching activations, but still involves generating each pixel sequentially. In this work, we propose a parallelized PixelCNN that allows more efficient inference by modeling certain pixel groups as conditionally independent. Our new PixelCNN model achieves competitive density estimation and orders of magnitude speedup O(log N) sampling instead of O(N) enabling the practical generation of 512× 512 images. We evaluate the model on class-conditional image generation, text-toimage synthesis, and action-conditional video generation, showing that our model achieves the best results among non-pixel-autoregressive density models that allow efficient sampling.", "title": "" }, { "docid": "6fc870c703611e07519ce5fe956c15d1", "text": "Severe weather conditions such as rain and snow adversely affect the visual quality of images captured under such conditions thus rendering them useless for further usage and sharing. In addition, such degraded images drastically affect performance of vision systems. Hence, it is important to solve the problem of single image de-raining/de-snowing. However, this is a difficult problem to solve due to its inherent ill-posed nature. Existing approaches attempt to introduce prior information to convert it into a well-posed problem. In this paper, we investigate a new point of view in addressing the single image de-raining problem. Instead of focusing only on deciding what is a good prior or a good framework to achieve good quantitative and qualitative performance, we also ensure that the de-rained image itself does not degrade the performance of a given computer vision algorithm such as detection and classification. In other words, the de-rained result should be indistinguishable from its corresponding clear image to a given discriminator. This criterion can be directly incorporated into the optimization framework by using the recently introduced conditional generative adversarial networks (GANs). 
To minimize artifacts introduced by GANs and ensure better visual quality, a new refined loss function is introduced. Based on this, we propose a novel single image de-raining method called Image De-raining Conditional Generative Adversarial Network (ID-CGAN), which incorporates quantitative, visual, and discriminative performance measures into the objective function. Experiments on synthetic and real images show that the proposed method outperforms many recent state-of-the-art single image de-raining methods in terms of quantitative and visual performance.", "title": "" } ]
[ { "docid": "285a1c073ec4712ac735ab84cbcd1fac", "text": "During a survey of black yeasts of marine origin, some isolates of Hortaea werneckii were recovered from scuba diving equipment, such as silicone masks and snorkel mouthpieces, which had been kept under poor storage conditions. These yeasts were unambiguously identified by phenotypic and genotypic methods. Phylogenetic analysis of both the D1/D2 regions of 26S rRNA gene and ITS-5.8S rRNA gene sequences showed three distinct genetic types. This species is the agent of tinea nigra which is a rarely diagnosed superficial mycosis in Europe. In fact this mycosis is considered an imported fungal infection being much more prevalent in warm, humid parts of the world such as the Central and South Americas, Africa, and Asia. Although H. werneckii has been found in hypersaline environments in Europe, this is the first instance of the isolation of this halotolerant species from scuba diving equipment made with silicone rubber which is used in close contact with human skin and mucous membranes. The occurrence of this fungus in Spain is also an unexpected finding because cases of tinea nigra in this country are practically not seen.", "title": "" }, { "docid": "f9f1cf949093c41a84f3af854a2c4a8b", "text": "Modern TCP implementations are capable of very high point-to-point bandwidths. Delivered performance on the fastest networks is often limited by the sending and receiving hosts, rather than by the network hardware or the TCP protocol implementation itself. In this case, systems can achieve higher bandwidth by reducing host overheads through a variety of optimizations above and below the TCP protocol stack, given support from the network interface. This paper surveys the most important of these optimizations, and illustrates their effects quantitatively with empirical results from a an experimental network delivering up to two gigabits per second of point-to-point TCP bandwidth.", "title": "" }, { "docid": "fff0d8210d1ec328337c483dccb7f3eb", "text": "Linear drive technologies are steadily expanded in various applications, especially in industry, where high precision electrical direct drive systems are required. In this paper a double-sided variant of a novel direct driven modular permanent magnet linear motor is presented. Its characteristics are computed by means of 3D FEM magnetic field analysis. An interesting industrial application in which it can be used is also presented", "title": "" }, { "docid": "31a036aa43ef218e3c223ab9ab19329c", "text": "Natural Language Processing (NLP) is the most important field of computer science which deals with human language and computers. In India, most of the people live in the rural areas, so they have the problem of understanding English language. To address this issue, we discussed here, about various factors which affect Hindi natural language processing. In this paper, a framework for information retrieval in Hindi is also proposed by authors. This framework is based on automatically inducing word sense using graph based method for all open class words present in the intended query. Till now researchers have worked for noun sense only, but here we expanded the framework for all open class words i.e. noun, verb, adverb and adjective.", "title": "" }, { "docid": "754fb355da63d024e3464b4656ea5e8d", "text": "Improvements in implant designs have helped advance successful immediate anterior implant placement into fresh extraction sockets. 
Clinical techniques described in this case enable practitioners to achieve predictable esthetic success using a method that limits the amount of buccal contour change of the extraction site ridge and potentially enhances the thickness of the peri-implant soft tissues coronal to the implant-abutment interface. This approach involves atraumatic tooth removal without flap elevation, and placing a bone graft into the residual gap around an immediate fresh-socket anterior implant with a screw-retained provisional restoration acting as a prosthetic socket seal device.", "title": "" }, { "docid": "73080f337ae7ec5ef0639aec374624de", "text": "We propose a framework for the robust and fully-automatic segmentation of magnetic resonance (MR) brain images called \"Multi-Atlas Label Propagation with Expectation-Maximisation based refinement\" (MALP-EM). The presented approach is based on a robust registration approach (MAPER), highly performant label fusion (joint label fusion) and intensity-based label refinement using EM. We further adapt this framework to be applicable for the segmentation of brain images with gross changes in anatomy. We propose to account for consistent registration errors by relaxing anatomical priors obtained by multi-atlas propagation and a weighting scheme to locally combine anatomical atlas priors and intensity-refined posterior probabilities. The method is evaluated on a benchmark dataset used in a recent MICCAI segmentation challenge. In this context we show that MALP-EM is competitive for the segmentation of MR brain scans of healthy adults when compared to state-of-the-art automatic labelling techniques. To demonstrate the versatility of the proposed approach, we employed MALP-EM to segment 125 MR brain images into 134 regions from subjects who had sustained traumatic brain injury (TBI). We employ a protocol to assess segmentation quality if no manual reference labels are available. Based on this protocol, three independent, blinded raters confirmed on 13 MR brain scans with pathology that MALP-EM is superior to established label fusion techniques. We visually confirm the robustness of our segmentation approach on the full cohort and investigate the potential of derived symmetry-based imaging biomarkers that correlate with and predict clinically relevant variables in TBI such as the Marshall Classification (MC) or Glasgow Outcome Score (GOS). Specifically, we show that we are able to stratify TBI patients with favourable outcomes from non-favourable outcomes with 64.7% accuracy using acute-phase MR images and 66.8% accuracy using follow-up MR images. Furthermore, we are able to differentiate subjects with the presence of a mass lesion or midline shift from those with diffuse brain injury with 76.0% accuracy. The thalamus, putamen, pallidum and hippocampus are particularly affected. Their involvement predicts TBI disease progression.", "title": "" }, { "docid": "4bee6ec901c365f3780257ed62b7c020", "text": "There is no explicitly known example of a triple (g, a, x), where g ≥ 3 is an integer, a a digit in {0, . . . , g − 1} and x a real algebraic irrational number, for which one can claim that the digit a occurs infinitely often in the g–ary expansion of x. In 1909 and later in 1950, É. Borel considered such questions and suggested that the g–ary expansion of any algebraic irrational number in any base g ≥ 2 satisfies some of the laws that are satisfied by almost all numbers. 
For instance, the frequency where a given finite sequence of digits occurs should depend only on the base and on the length of the sequence. Hence there is a huge gap between the established theory and the expected state of the art. However, some progress have been made recently, mainly thanks to clever use of the Schmidt’s subspace Theorem. We review some of these results.", "title": "" }, { "docid": "cbd6e6c75cae86426c21a38bd523200f", "text": "Schottky junctions have been realized by evaporating gold spots on top of sexithiophen (6T), which is deposited on TiO 2 or ZnO with e-beam and spray pyrolysis. Using Mott-Schottky analysis of 6T/TiO2 and 6T/ZnO devices acceptor densities of 4.5x10(16) and 3.7x10(16) cm(-3) are obtained, respectively. For 6T/TiO2 deposited with the e-beam evaporation a conductivity of 9x10(-8) S cm(-1) and a charge carrier mobility of 1.2x10(-5) cm2/V s is found. Impedance spectroscopy is used to model the sample response in detail in terms of resistances and capacitances. An equivalent circuit is derived from the impedance measurements. The high-frequency data are analyzed in terms of the space-charge capacitance. In these frequencies shallow acceptor states dominate the heterojunction time constant. The high-frequency RC time constant is 8 micros. Deep acceptor states are represented by a resistance and a CPE connected in series. The equivalent circuit is validated in the potential range (from -1.2 to 0.8 V) for 6T/ZnO obtained with spray pyrolysis.", "title": "" }, { "docid": "7df626465d52dfe5859e682c685c62bc", "text": "This thesis addresses the task of error detection in the choice of content words focusing on adjective–noun and verb–object combinations. We show that error detection in content words is an under-explored area in research on learner language since (i) most previous approaches to error detection and correction have focused on other error types, and (ii) the approaches that have previously addressed errors in content words have not performed error detection proper. We show why this task is challenging for the existing algorithms and propose a novel approach to error detection in content words. We note that since content words express meaning, an error detection algorithm should take the semantic properties of the words into account. We use a compositional distribu-tional semantic framework in which we represent content words using their distributions in native English, while the meaning of the combinations is represented using models of com-positional semantics. We present a number of measures that describe different properties of the modelled representations and can reliably distinguish between the representations of the correct and incorrect content word combinations. Finally, we cast the task of error detection as a binary classification problem and implement a machine learning classifier that uses the output of the semantic measures as features. The results of our experiments confirm that an error detection algorithm that uses semantically motivated features achieves good accuracy and precision and outperforms the state-of-the-art approaches. We conclude that the features derived from the semantic representations encode important properties of the combinations that help distinguish the correct combinations from the incorrect ones. The approach presented in this work can naturally be extended to other types of content word combinations. 
Future research should also investigate how the error correction component for content word combinations could be implemented. 3 4 Acknowledgements First and foremost, I would like to express my profound gratitude to my supervisor, Ted Briscoe, for his constant support and encouragement throughout the course of my research. This work would not have been possible without his invaluable guidance and advice. I am immensely grateful to my examiners, Ann Copestake and Stephen Pulman, for providing their advice and constructive feedback on the final version of the dissertation. I am also thankful to my colleagues at the Natural Language and Information Processing research group for the insightful and inspiring discussions over these years. In particular, I would like to express my gratitude to would like to thank …", "title": "" }, { "docid": "9a427dae8e47e6004a45a19a2283326f", "text": "This article describes the challenges that women and women of color face in their quest to achieve and perform in leadership roles in work settings. We discuss the barriers that women encounter and specifically address the dimensions of gender and race and their impact on leadership. We identify the factors associated with gender evaluations of leaders and the stereotypes and other challenges faced by White women and women of color. We use ideas concerning identity and the intersection of multiple identities to understand the way in which gender mediates and shapes the experience of women in the workplace. We conclude with suggestions for research and theory development that may more fully capture the complex experience of women who serve as leaders.", "title": "" }, { "docid": "81bfa44ec29532d07031fa3b74ba818d", "text": "We propose a recurrent extension of the Ladder networks [22] whose structure is motivated by the inference required in hierarchical latent variable models. We demonstrate that the recurrent Ladder is able to handle a wide variety of complex learning tasks that benefit from iterative inference and temporal modeling. The architecture shows close-to-optimal results on temporal modeling of video data, competitive results on music modeling, and improved perceptual grouping based on higher order abstractions, such as stochastic textures and motion cues. We present results for fully supervised, semi-supervised, and unsupervised tasks. The results suggest that the proposed architecture and principles are powerful tools for learning a hierarchy of abstractions, learning iterative inference and handling temporal information.", "title": "" }, { "docid": "a8920f6ba4500587cf2a160b8d91331a", "text": "In this paper, we present an approach that can handle Z-numbers in the context of multi-criteria decision-making problems. The concept of Z-number as an ordered pair Z=(A, B) of fuzzy numbers A and B is used, where A is a linguistic value of a variable of interest and B is a linguistic value of the probability measure of A. As human beings, we communicate with each other by means of natural language using sentences like “the journey from home to university most likely takes about half an hour.” The Z-numbers are converted to fuzzy numbers. Then the Z-TODIM and Z-TOPSIS are presented as a direct extension of the fuzzy TODIM and fuzzy TOPSIS, respectively. The proposed methods are applied to two case studies and compared with the standard approach using crisp values. 
The results obtained show the feasibility of the approach.", "title": "" }, { "docid": "5515e892363c3683e39c6d5ec4abe22d", "text": "Government agencies are investing a considerable amount of resources into improving security systems as result of recent terrorist events that dangerously exposed flaws and weaknesses in today’s safety mechanisms. Badge or password-based authentication procedures are too easy to hack. Biometrics represents a valid alternative but they suffer of drawbacks as well. Iris scanning, for example, is very reliable but too intrusive; fingerprints are socially accepted, but not applicable to non-consentient people. On the other hand, face recognition represents a good compromise between what’s socially acceptable and what’s reliable, even when operating under controlled conditions. In last decade, many algorithms based on linear/nonlinear methods, neural networks, wavelets, etc. have been proposed. Nevertheless, Face Recognition Vendor Test 2002 shown that most of these approaches encountered problems in outdoor conditions. This lowered their reliability compared to state of the art biometrics. This paper provides an ‘‘ex cursus’’ of recent face recognition research trends in 2D imagery and 3D model based algorithms. To simplify comparisons across different approaches, tables containing different collection of parameters (such as input size, recognition rate, number of addressed problems) are provided. This paper concludes by proposing possible future directions. 2007 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "dbb4540af2166d4292253b17ce1ff68f", "text": "On average, men outperform women on mental rotation tasks. Even boys as young as 4 1/2 perform better than girls on simplified spatial transformation tasks. The goal of our study was to explore ways of improving 5-year-olds' performance on a spatial transformation task and to examine the strategies children use to solve this task. We found that boys performed better than girls before training and that both boys and girls improved with training, whether they were given explicit instruction or just practice. Regardless of training condition, the more children gestured about moving the pieces when asked to explain how they solved the spatial transformation task, the better they performed on the task, with boys gesturing about movement significantly more (and performing better) than girls. Gesture thus provides useful information about children's spatial strategies, raising the possibility that gesture training may be particularly effective in improving children's mental rotation skills.", "title": "" }, { "docid": "3860b1d259317da9ac6fe2c2ab161ce3", "text": "In recent years, state-of-the-art methods in computer vision have utilized increasingly deep convolutional neural network architectures (CNNs), with some of the most successful models employing hundreds or even thousands of layers. A variety of pathologies such as vanishing/exploding gradients make training such deep networks challenging. While residual connections and batch normalization do enable training at these depths, it has remained unclear whether such specialized architecture designs are truly necessary to train deep CNNs. In this work, we demonstrate that it is possible to train vanilla CNNs with ten thousand layers or more simply by using an appropriate initialization scheme. 
We derive this initialization scheme theoretically by developing a mean field theory for signal propagation and by characterizing the conditions for dynamical isometry, the equilibration of singular values of the input-output Jacobian matrix. These conditions require that the convolution operator be an orthogonal transformation in the sense that it is norm-preserving. We present an algorithm for generating such random initial orthogonal convolution kernels and demonstrate empirically that they enable efficient training of extremely deep architectures.", "title": "" }, { "docid": "61b7c35516b8a3f2a387526ef2541434", "text": "Understanding and quantifying dependence is at the core of all modelling efforts in financial econometrics. The linear correlation coefficient, which is the far most used measure to test dependence in the financial community and also elsewhere, is only a measure of linear dependence. This means that it is a meaningful measure of dependence if asset returns are well represented by an elliptical distribution. Outside the world of elliptical distributions, however, using the linear correlation coefficient as a measure of dependence may lead to misleading conclusions. Hence, alternative methods for capturing co-dependency should be considered. One class of alternatives are copula-based dependence measures. In this survey we consider two parametric families of copulas; the copulas of normal mixture distributions and Archimedean copulas.", "title": "" }, { "docid": "78cc06956c3f945f013b7baabd97929d", "text": "Main memory is one of the leading hardware causes for machine crashes in today's datacenters. Designing, evaluating and modeling systems that are resilient against memory errors requires a good understanding of the underlying characteristics of errors in DRAM in the field. While there have recently been a few first studies on DRAM errors in production systems, these have been too limited in either the size of the data set or the granularity of the data to conclusively answer many of the open questions on DRAM errors. Such questions include, for example, the prevalence of soft errors compared to hard errors, or the analysis of typical patterns of hard errors. In this paper, we study data on DRAM errors collected on a diverse range of production systems in total covering nearly 300 terabyte-years of main memory. As a first contribution, we provide a detailed analytical study of DRAM error characteristics, including both hard and soft errors. We find that a large fraction of DRAM errors in the field can be attributed to hard errors and we provide a detailed analytical study of their characteristics. As a second contribution, the paper uses the results from the measurement study to identify a number of promising directions for designing more resilient systems and evaluates the potential of different protection mechanisms in the light of realistic error patterns. One of our findings is that simple page retirement policies might be able to mask a large number of DRAM errors in production systems, while sacrificing only a negligible fraction of the total DRAM in the system.", "title": "" }, { "docid": "20a2390dede15514cd6a70e9b56f5432", "text": "The ability to record and replay program executions with low overhead enables many applications, such as reverse-execution debugging, debugging of hard-toreproduce test failures, and “black box” forensic analysis of failures in deployed systems. 
Existing record-andreplay approaches limit deployability by recording an entire virtual machine (heavyweight), modifying the OS kernel (adding deployment and maintenance costs), requiring pervasive code instrumentation (imposing significant performance and complexity overhead), or modifying compilers and runtime systems (limiting generality). We investigated whether it is possible to build a practical record-and-replay system avoiding all these issues. The answer turns out to be yes — if the CPU and operating system meet certain non-obvious constraints. Fortunately modern Intel CPUs, Linux kernels and user-space frameworks do meet these constraints, although this has only become true recently. With some novel optimizations, our system RR records and replays real-world lowparallelism workloads with low overhead, with an entirely user-space implementation, using stock hardware, compilers, runtimes and operating systems. RR forms the basis of an open-source reverse-execution debugger seeing significant use in practice. We present the design and implementation of RR, describe its performance on a variety of workloads, and identify constraints on hardware and operating system design required to support our approach.", "title": "" }, { "docid": "86bd51407b0774d07e9f8cdea04c8e1d", "text": "A new method for learning variational autoencoders (VAEs) is developed, based on Stein variational gradient descent. A key advantage of this approach is that one need not make parametric assumptions about the form of the encoder distribution. Performance is further enhanced by integrating the proposed encoder with importance sampling. Excellent performance is demonstrated across multiple unsupervised and semi-supervised problems, including semi-supervised analysis of the ImageNet data, demonstrating the scalability of the model to large datasets.", "title": "" }, { "docid": "a8aa7af1b9416d4bd6df9d4e8bcb8a40", "text": "User-computer dialogues are typically one-sided, with the bandwidth from computer to user far greater than that from user to computer. The movement of a user’s eyes can provide a convenient, natural, and high-bandwidth source of additional user input, to help redress this imbalance. We therefore investigate the introduction of eye movements as a computer input medium. Our emphasis is on the study of interaction techniques that incorporate eye movements into the user-computer dialogue in a convenient and natural way. This chapter describes research at NRL on developing such interaction techniques and the broader issues raised by non-command-based interaction styles. It discusses some of the human factors and technical considerations that arise in trying to use eye movements as an input medium, describes our approach and the first eye movement-based interaction techniques that we have devised and implemented in our laboratory, reports our experiences and observations on them, and considers eye movement-based interaction as an exemplar of a new, more general class of non-command-based user-computer interaction.", "title": "" } ]
scidocsrr
af1a6f7baa4b0a78c2d2adebfa845712
BROCCOLI: Software for fast fMRI analysis on many-core CPUs and GPUs
[ { "docid": "d7142245920a5c1f51c716a549a0ee8e", "text": "Finding objective and effective thresholds for voxelwise statistics derived from neuroimaging data has been a long-standing problem. With at least one test performed for every voxel in an image, some correction of the thresholds is needed to control the error rates, but standard procedures for multiple hypothesis testing (e.g., Bonferroni) tend to not be sensitive enough to be useful in this context. This paper introduces to the neuroscience literature statistical procedures for controlling the false discovery rate (FDR). Recent theoretical work in statistics suggests that FDR-controlling procedures will be effective for the analysis of neuroimaging data. These procedures operate simultaneously on all voxelwise test statistics to determine which tests should be considered statistically significant. The innovation of the procedures is that they control the expected proportion of the rejected hypotheses that are falsely rejected. We demonstrate this approach using both simulations and functional magnetic resonance imaging data from two simple experiments.", "title": "" } ]
[ { "docid": "6d149a530769b61a34bcd5b8d900dbcd", "text": "Click here and insert your abstract text. The Web accessibility issue has been subject of study for a wide number of organizations all around the World. The current paper describes an accessibility evaluation that aimed to test the Portuguese enterprises websites. Has the presented results state, the evaluated websites accessibility levels are significantly bad, but the majority of the detected errors are not very complex from a technological point-of-view. With this is mind, our research team, in collaboration with a Portuguese enterprise named ANO and the support of its UTAD-ANOgov/PEPPOL research project, elaborated an improvement proposal, directed to the Web content developers, which aimed on helping these specialists to better understand and implement Web accessibility features. © 2013 The Authors. Published by Elsevier B.V. Selection and peer-review under responsibility of the Scientific Programme Committee of the 5th International Conference on Software Development and Technologies for Enhancing Accessibility and Fighting Info-exclusion (DSAI 2013).", "title": "" }, { "docid": "3575842a3306a11bfcc5b370c6d67daf", "text": "BACKGROUND AND PURPOSE\nMental practice (MP) of a particular motor skill has repeatedly been shown to activate the same musculature and neural areas as physical practice of the skill. Pilot study results suggest that a rehabilitation program incorporating MP of valued motor skills in chronic stroke patients provides sufficient repetitive practice to increase affected arm use and function. This Phase 2 study compared efficacy of a rehabilitation program incorporating MP of specific arm movements to a placebo condition using randomized controlled methods and an appropriate sample size. Method- Thirty-two chronic stroke patients (mean=3.6 years) with moderate motor deficits received 30-minute therapy sessions occurring 2 days/week for 6 weeks, and emphasizing activities of daily living. Subjects randomly assigned to the experimental condition also received 30-minute MP sessions provided directly after therapy requiring daily MP of the activities of daily living; subjects assigned to the control group received the same amount of therapist interaction as the experimental group, and a sham intervention directly after therapy, consisting of relaxation. Outcomes were evaluated by a blinded rater using the Action Research Arm test and the upper extremity section of the Fugl-Meyer Assessment.\n\n\nRESULTS\nNo pre-existing group differences were found on any demographic variable or movement scale. Subjects receiving MP showed significant reductions in affected arm impairment and significant increases in daily arm function (both at the P<0.0001 level). Only patients in the group receiving MP exhibited new ability to perform valued activities.\n\n\nCONCLUSIONS\nThe results support the efficacy of programs incorporating mental practice for rehabilitating affected arm motor function in patients with chronic stroke. These changes are clinically significant.", "title": "" }, { "docid": "cc6458464cd8bb152683fde0af1e3d23", "text": "While the application of IoT in smart technologies becomes more and more proliferated, the pandemonium of its protocols becomes increasingly confusing. More seriously, severe security deficiencies of these protocols become evident, as time-to-market is a key factor, which satisfaction comes at the price of a less thorough security design and testing. 
This applies especially to the smart home domain, where the consumer-driven market demands quick and cheap solutions. This paper presents an overview of IoT application domains and discusses the most important wireless IoT protocols for smart home, which are KNX-RF, EnOcean, Zigbee, Z-Wave and Thread. Finally, it describes the security features of said protocols and compares them with each other, giving advice on whose protocols are more suitable for a secure smart home.", "title": "" }, { "docid": "3cc97542631d734d8014abfbef652c79", "text": "Internet exchange points (IXPs) are an important ingredient of the Internet AS-level ecosystem - a logical fabric of the Internet made up of about 30,000 ASes and their mutual business relationships whose primary purpose is to control and manage the flow of traffic. Despite the IXPs' critical role in this fabric, little is known about them in terms of their peering matrices (i.e., who peers with whom at which IXP) and corresponding traffic matrices (i.e., how much traffic do the different ASes that peer at an IXP exchange with one another). In this paper, we report on an Internet-wide traceroute study that was specifically designed to shed light on the unknown IXP-specific peering matrices and involves targeted traceroutes from publicly available and geographically dispersed vantage points. Based on our method, we were able to discover and validate the existence of about 44K IXP-specific peering links - nearly 18K more links than were previously known. In the process, we also classified all known IXPs depending on the type of information required to detect them. Moreover, in view of the currently used inferred AS-level maps of the Internet that are known to miss a significant portion of the actual AS relationships of the peer-to-peer type, our study provides a new method for augmenting these maps with IXP-related peering links in a systematic and informed manner.", "title": "" }, { "docid": "54eea56f03b9b9f5983857550b83a5da", "text": "This paper summarizes opportunities for silicon process technologies at mmwave and terahertz frequencies and demonstrates key building blocks for 94-GHz and 600-GHz imaging arrays. It reviews potential applications and summarizes state-of-the-art terahertz technologies. Terahertz focal-plane arrays (FPAs) for video-rate imaging applications have been fabricated in commercially available CMOS and SiGe process technologies respectively. The 3times5 arrays achieve a responsivity of up to 50 kV/W with a minimum NEP of 400 pW/radicHz at 600 GHz. Images of postal envelopes are presented which demonstrate the potential of silicon integrate 600-GHz terahertz FPAs for future low-cost terahertz camera systems.", "title": "" }, { "docid": "e1e878c5df90a96811f885935ac13888", "text": "Multiple-input-multiple-output (MIMO) wireless systems use multiple antenna elements at transmit and receive to offer improved capacity over single antenna topologies in multipath channels. In such systems, the antenna properties as well as the multipath channel characteristics play a key role in determining communication performance. This paper reviews recent research findings concerning antennas and propagation in MIMO systems. Issues considered include channel capacity computation, channel measurement and modeling approaches, and the impact of antenna element properties and array configuration on system performance. 
Throughout the discussion, outstanding research questions in these areas are highlighted.", "title": "" }, { "docid": "81919bc432dd70ed3e48a0122d91b9e4", "text": "Artemisinin resistance in Plasmodium falciparum has emerged as a major threat for malaria control and elimination worldwide. Mutations in the Kelch propeller domain of PfK13 are the only known molecular markers for artemisinin resistance in this parasite. Over 100 non-synonymous mutations have been identified in PfK13 from various malaria endemic regions. This study aimed to investigate the genetic diversity of PvK12, the Plasmodium vivax ortholog of PfK13, in parasite populations from Southeast Asia, where artemisinin resistance in P. falciparum has emerged. The PvK12 sequences in 120 P. vivax isolates collected from Thailand (22), Myanmar (32) and China (66) between 2004 and 2008 were obtained and 353 PvK12 sequences from worldwide populations were retrieved for further analysis. These PvK12 sequences revealed a very low level of genetic diversity (π = 0.00003) with only three single nucleotide polymorphisms (SNPs). Of these three SNPs, only G581R is nonsynonymous. The synonymous mutation S88S is present in 3% (1/32) of the Myanmar samples, while G704G and G581R are present in 1.5% (1/66) and 3% (2/66) of the samples from China, respectively. None of the mutations observed in the P. vivax samples were associated with artemisinin resistance in P. falciparum. Furthermore, analysis of 473 PvK12 sequences from twelve worldwide P. vivax populations confirmed the very limited polymorphism in this gene and detected only five distinct haplotypes. The PvK12 sequences from global P. vivax populations displayed very limited genetic diversity indicating low levels of baseline polymorphisms of PvK12 in these areas.", "title": "" }, { "docid": "810158f2907eec894e54a57dabb2b9c4", "text": "Dependability properties of bi-directional and braided rings are well recognized in improving communication availability. However, current ring-based topologies have no mechanisms for extreme integrity and have not been considered for emerging high-dependability markets where cost is a significant driver, such as the automotive \"by-wire\" applications. This paper introduces a braided-ring architecture with superior guardian functionality and complete Byzantine fault tolerance while simultaneously reducing cost. This paper reviews anticipated requirements for high-dependability low-cost applications and emphasizes the need for regular safe testing of core coverage functions. The paper describes the ring's main mechanisms for achieving integrity and availability levels similar to SAFEbus/spl reg/ but at low automotive costs. The paper also presents a mechanism to achieve self-stabilizing TDMA-based communication and design methods for fault-tolerant protocols on a network of simplex nodes. The paper also introduces a new self-checking pair concept that leverages braided-ring properties. This novel message-based self-checking-pair concept allows high-integrity source data at extremely low cost.", "title": "" }, { "docid": "1203822bf82dcd890e7a7a60fb282ce5", "text": "Individuals with psychosocial problems such as social phobia or feelings of loneliness might be vulnerable to excessive use of cyber-technological devices, such as smartphones. We aimed to determine the relationship of smartphone addiction with social phobia and loneliness in a sample of university students in Istanbul, Turkey. 
Three hundred and sixty-seven students who owned smartphones were given the Smartphone Addiction Scale (SAS), UCLA Loneliness Scale (UCLA-LS), and Brief Social Phobia Scale (BSPS). A significant difference was found in the mean SAS scores (p < .001) between users who declared that their main purpose for smartphone use was to access social networking sites. The BSPS scores showed positive correlations with all six subscales and with the total SAS scores. The total UCLA-LS scores were positively correlated with daily life disturbance, positive anticipation, cyber-oriented relationship, and total scores on the SAS. In regression analyses, total BSPS scores were significant predictors for SAS total scores (β = 0.313, t = 5.992, p < .001). In addition, BSPS scores were significant predictors for all six SAS subscales, whereas UCLA-LS scores were significant predictors for only cyber-oriented relationship subscale scores on the SAS (β = 0.130, t = 2.416, p < .05). The results of this study indicate that social phobia was associated with the risk for smartphone addiction in young people. Younger individuals who primarily use their smartphones to access social networking sites also have an excessive pattern of smartphone use. ARTICLE HISTORY Received 12 January 2016 Accepted 19 February 2016", "title": "" }, { "docid": "fb116c7cd3ab8bd88fb7817284980d4a", "text": "Sentence-level sentiment classification is important to understand users' fine-grained opinions. Existing methods for sentence-level sentiment classification are mainly based on supervised learning. However, it is difficult to obtain sentiment labels of sentences since manual annotation is expensive and time-consuming. In this paper, we propose an approach for sentence-level sentiment classification without the need of sentence labels. More specifically, we propose a unified framework to incorporate two types of weak supervision, i.e., document-level and word-level sentiment labels, to learn the sentence-level sentiment classifier. In addition, the contextual information of sentences and words extracted from unlabeled sentences is incorporated into our approach to enhance the learning of sentiment classifier. Experiments on benchmark datasets show that our approach can effectively improve the performance of sentence-level sentiment classification.", "title": "" }, { "docid": "3301a0cf26af8d4d8c7b2b9d56cec292", "text": "Reading comprehension (RC)—in contrast to information retrieval—requires integrating information and reasoning about events, entities, and their relations across a full document. Question answering is conventionally used to assess RC ability, in both artificial agents and children learning to read. However, existing RC datasets and tasks are dominated by questions that can be solved by selecting answers using superficial information (e.g., local context similarity or global term frequency); they thus fail to test for the essential integrative aspect of RC. To encourage progress on deeper comprehension of language, we present a new dataset and set of tasks in which the reader must answer questions about stories by reading entire books or movie scripts. These tasks are designed so that successfully answering their questions requires understanding the underlying narrative rather than relying on shallow pattern matching or salience. We show that although humans solve the tasks easily, standard RC models struggle on the tasks presented here. 
We provide an analysis of the dataset and the challenges it presents.", "title": "" }, { "docid": "0f173a3486bf09ced9d221019241c7c4", "text": "In millimeter-wave (mmWave) systems, antenna architecture limitations make it difficult to apply conventional fully digital precoding techniques but call for low-cost analog radio frequency (RF) and digital baseband hybrid precoding methods. This paper investigates joint RF-baseband hybrid precoding for the downlink of multiuser multiantenna mmWave systems with a limited number of RF chains. Two performance measures, maximizing the spectral efficiency and the energy efficiency of the system, are considered. We propose a codebook-based RF precoding design and obtain the channel state information via a beam sweep procedure. Via the codebook-based design, the original system is transformed into a virtual multiuser downlink system with the RF chain constraint. Consequently, we are able to simplify the complicated hybrid precoding optimization problems to joint codeword selection and precoder design (JWSPD) problems. Then, we propose efficient methods to address the JWSPD problems and jointly optimize the RF and baseband precoders under the two performance measures. Finally, extensive numerical results are provided to validate the effectiveness of the proposed hybrid precoders.", "title": "" }, { "docid": "c26c5691c34a26f7710448765521b6d5", "text": "Text messages sent via the Short Message Service (SMS) have revolutionized interpersonal communication. Recent years have also seen this service become a critical component of the security infrastructure, assisting with tasks including identity verification and second-factor authentication. At the same time, this messaging infrastructure has become dramatically more open and connected to public networks than ever before. However, the implications of this openness, the security practices of benign services, and the malicious misuse of this ecosystem are not well understood. In this paper, we provide the first longitudinal study to answer these questions, analyzing nearly 400,000 text messages sent to public online SMS gateways over the course of 14 months. From this data, we are able to identify not only a range of services sending extremely sensitive plaintext data and implementing low entropy solutions for one-use codes, but also offer insights into the prevalence of SMS spam and behaviors indicating that public gateways are primarily used for evading account creation policies that require verified phone numbers. This latter finding has significant implications for research combatting phone-verified account fraud and demonstrates that such evasion will continue to be difficult to detect and prevent.", "title": "" }, { "docid": "dfae67d62731a9307a10de7b11d6d117", "text": "A 16 Gb 4-state MLC NAND flash memory augments the sustained program throughput to 34 MB/s by fully exercising all the available cells along a selected word line and by using additional performance enhancement modes. The same chip operating as an 8 Gb SLC device guarantees over 60 MB/s programming throughput. The newly introduced all bit line (ABL) architecture has multiple advantages when higher performance is targeted and it was made possible by adopting the ldquocurrent sensingrdquo (as opposed to the mainstream ldquovoltage sensingrdquo) technique. The general chip architecture is presented in contrast to a state of the art conventional circuit and a double size data buffer is found to be necessary for the maximum parallelism attained. 
Further conceptual changes designed to counterbalance the area increase are presented, hierarchical column architecture being of foremost importance. Optimization of other circuits, such as the charge pump, is another example. Fast data access rate is essential, and ways of boosting it are described, including a new redundancy scheme. ABL contribution to energy saving is also acknowledged.", "title": "" }, { "docid": "eec7a9a6859e641c3cc0ade73583ef5c", "text": "We propose an Apache Spark-based scale-up server architecture using Docker container-based partitioning method to improve performance scalability. The performance scalability problem of Apache Spark-based scale-up servers is due to garbage collection(GC) and remote memory access overheads when the servers are equipped with significant number of cores and Non-Uniform Memory Access(NUMA). The proposed method minimizes the problems using Docker container-based architecture effectively partitioning the original scale-up server into small logical servers. Our evaluation study based on benchmark programs revealed that the partitioning method showed performance improvement by ranging from 1.1x through 1.7x on a 120 core scale-up system. Our proof-of-concept scale-up server architecture provides the basis towards complete and practical design of partitioning-based scale-up servers showing performance scalability.", "title": "" }, { "docid": "9ce08ed9e7e34ef1f5f12bfbe54e50ea", "text": "GPU-based clusters are increasingly being deployed in HPC environments to accelerate a variety of scientific applications. Despite their growing popularity, the GPU devices themselves are under-utilized even for many computationally-intensive jobs. This stems from the fact that the typical GPU usage model is one in which a host processor periodically offloads computationally intensive portions of an application to the coprocessor. Since some portions of code cannot be offloaded to the GPU (for example, code performing network communication in MPI applications), this usage model results in periods of time when the GPU is idle. GPUs could be time-shared across jobs to \"fill\" these idle periods, but unlike CPU resources such as the cache, the effects of sharing the GPU are not well understood. Specifically, two jobs that time-share a single GPU will experience resource contention and interfere with each other. The resulting slow-down could lead to missed job deadlines. Current cluster managers do not support GPU-sharing, but instead dedicate GPUs to a job for the job's lifetime.\n In this paper, we present a framework to predict and handle interference when two or more jobs time-share GPUs in HPC clusters. Our framework consists of an analysis model, and a dynamic interference detection and response mechanism to detect excessive interference and restart the interfering jobs on different nodes. We implement our framework in Torque, an open-source cluster manager, and using real workloads on an HPC cluster, show that interference-aware two-job colocation (although our method is applicable to colocating more than two jobs) improves GPU utilization by 25%, reduces a job's waiting time in the queue by 39% and improves job latencies by around 20%.", "title": "" }, { "docid": "d60e344c8bfb4422c947ddf22e9837b5", "text": "INTRODUCTION\nPrevious studies evaluated the perception of laypersons to symmetric alteration of anterior dental esthetics. However, no studies have evaluated the perception of asymmetric esthetic alterations. 
This investigation will determine whether asymmetric and symmetric anterior dental discrepancies are detectable by dental professionals and laypersons.\n\n\nMETHODS\nSeven images of women's smiles were intentionally altered with a software-imaging program. The alterations involved crown length, crown width, midline diastema, papilla height, and gingiva-to-lip relationship of the maxillary anterior teeth. These altered images were rated by groups of general dentists, orthodontists, and laypersons using a visual analog scale. Statistical analysis of the responses resulted in the establishment of threshold levels of attractiveness for each group.\n\n\nRESULTS\nOrthodontists were more critical than dentists and laypeople when evaluating asymmetric crown length discrepancies. All 3 groups could identify a unilateral crown width discrepancy of 2.0 mm. A small midline diastema was not rated as unattractive by any group. Unilateral reduction of papillary height was generally rated less attractive than bilateral alteration. Orthodontists and laypeople rated a 3-mm distance from gingiva to lip as unattractive.\n\n\nCONCLUSIONS\nAsymmetric alterations make teeth more unattractive to not only dental professionals but also the lay public.", "title": "" }, { "docid": "881cd0e0807d28cddcf8e999913c872b", "text": "We examine the relationship between quality-based manufacturing strategy and the use of different types of performance measures, as well as their separate and joint effects on performance. A key part of our investigation is the distinction between financial and both objective and subjective nonfinancial measures. Our results support the view that performance measurement diversity benefits performance as we find that, regardless of strategy, firms with more extensive performance measurement systems—especially those that include objective and subjective nonfinancial measures—have higher performance. But our findings also partly support the view that the strategy-measurement ‘‘fit’’ affects performance. We find that firms that emphasize quality in manufacturing use more of both objective and subjective nonfinancial measures. However, there is only a positive effect on performance from pairing a qualitybased manufacturing strategy with extensive use of subjective measures, but not with objective nonfinancial measures. INTRODUCTION Performance measures play a key role in translating an organization’s strategy into desired behaviors and results (Campbell et al. 2004; Chenhall and Langfield-Smith 1998; Kaplan and Norton 2001; Lillis 2002). They also help to communicate expectations, monitor progress, provide feedback, and motivate employees through performancebased rewards (Banker et al. 2000; Chenhall 2003; Ittner and Larcker 1998b; Ittner et al. 1997; Ittner, Larcker, and Randall 2003). Traditionally, firms have primarily used financial measures for these purposes (Balkcom et al. 1997; Kaplan and Norton 1992). But with the ‘‘new’’ competitive realities of increased customization, flexibility, and responsiveness, and associated advances in manufacturing practices, both academics and practitioners have argued that traditional financial performance measures are no longer adequate for these functions (Dixon et al. 1990; Fisher 1992; Ittner and Larcker 1998a; Neely 1999). 
Indeed, many We acknowledge the helpful suggestions by Tom Groot, Jim Hesford, Ranjani Krishnan, Fred Lindahl, Helene Loning, Michal Matejka, Ken Merchant, Frank Moers, Mark Peecher, Mike Shields, Sally Widener, workshop participants at the University of Illinois, the 2002 AAA Management Accounting Meeting in Austin, the 2002 World Congress of Accounting Educators in Hong Kong, and the 2003 AAA Annual Meeting in Honolulu. An earlier version of this paper won the best paper award at the 9th World Congress of Accounting Educators in Hong Kong (2002). 186 Van der Stede, Chow, and Lin Behavioral Research in Accounting, 2006 accounting researchers have identified the continued reliance on traditional management accounting systems as a major reason why many new manufacturing initiatives perform poorly (Banker et al. 1993; Ittner and Larcker 1995). In light of this development in theory and practice, the current study seeks to advance understanding of the role that performance measurement plays in executing strategy and enhancing organizational performance. It proposes and empirically tests three hypotheses about the performance effects of performance measurement diversity; the relation between quality-based manufacturing strategy and firms’ use of different types of performance measures; and the joint effects of strategy and performance measurement on organizational performance. The distinction between objective and subjective performance measures is a pivotal part of our investigation. Prior empirical research has typically only differentiated between financial and nonfinancial performance measures. We go beyond this dichotomy to further distinguish between nonfinancial measures that are quantitative and objectively derived (e.g., defect rates), and those that are qualitative and subjectively determined (e.g., an assessment of the degree of cooperation or knowledge sharing across departmental borders). Making this finer distinction between types of nonfinancial performance measures contributes to recent work in accounting that has begun to focus on the use of subjectivity in performance measurement, evaluation, and incentives (e.g., Bushman et al. 1996; Gibbs et al. 2004; Ittner, Larcker, and Meyer 2003; MacLeod and Parent 1999; Moers 2005; Murphy and Oyer 2004). Using survey data from 128 manufacturing firms, we find that firms with more extensive performance measurement systems, especially ones that include objective and subjective nonfinancial measures, have higher performance. This result holds regardless of the firm’s manufacturing strategy. As such, our finding supports the view that performance measurement diversity, per se, is beneficial. But we also find evidence that firms adjust their use of performance measures to strategy. Firms that emphasize quality in manufacturing tend to use more of both objective and subjective nonfinancial measures, but without reducing the number of financial measures. Interestingly, however, combining quality-based strategies with extensive use of objective nonfinancial measures is not associated with higher performance. This set of results is consistent with Ittner and Larcker (1995) who found that quality programs are associated with greater use of nontraditional (i.e., nonfinancial) measures and reward systems, but combining nontraditional measures with extensive quality programs does not improve performance. 
However, by differentiating between objective and subjective nonfinancial measures—thereby going beyond Ittner and Larcker (1995) and much of the extant accounting literature—we find that performance is higher when the performance measures used in conjunction with a quality-based manufacturing strategy are of the subjective type. Finally, we find that among firms with similar quality-based strategies, those with less extensive performance measurement systems have lower performance, whereas those with more extensive performance measurement systems do not. In the case of subjective performance measures, firms that use them more extensively than firms with similar qualitybased strategies actually have significantly higher performance. Thus, a ‘‘mismatch’’ between performance measurement and strategy is associated with lower performance only when firms use fewer measures than firms with similar quality-based strategies, but not when they use more. The paper proceeds as follows. The next section builds on the extant literature to formulate three hypotheses. The third section discusses the method, sample, and measures. Strategy, Choice of Performance Measures, and Performance 187 Behavioral Research in Accounting, 2006 The fourth section presents the results. The fifth section provides a summary, discusses the study’s limitations, and suggests possible directions for future research. HYPOTHESES Although there is widespread agreement on the need to expand performance measurement, two different views exist on the nature of the desirable change (Ittner, Larcker, and Randall 2003; Ruddle and Feeny 2000). In this section, we engage the relevant literatures to develop three hypotheses. Collectively, the hypotheses provide the basis for comparing the two prevailing schools of thought on how performance measurement should be improved; that of performance measurement diversity regardless of strategy versus that of performance measurement alignment with strategy (Ittner, Larcker, and Randall 2003). The Performance Measurement Diversity View A number of authors have argued that broadening the set of performance measures, per se, enhances organizational performance (e.g., Edvinsson and Malone 1997; Lingle and Schiemann 1996). The premise is that managers have an incentive to concentrate on those activities for which their performance is measured, often at the expense of other relevant but non-measured activities (Hopwood 1974), and greater measurement diversity can reduce such dysfunctional effects (Lillis 2002). Support for this view is available from economicsbased agency studies. Datar et al. (2001), Feltham and Xie (1994), Hemmer (1996), Holmstrom (1979), and Lambert (2001), for example, have demonstrated that in the absence of measurement costs, introducing incentives based on nonfinancial measures can improve contracting by incorporating information on managerial actions that are not fully captured by financial measures. Analytical studies have further identified potential benefits from using performance measures that are subjectively derived. For example, Baiman and Rajan (1995) and Baker et al. (1994) have shown that subjective measures can help to mitigate distortions in managerial effort by ‘‘backing out’’ dysfunctional behavior induced by incomplete objective performance measures, as well as reduce noise in the overall performance evaluation. However, the literature also has noted potential drawbacks from measurement diversity. 
It increases system complexity, thus taxing managers’ cognitive abilities (Ghosh and Lusch 2000; Lipe and Salterio 2000, 2002). It also increases the burden of determining relative weights for different measures (Ittner and Larcker 1998a; Moers 2005). Finally, multiple measures are also potentially conflicting (e.g., manufacturing efficiency and customer responsiveness), leading to incongruence of goals, at least in the short run (Baker 1992; Holmstrom and Milgrom 1991), and organizational friction (Lillis 2002). Despite these potential drawbacks, there is considerable empirical support for increased measurement diversity. For example, in a study of time-series data in 18 hotels, Banker et al. (2000) found that when nonfinancial measures are included in the compensation contract, managers more closely aligned their efforts to those measures, resulting in increased performance. Hoque and James (2000) and Scott and Tiessen (1999) also have found positive relations between firm performance and increased use of different types of performance measures (e.g., financial and nonfinancial). These resul", "title": "" }, { "docid": "c51acd24cb864b050432a055fef2de9a", "text": "Electric motor and power electronics-based inverter are the major components in industrial and automotive electric drives. In this paper, we present a model-based fault diagnostics system developed using a machine learning technology for detecting and locating multiple classes of faults in an electric drive. Power electronics inverter can be considered to be the weakest link in such a system from hardware failure point of view; hence, this work is focused on detecting faults and finding which switches in the inverter cause the faults. A simulation model has been developed based on the theoretical foundations of electric drives to simulate the normal condition, all single-switch and post-short-circuit faults. A machine learning algorithm has been developed to automatically select a set of representative operating points in the (torque, speed) domain, which in turn is sent to the simulated electric drive model to generate signals for the training of a diagnostic neural network, fault diagnostic neural network (FDNN). We validated the capability of the FDNN on data generated by an experimental bench setup. Our research demonstrates that with a robust machine learning approach, a diagnostic system can be trained based on a simulated electric drive model, which can lead to a correct classification of faults over a wide operating domain.", "title": "" }, { "docid": "9f2db5cf1ee0cfd0250e68bdbc78b434", "text": "A novel transverse equivalent network is developed in this letter to efficiently analyze a recently proposed leaky-wave antenna in substrate integrated waveguide (SIW) technology. For this purpose, precise modeling of the SIW posts for any distance between vias is essential to obtain accurate results. A detailed parametric study is performed resulting in leaky-mode dispersion curves as a function of the main geometrical dimensions of the antenna. Finally, design curves that directly provide the requested dimensions to synthesize the desired scanning response and leakage rate are reported and validated with experiments.", "title": "" } ]
scidocsrr
a80797672b972684fb4e5f3fc8faa8d8
STAR-Vote: A Secure, Transparent, Auditable, and Reliable Voting System
[ { "docid": "dbcae5be70fef927ccac30876b0a8bcf", "text": "Many operating system services require special privilege to execute their tasks. A programming error in a privileged service opens the door to system compromise in the form of unauthorized acquisition of privileges. In the worst case, a remote attacker may obtain superuser privileges. In this paper, we discuss the methodology and design of privilege separation, a generic approach that lets parts of an application run with different levels of privilege. Programming errors occurring in the unprivileged parts can no longer be abused to gain unauthorized privileges. Privilege separation is orthogonal to capability systems or application confinement and enhances the security of such systems even further. Privilege separation is especially useful for system services that authenticate users. These services execute privileged operations depending on internal state not known to an application confinement mechanism. As a concrete example, the concept of privilege separation has been implemented in OpenSSH. However, privilege separation is equally useful for other authenticating services. We illustrate how separation of privileges reduces the amount of OpenSSH code that is executed with special privilege. Privilege separation prevents known security vulnerabilities in prior OpenSSH versions including some that were unknown at the time of its implementation.", "title": "" } ]
[ { "docid": "5aa14d0c93eded7085fe637bffa155f2", "text": "In the human genome, 98% of DNA sequences are non-protein-coding regions that were previously disregarded as junk DNA. In fact, non-coding regions host a variety of cis-regulatory regions which precisely control the expression of genes. Thus, Identifying active cis-regulatory regions in the human genome is critical for understanding gene regulation and assessing the impact of genetic variation on phenotype. The developments of high-throughput sequencing and machine learning technologies make it possible to predict cis-regulatory regions genome wide. Based on rich data resources such as the Encyclopedia of DNA Elements (ENCODE) and the Functional Annotation of the Mammalian Genome (FANTOM) projects, we introduce DECRES based on supervised deep learning approaches for the identification of enhancer and promoter regions in the human genome. Due to their ability to discover patterns in large and complex data, the introduction of deep learning methods enables a significant advance in our knowledge of the genomic locations of cis-regulatory regions. Using models for well-characterized cell lines, we identify key experimental features that contribute to the predictive performance. Applying DECRES, we delineate locations of 300,000 candidate enhancers genome wide (6.8% of the genome, of which 40,000 are supported by bidirectional transcription data), and 26,000 candidate promoters (0.6% of the genome). The predicted annotations of cis-regulatory regions will provide broad utility for genome interpretation from functional genomics to clinical applications. The DECRES model demonstrates potentials of deep learning technologies when combined with high-throughput sequencing data, and inspires the development of other advanced neural network models for further improvement of genome annotations.", "title": "" }, { "docid": "00ff2d5e2ca1d913cbed769fe59793d4", "text": "In recent work, we showed that putatively adaptive emotion regulation strategies, such as reappraisal and acceptance, have a weaker association with psychopathology than putatively maladaptive strategies, such as rumination, suppression, and avoidance (e.g., Aldao & Nolen-Hoeksema, 2010; Aldao, Nolen-Hoeksema, & Schweizer, 2010). In this investigation, we examined the interaction between adaptive and maladaptive emotion regulation strategies in the prediction of psychopathology symptoms (depression, anxiety, and alcohol problems) concurrently and prospectively. We assessed trait emotion regulation and psychopathology symptoms in a sample of community residents at Time 1 (N = 1,317) and then reassessed psychopathology at Time 2 (N = 1,132). Cross-sectionally, we found that the relationship between adaptive strategies and psychopathology symptoms was moderated by levels of maladaptive strategies: adaptive strategies had a negative association with psychopathology symptoms only at high levels of maladaptive strategies. In contrast, adaptive strategies showed no prospective relationship to psychopathology symptoms either alone or in interaction with maladaptive strategies. 
We discuss the implications of this investigation for future work on the contextual factors surrounding the deployment of emotion regulation strategies.", "title": "" }, { "docid": "6558b2a3c43e11d58f3bb829425d6a8d", "text": "While end-to-end neural conversation models have led to promising advances in reducing hand-crafted features and errors induced by the traditional complex system architecture, they typically require an enormous amount of data due to the lack of modularity. Previous studies adopted a hybrid approach with knowledge-based components either to abstract out domain-specific information or to augment data to cover more diverse patterns. On the contrary, we propose to directly address the problem using recent developments in the space of continual learning for neural models. Specifically, we adopt a domain-independent neural conversational model and introduce a novel neural continual learning algorithm that allows a conversational agent to accumulate skills across different tasks in a data-efficient way. To the best of our knowledge, this is the first work that applies continual learning to conversation systems. We verified the efficacy of our method through a conversational skill transfer from either synthetic dialogs or human-human dialogs to human-computer conversations in a customer support domain.", "title": "" }, { "docid": "d1444f26cee6036f1c2df67a23d753be", "text": "Text mining has become an emerging research area that helps to extract useful information from large amounts of natural language text documents. The need to group similar documents together for different applications has gained the attention of researchers in this area. Document clustering organizes the documents into different groups called clusters. The documents in one cluster have a higher degree of similarity than the documents in other clusters. The paper provides an overview of document clustering as reviewed from different papers and the challenges in document clustering. Keywords: Text Mining, Document Clustering, Similarity Measures, Challenges in Document Clustering", "title": "" }, { "docid": "8785e91dede4c48cf1543bbcd2374b6d", "text": "We propose TrustSVD, a trust-based matrix factorization technique for recommendations. TrustSVD integrates multiple information sources into the recommendation model in order to reduce the data sparsity and cold start problems and their degradation of recommendation performance. An analysis of social trust data from four real-world data sets suggests that not only the explicit but also the implicit influence of both ratings and trust should be taken into consideration in a recommendation model. TrustSVD therefore builds on top of a state-of-the-art recommendation algorithm, SVD++ (which uses the explicit and implicit influence of rated items), by further incorporating both the explicit and implicit influence of trusted and trusting users on the prediction of items for an active user. The proposed technique is the first to extend SVD++ with social trust information. Experimental results on the four data sets demonstrate that TrustSVD achieves better accuracy than ten other counterpart recommendation techniques.", "title": "" }, { "docid": "b1a508ecaa6fef0583b430fc0074af74", "text": "The recent past has seen a lot of developments in the field of image-based dietary assessment. Food image classification and recognition are crucial steps for dietary assessment.
In the last couple of years, advancements in deep learning and convolutional neural networks have proved to be a boon for image classification and recognition tasks, specifically for food recognition because of the wide variety of food items. In this paper, we report experiments on food/non-food classification and food recognition using a GoogLeNet model based on a deep convolutional neural network. The experiments were conducted on two image datasets that we created, where the images were collected from existing image datasets, social media, and imaging devices such as smartphones and wearable cameras. Experimental results show a high accuracy of 99.2% on the food/non-food classification and 83.6% on the food category recognition.", "title": "" }, { "docid": "4e2c466fac826f5e32a51f09355d7585", "text": "Congested networks involve complex traffic dynamics that can be accurately captured with detailed simulation models. However, when performing optimization of such networks the use of simulators is limited due to their stochastic nature and their relatively high evaluation cost. This has led to the use of general-purpose analytical metamodels, which are cheaper to evaluate and easier to integrate within a classical optimization framework, but do not capture the specificities of the underlying congested conditions. In this paper, we argue that to perform efficient optimization for congested networks it is important to develop analytical surrogates specifically tailored to the context at hand so that they capture the key components of congestion (e.g. its sources, its propagation, its impact) while achieving a good tradeoff between realism and tractability. To demonstrate this, we present a surrogate that provides a detailed description of congestion by capturing the main interactions between the different network components while remaining analytically tractable. In particular, we consider the optimization of vehicle traffic in an urban road network. The proposed surrogate model is an approximate queueing network model that resorts to finite capacity queueing theory to account for congested conditions. Existing analytic queueing models for urban networks are formulated for a single intersection, and thus do not take into account the interactions between queues. The proposed model considers a set of intersections and analytically captures these interactions. We show that this level of detail is sufficient for optimization in the context of signal control for peak hour traffic. Although there is a great variety of signal control methodologies in the literature, there is still a need for solutions that are appropriate and efficient under saturated conditions, where the performance of signal control strategies and the formation and propagation of queues are strongly related. We formulate a fixed-time signal control problem where the network model is included as a set of constraints. We apply this methodology to a subnetwork of the Lausanne city center and use a microscopic traffic simulator to validate its performance. We also compare it with several other methods. As congestion increases, the new method leads to improved average performance measures. The results highlight the importance of taking the interaction between consecutive roads into account when deriving signal plans for congested urban road networks.", "title": "" }, { "docid": "a4fe9b813b64d887ba02f02a4fa71f5b", "text": "In this paper, we focus on three problems in deep learning based medical image segmentation.
Firstly, U-net, as a popular model for medical image segmentation, is difficult to train when convolutional layers increase even though a deeper network usually has a better generalization ability because of more learnable parameters. Secondly, the exponential ReLU (ELU), as an alternative of ReLU, is not much different from ReLU when the network of interest gets deep. Thirdly, the Dice loss, as one of the pervasive loss functions for medical image segmentation, is not effective when the prediction is close to ground truth and will cause oscillation during training. To address the aforementioned three problems, we propose and validate a deeper network that can fit medical image datasets that are usually small in the sample size. Meanwhile, we propose a new loss function to accelerate the learning process and a combination of different activation functions to improve the network performance. Our experimental results suggest that our network is comparable or superior to state-of-the-art methods.", "title": "" }, { "docid": "abe5bdf6a17cf05b49ac578347a3ca5d", "text": "To realize the broad vision of pervasive computing, underpinned by the “Internet of Things” (IoT), it is essential to break down application and technology-based silos and support broad connectivity and data sharing; the cloud being a natural enabler. Work in IoT tends toward the subsystem, often focusing on particular technical concerns or application domains, before offloading data to the cloud. As such, there has been little regard given to the security, privacy, and personal safety risks that arise beyond these subsystems; i.e., from the wide-scale, cross-platform openness that cloud services bring to IoT. In this paper, we focus on security considerations for IoT from the perspectives of cloud tenants, end-users, and cloud providers, in the context of wide-scale IoT proliferation, working across the range of IoT technologies (be they things or entire IoT subsystems). Our contribution is to analyze the current state of cloud-supported IoT to make explicit the security considerations that require further work.", "title": "" }, { "docid": "e18351a3a27f6a909a22add693173f4e", "text": "Extension architectures of popular web browsers have been carefully studied by the research community; however, the security impact of interactions between different extensions installed on a given system has received comparatively little attention. In this paper, we consider the impact of the lack of isolation between traditional Firefox browser extensions, and identify a novel extension-reuse vulnerability that allows adversaries to launch stealthy attacks against users. This attack leverages capability leaks from legitimate extensions to avoid the inclusion of security-sensitive API calls within the malicious extension itself, rendering extensions that use this technique difficult to detect through the manual vetting process that underpins the security of the Firefox extension ecosystem. We then present CROSSFIRE, a lightweight static analyzer to detect instances of extension-reuse vulnerabilities. CROSSFIRE uses a multi-stage static analysis to efficiently identify potential capability leaks in vulnerable, benign extensions. If a suspected vulnerability is identified, CROSSFIRE then produces a proof-ofconcept exploit instance – or, alternatively, an exploit template that can be adapted to rapidly craft a working attack that validates the vulnerability. 
To ascertain the prevalence of extension-reuse vulnerabilities, we performed a detailed analysis of the top 10 Firefox extensions, and ran further experiments on a random sample drawn from the top 2,000. The results indicate that popular extensions, downloaded by millions of users, contain numerous exploitable extension-reuse vulnerabilities. A case study also provides anecdotal evidence that malicious extensions exploiting extension-reuse vulnerabilities are indeed effective at cloaking themselves from extension vetters.", "title": "" }, { "docid": "b90ec3edc349a98c41d1106b3c6628ba", "text": "Conventional speech recognition system is constructed by unfolding the spectral-temporal input matrices into one-way vectors and using these vectors to estimate the affine parameters of neural network according to the vector-based error backpropagation algorithm. System performance is constrained because the contextual correlations in frequency and time horizons are disregarded and the spectral and temporal factors are excluded. This paper proposes a spectral-temporal factorized neural network (STFNN) to tackle this weakness. The spectral-temporal structure is preserved and factorized in hidden layers through two ways of factor matrices which are trained by using the factorized error backpropagation. Affine transformation in standard neural network is generalized to the spectro-temporal factorization in STFNN. The structural features or patterns are extracted and forwarded towards the softmax outputs. A deep neural factorization is built by cascading a number of factorization layers with fully-connected layers for speech recognition. An orthogonal constraint is imposed in factor matrices for redundancy reduction. Experimental results show the merit of integrating the factorized features in deep feedforward and recurrent neural networks for speech recognition.", "title": "" }, { "docid": "5c4f20fcde1cc7927d359fd2d79c2ba5", "text": "There are different interpretations of user experience that lead to different scopes of measure. The ISO definition suggests measures of user experience are similar to measures of satisfaction in usability. A survey at Nokia showed that user experience was interpreted in a similar way to usability, but with the addition of anticipation and hedonic responses. CHI 2009 SIG participants identified not just measurement methods, but methods that help understanding of how and why people use products. A distinction can be made between usability methods that have the objective of improving human performance, and user experience methods that have the objective of improving user satisfaction with achieving both pragmatic and hedonic goals. Sometimes the term “user experience” is used to refer to both approaches. DEFINITIONS OF USABILITY AND USER EXPERIENCE There has been a lot of recent debate about the scope of user experience, and how it should be defined [5]. The definition of user experience in ISO FDIS 9241-210 is: A person's perceptions and responses that result from the use and/or anticipated use of a product, system or service. This contrasts with the revised definition of usability in ISO FDIS 9241-210: Extent to which a system, product or service can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use. Both these definitions suggest that usability or user experience can be measured during or after use of a product, system or service. 
A person's “perceptions and responses” in the definition of user experience are similar to the concept of satisfaction in usability. From this perspective, measures of user experience can be encompassed within the 3-component model of usability [1], particularly when the experience is task-related. A weakness of both definitions is that they are not explicitly concerned with time. Just as the ISO 9241-11 definition of usability has nothing to say about learnability (where usability changes over time), so the ISO 9241-210 definition of user experience has nothing to say about the way user experience evolves from expectation, through actual interaction, to a total experience that includes reflection on the experience [7]. USER EXPERIENCE NEEDS IN DESIGN AND DEVELOPMENT Ketola and Roto [4] surveyed the needs for information on user experience in Nokia, asking senior staff: Which User Experience information (measurable data gained from our target users directly or indirectly), is useful for your organization? How? 21 needs were identified from 18 respondents who worked in Research, Development, Care, and Quality. Ketola and Roto categorised the responses in terms of the area measured: UX lifecycle, retention, use of functions, breakdowns, customer care, localization, device performance and new technology. In Table 1, the needs have been recategorized by type of measure. It is clear that most of the measures are common to conventional approaches to user centred design, but three measures are specific to user experience: • The impact of expected UX to purchase decisions • Continuous excitement • Why and when the user experiences frustration? USER EXPERIENCE EVALUATION METHODS At the CHI 2009 SIG: “User Experience Evaluation – Do You Know Which Method to Use?” [6] [8], participants were asked to describe user experience evaluation methods that they used. 36 methods were collected (including the example methods presented by the organizers). These have been categorised in Table 2 by the type of evaluation context, and the type of data collected. There was very little mention of using measures specific to user experience, particularly from industry participants. It seems that industry's interpretation of user experience evaluation methods is much broader, going beyond conventional evaluation to encompass methods that collect information that helps design for user experience. In that sense user experience evaluation seems to be interpreted as user centred design methods for achieving user experience. The differentiating factor from more traditional usability work is thus a wider end goal: not just achieving effectiveness, efficiency and satisfaction, but optimising the whole user experience from expectation through actual interaction to reflection on the experience. DIFFERENCES BETWEEN USABILITY AND USER EXPERIENCE Although there is no fundamental difference between measures of usability and measures of user experience at a particular point in time, the difference in emphasis between task performance and pleasure leads to different concerns during development. 
In the context of user centred design, typical usability concerns include: Measurement category Measurement type Measure Area measured Anticipation Pre-purchase Anticipated use The impact of expected UX to purchase decisions UX lifecycle Overall usability First use Effectiveness Success of taking the product into use UX lifecycle Product upgrade Effectiveness Success in transferring content from old device to the new device UX lifecycle Expectations vs. reality Satisfaction Has the device met your expectations? Retention Long term experience Satisfaction Are you satisfied with the product quality (after 3 months of use) Retention Hedonic Engagement Pleasure Continuous excitement Retention UX Obstacles Frustration Why and when the user experiences frustration? Breakdowns Detailed usability Use of device functions How used What functions are used, how often, why, how, when, where? Use of functions Malfunction Technical problems Amount of “reboots” and severe technical problems experienced. Breakdowns Usability problems Usability problems Top 10 usability problems experienced by the customers. Breakdowns Effect of localization Satisfaction with localisation How do users perceive content in their local language? Localization Latencies Satisfaction with device performance Perceived latencies in key tasks. Device performance Performance Satisfaction with device performance Perceived UX on device performance Device performance Perceived complexity Satisfaction with task complexity Actual and perceived complexity of task accomplishments. Device performance User differences Previous devices Previous user experience Which device you had previously? Retention Differences in user groups User differences How different user groups access features? Use of functions Reliability of product planning User differences Comparison of target users vs. actual buyers? Use of functions Support Customer experience in “touchpoints” Satisfaction with support How does customer think & feel about the interaction in the touch points? Customer care Accuracy of support information Consequences of poor support Does inaccurate support information result in product returns? How? Customer care Innovation feedback User wish list New user ideas & innovations triggered by new experiences New technologies Impact of use Change in user behaviour How the device affects user behaviour How are usage patterns changing when new technologies are introduced New technologies Table 1. Categorisation of usability measures reported in [4] 1. Designing for and evaluating overall effectiveness and efficiency. 2. Designing for and evaluating user comfort and satisfaction. 3. Designing to make the product easy to use, and evaluating the product in order to identify and fix usability problems. 4. When relevant, the temporal aspect leads to a concern for learnability. In the context of user centred design, typical user experience concerns include: 1. Understanding and designing the user’s experience with a product: the way in which people interact with a product over time: what they do and why. 2. Maximising the achievement of the hedonic goals of stimulation, identification and evocation and associated emotional responses. Sometimes the two sets of issues are contrasted as usability and user experience. But some organisations would include both under the common umbrella of user experience. 
Evaluation context Lab tests Lab study with mind maps Paper prototyping Field tests Product / Tool Comparison Competitive evaluation of prototypes in the wild Field observation Long term pilot study Longitudinal comparison Contextual Inquiry Observation/Post Interview Activity Experience Sampling Longitudinal Evaluation Ethnography Field observations Longitudinal Studies Evaluation of groups Evaluating collaborative user experiences, Instrumented product TRUE Tracking Realtime User Experience Domain specific Nintendi Wii Children OPOS Outdoor Play Observation Scheme This-or-that Approaches Evaluating UX jointly with usability Evaluation data User opinion/interview Lab study with mind maps Quick and dirty evaluation Audio narrative Retrospective interview Contextual Inquiry Focus groups evaluation Observation \\ Post Interview Activity Experience Sampling Sensual Evaluation Instrument Contextual Laddering Interview ESM User questionnaire Survey Questions Emocards Experience sampling triggered by events, SAM Magnitude Estimation TRUE Tracking Realtime User Experience Questionnaire (e.g. AttrakDiff) Human responses PURE preverbal user reaction evaluation Psycho-physiological measurements Expert evaluation Expert evaluation Heuristic matrix Perspective-Based Inspection Table2. User experience evaluation methods (CHI 2009 SIG) CONCLUSIONS The scope of user experience The concept of user experience both broadens: • The range of human responses that would be measured to include pleasure. • The circumstances in which they would be measured to include anticipated use and reflection on use. Equally importantly the goal to achieve improved user experience over the whole lifecycle of user involvement with the product leads to increased emphasis on use of methods that help understand what can be done to improve this experience through the whole lifecycle of user involvement. However, notably absent from any of the current surveys or initiative", "title": "" }, { "docid": "d5284538412222101f084fee2dc1acc4", "text": "The hand is an integral component of the human body, with an incredible spectrum of functionality. In addition to possessing gross and fine motor capabilities essential for physical survival, the hand is fundamental to social conventions, enabling greeting, grooming, artistic expression and syntactical communication. The loss of one or both hands is, thus, a devastating experience, requiring significant psychological support and physical rehabilitation. The majority of hand amputations occur in working-age males, most commonly as a result of work-related trauma or as casualties sustained during combat. For millennia, humans have used state-of-the-art technology to design clever devices to facilitate the reintegration of hand amputees into society. The present article provides a historical overview of the progress in replacing a missing hand, from early iron hands intended primarily for use in battle, to today's standard body-powered and myoelectric prostheses, to revolutionary advancements in the restoration of sensorimotor control with targeted reinnervation and hand transplantation.", "title": "" }, { "docid": "59aa4318fa39c1d6ec086af7041148b2", "text": "Two of the most important outcomes of learning analytics are predicting students’ learning and providing effective feedback. Learning Management Systems (LMS), which are widely used to support online and face-to-face learning, provide extensive research opportunities with detailed records of background data regarding users’ behaviors. 
The purpose of this study was to investigate the effects of undergraduate students’ LMS learning behaviors on their academic achievements. In line with this purpose, the participating students’ online learning behaviors in LMS were examined by using learning analytics for 14 weeks, and the relationship between students’ behaviors and their academic achievements was analyzed, followed by an analysis of their views about the influence of LMS on their academic achievement. The present study, in which quantitative and qualitative data were collected, was carried out with the explanatory mixed method. A total of 71 undergraduate students participated in the study. The results revealed that the students used LMSs as a support to face-to-face education more intensively on course days (at the beginning of the related lessons and at nights on course days) and that they activated the content elements the most. Lastly, almost all the students agreed that LMSs helped increase their academic achievement only when LMSs included such features as effectiveness, interaction, reinforcement, attractive design, social media support, and accessibility.", "title": "" }, { "docid": "804139352206af823bc8bae12789c416", "text": "In a two-tier heterogeneous network (HetNet) where femto access points (FAPs) with lower transmission power coexist with macro base stations (BSs) with higher transmission power, the FAPs may suffer significant performance degradation due to inter-tier interference. Introducing cognition into the FAPs through the spectrum sensing (or carrier sensing) capability helps them avoiding severe interference from the macro BSs and enhance their performance. In this paper, we use stochastic geometry to model and analyze performance of HetNets composed of macro BSs and cognitive FAPs in a multichannel environment. The proposed model explicitly accounts for the spatial distribution of the macro BSs, FAPs, and users in a Rayleigh fading environment. We quantify the performance gain in outage probability obtained by introducing cognition into the femto-tier, provide design guidelines, and show the existence of an optimal spectrum sensing threshold for the cognitive FAPs, which depends on the HetNet parameters. We also show that looking into the overall performance of the HetNets is quite misleading in the scenarios where the majority of users are served by the macro BSs. Therefore, the performance of femto-tier needs to be explicitly accounted for and optimized.", "title": "" }, { "docid": "dbab8fdd07b1180ba425badbd1616bb2", "text": "The proliferation of cyber-physical systems introduces the fourth stage of industrialization, commonly known as Industry 4.0. The vertical integration of various components inside a factory to implement a flexible and reconfigurable manufacturing system, i.e., smart factory, is one of the key features of Industry 4.0. In this paper, we present a smart factory framework that incorporates industrial network, cloud, and supervisory control terminals with smart shop-floor objects such as machines, conveyers, and products. Then, we provide a classification of the smart objects into various types of agents and define a coordinator in the cloud. The autonomous decision and distributed cooperation between agents lead to high flexibility. Moreover, this kind of self-organized system leverages the feedback and coordination by the central coordinator in order to achieve high efficiency. 
Thus, the smart factory is characterized by a self-organized multi-agent system assisted with big data based feedback and coordination. Based on this model, we propose an intelligent negotiation mechanism for agents to cooperate with each other. Furthermore, the study illustrates that complementary strategies can be designed to prevent deadlocks by improving the agents’ decision making and the coordinator’s behavior. The simulation results assess the effectiveness of the proposed negotiation mechanism and deadlock prevention strategies. © 2016 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "2172e78731ee63be5c15549e38c4babb", "text": "The design of a low-cost low-power ring oscillator-based truly random number generator (TRNG) macrocell, which is suitable to be integrated in smart cards, is presented. The oscillator sampling technique is exploited, and a tetrahedral oscillator with large jitter has been employed to realize the TRNG. Techniques to improve the statistical quality of the ring oscillatorbased TRNGs' bit sequences have been presented and verified by simulation and measurement. A postdigital processor is added to further enhance the randomness of the output bits. Fabricated in the HHNEC 0.13-μm standard CMOS process, the proposed TRNG has an area as low as 0.005 mm2. Powered by a single 1.8-V supply voltage, the TRNG has a power consumption of 40 μW. The bit rate of the TRNG after postprocessing is 100 kb/s. The proposed TRNG has been made into an IP and successfully applied in an SD card for encryption application. The proposed TRNG has passed the National Institute of Standards and Technology tests and Diehard tests.", "title": "" }, { "docid": "aecc5e00e4be529c76d6d629310c8b5c", "text": "For a user to perceive continuous interactive response time in a visualization tool, the rule of thumb is that it must process, deliver, and display rendered results for any given interaction in under 100 milliseconds. In many visualization systems, successive interactions trigger independent queries and caching of results. Consequently, computationally expensive queries like multidimensional clustering cannot keep up with rapid sequences of interactions, precluding visual benefits such as motion parallax. In this paper, we describe a heuristic prefetching technique to improve the interactive response time of KMeans clustering in dynamic query visualizations of multidimensional data. We address the tradeoff between high interaction and intense query computation by observing how related interactions on overlapping data subsets produce similar clustering results, and characterizing these similarities within a parameter space of interaction. We focus on the two-dimensional parameter space defined by the minimum and maximum values of a time range manipulated by dragging and stretching a one-dimensional filtering lens over a plot of time series data. Using calculation of nearest neighbors of interaction points in parameter space, we reuse partial query results from prior interaction sequences to calculate both an immediate best-effort clustering result and to schedule calculation of an exact result. The method adapts to user interaction patterns in the parameter space by reprioritizing the interaction neighbors of visited points in the parameter space. A performance study on Mesonet meteorological data demonstrates that the method is a significant improvement over the baseline scheme in which interaction triggers on-demand, exact-range clustering with LRU caching. 
We also present initial evidence that approximate, temporary clustering results are sufficiently accurate (compared to exact results) to convey useful cluster structure during rapid and protracted interaction.", "title": "" }, { "docid": "1d9b50bf7fa39c11cca4e864bbec5cf3", "text": "FPGA-based embedded soft vector processors can exceed the performance and energy-efficiency of embedded GPUs and DSPs for lightweight deep learning applications. For low complexity deep neural networks targeting resource constrained platforms, we develop optimized Caffe-compatible deep learning library routines that target a range of embedded accelerator-based systems between 4 -- 8 W power budgets such as the Xilinx Zedboard (with MXP soft vector processor), NVIDIA Jetson TK1 (GPU), InForce 6410 (DSP), TI EVM5432 (DSP) as well as the Adapteva Parallella board (custom multi-core with NoC). For MNIST (28×28 images) and CIFAR10 (32×32 images), the deep layer structure is amenable to MXP-enhanced FPGA mappings to deliver 1.4 -- 5× higher energy efficiency than all other platforms. Not surprisingly, embedded GPU works better for complex networks with large image resolutions.", "title": "" }, { "docid": "63af822cd877b95be976f990b048f90c", "text": "We propose a method for generating classifier ensembles based on feature extraction. To create the training data for a base classifier, the feature set is randomly split into K subsets (K is a parameter of the algorithm) and principal component analysis (PCA) is applied to each subset. All principal components are retained in order to preserve the variability information in the data. Thus, K axis rotations take place to form the new features for a base classifier. The idea of the rotation approach is to encourage simultaneously individual accuracy and diversity within the ensemble. Diversity is promoted through the feature extraction for each base classifier. Decision trees were chosen here because they are sensitive to rotation of the feature axes, hence the name \"forest\". Accuracy is sought by keeping all principal components and also using the whole data set to train each base classifier. Using WEKA, we examined the rotation forest ensemble on a random selection of 33 benchmark data sets from the UCI repository and compared it with bagging, AdaBoost, and random forest. The results were favorable to rotation forest and prompted an investigation into diversity-accuracy landscape of the ensemble models. Diversity-error diagrams revealed that rotation forest ensembles construct individual classifiers which are more accurate than these in AdaBoost and random forest, and more diverse than these in bagging, sometimes more accurate as well", "title": "" } ]
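The classifier-ensemble passage above (docid 63af822c...) describes the Rotation Forest construction: split the feature set into K random subsets, apply PCA to each subset while retaining all principal components, and train a decision tree on the rotated features. The sketch below is a minimal illustration of that rotation step, not the authors' implementation: it assumes NumPy and scikit-learn, omits the bootstrap and class-subsampling refinements of the full method, and the iris data, K=2, and tree settings are illustrative placeholders.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.tree import DecisionTreeClassifier
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def rotation_matrix(X, K, rng):
    """Split the feature indices into K random subsets, run PCA on each
    subset (keeping all components), and assemble a block-diagonal
    rotation matrix for the full feature space."""
    n_features = X.shape[1]
    subsets = np.array_split(rng.permutation(n_features), K)
    R = np.zeros((n_features, n_features))
    for cols in subsets:
        pca = PCA(n_components=len(cols)).fit(X[:, cols])
        # place this subset's principal axes into the full-size matrix
        R[np.ix_(cols, cols)] = pca.components_.T
    return R

def fit_rotation_forest(X, y, n_trees=10, K=2, rng=rng):
    ensemble = []
    for _ in range(n_trees):
        R = rotation_matrix(X, K, rng)          # a fresh rotation per tree
        tree = DecisionTreeClassifier(random_state=0).fit(X @ R, y)
        ensemble.append((R, tree))
    return ensemble

def predict(ensemble, X):
    # majority vote over the rotated-feature trees
    votes = np.stack([tree.predict(X @ R) for R, tree in ensemble])
    return np.apply_along_axis(lambda v: np.bincount(v.astype(int)).argmax(), 0, votes)

X, y = load_iris(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
forest = fit_rotation_forest(Xtr, ytr)
print("accuracy:", (predict(forest, Xte) == yte).mean())
```

Keeping every principal component (rather than truncating) is what preserves the variability information noted in the abstract, while the random feature splits are what diversify the individual trees.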
scidocsrr
3408aae7df8fa4f6530680eae794daa2
On the Semantics and Pragmatics of Linguistic Feedback
[ { "docid": "c85d1b4193f016da93e555bb4227d7cd", "text": "ground in on orderly way. To do this, we argue, they try to establish for each utterance the mutual belief that the addressees hove understood what the speaker meant well enough for current purposes. This is accomplished by the collective actions of the current contributor and his or her partners, and these result in units of conversation called contributions. We present a model of contributions and show how it accounts for o variety of features of everyday conversations.", "title": "" } ]
[ { "docid": "5ca765f0ddc5b22ddd88cb41f5c2fde4", "text": "The development of self-adaptive software requires the engineering of an adaptation engine that controls the underlying adaptable software by feedback loops. The engine often describes the adaptation by runtime models representing the adaptable software and by activities such as analysis and planning that use these models. To systematically address the interplay between runtime models and adaptation activities, runtime megamodels have been proposed. A runtime megamodel is a specific model capturing runtime models and adaptation activities. In this article, we go one step further and present an executable modeling language for ExecUtable RuntimE MegAmodels (EUREMA) that eases the development of adaptation engines by following a model-driven engineering approach. We provide a domain-specific modeling language and a runtime interpreter for adaptation engines, in particular feedback loops. Megamodels are kept alive at runtime and by interpreting them, they are directly executed to run feedback loops. Additionally, they can be dynamically adjusted to adapt feedback loops. Thus, EUREMA supports development by making feedback loops explicit at a higher level of abstraction and it enables solutions where multiple feedback loops interact or operate on top of each other and self-adaptation co-exists with offline adaptation for evolution.", "title": "" }, { "docid": "ccce778a661b2f4a1689da1ac190b2a6", "text": "Neural Networks sequentially build high-level features through their successive layers. We propose here a new neural network model where each layer is associated with a set of candidate mappings. When an input is processed, at each layer, one mapping among these candidates is selected according to a sequential decision process. The resulting model is structured according to a DAG like architecture, so that a path from the root to a leaf node defines a sequence of transformations. Instead of considering global transformations, like in classical multilayer networks, this model allows us for learning a set of local transformations. It is thus able to process data with different characteristics through specific sequences of such local transformations, increasing the expression power of this model w.r.t a classical multilayered network. The learning algorithm is inspired from policy gradient techniques coming from the reinforcement learning domain and is used here instead of the classical back-propagation based gradient descent techniques. Experiments on different datasets show the relevance of this approach.", "title": "" }, { "docid": "1ab9bfcb356b394a3e9441a75668bc07", "text": "User Generated Content (UGC) is a rapidly emerging growth engine of many Internet businesses and an important component of the new knowledge society. However, little research has been done on the mechanisms inherent to UGC. This research explores the relationships among the quality, value, and benefits of UGC. The main objective is to identify and evaluate the quality factors that affect UGC value, which ultimately influences the utility of UGC. We identify the three quality dimensions of UGC: content, design, and technology. We classify UGC value into three categories: functional value, emotional value, and social value. We attempt to characterize the mechanism underlying UGC value by evaluating the relationships between the quality and value of UGC and investigating what types of UGC value affect UGC utility. 
Our results show that all three factors of UGC quality are strongly associated with increases in the functional, emotional, and social values of UGC. Our findings also demonstrate that the functional and emotional values of UGC are critically important factors for UGC utility. Based on these findings, we discuss theoretical implications for future research and practical implications for UGC services.", "title": "" }, { "docid": "59a25ae61a22baa8e20ae1a5d88c4499", "text": "This paper tackles a major privacy threat in current location-based services where users have to report their exact locations to the database server in order to obtain their desired services. For example, a mobile user asking about her nearest restaurant has to report her exact location. With untrusted service providers, reporting private location information may lead to several privacy threats. In this paper, we present a peer-to-peer (P2P)spatial cloaking algorithm in which mobile and stationary users can entertain location-based services without revealing their exact location information. The main idea is that before requesting any location-based service, the mobile user will form a group from her peers via single-hop communication and/or multi-hop routing. Then,the spatial cloaked area is computed as the region that covers the entire group of peers. Two modes of operations are supported within the proposed P2P s patial cloaking algorithm, namely, the on-demand mode and the proactive mode. Experimental results show that the P2P spatial cloaking algorithm operated in the on-demand mode has lower communication cost and better quality of services than the proactive mode, but the on-demand incurs longer response time.", "title": "" }, { "docid": "87931f7371cbe68481c0bd01609855b9", "text": "We begin the mathematical study of Isogeometric Analysis based on NURBS (non-uniform rational B-splines.) Isogeometric Analysis is a generalization of classical Finite Element Analysis (FEA) which possesses improved properties. For example, NURBS are capable of more precise geometric representation of complex objects and, in particular, can exactly represent many commonly engineered shapes, such as cylinders, spheres and tori. Isogeometric Analysis also simplifies mesh refinement because the geometry is fixed at the coarsest level of refinement and is unchanged throughout the refinement process. This eliminates geometrical errors and the necessity of linking the refinement procedure to a CAD representation of the geometry, as in classical FEA. In this work we study approximation and stability properties in the context of h-refinement. We develop approximation estimates based on a new BrambleHilbert lemma in so-called “bent” Sobolev spaces appropriate for NURBS approximations and we establish inverse estimates similar to ones for finite elements. We apply the theoretical results to several cases of interest including elasticity, isotropic incompressible elasticity and Stokes flow, and advection-diffusion, and perform numerical tests which corroborate the mathematical results. We also perform numerical calculations that involve hypotheses outside our theory and these suggest that there are many other interesting mathematical properties of Isogeometric Analysis yet to be proved.", "title": "" }, { "docid": "f04d59966483bf7e4053a9d504278a82", "text": "Radio Frequency Identification (RFID) is a promising new technology that is widely deployed for object tracking and monitoring, ticketing, supply-chain management, contactless payment, etc. 
However, RFID related security problems attract more and more attentions. This paper has studied a novel elliptic curve cryptography (ECC) based RFID security protocol and it shows some great features. Firstly, the high strength of ECC encryption provides convincing security for communication and tag memory data access. Secondly, the public-key cryptography used in the protocol reduces the key storage requirement and the backend system just store the private key. Thirdly, the new protocol just depends on simple calculations, such as XOR, bitwise AND, and so forth, which reduce the tag computation. Finally, the computational performance, security features, and the formal proof based on BAN logic are also discussed in detail in the paper.", "title": "" }, { "docid": "f213bc5b5a16b381262aefe842babc59", "text": "Optogenetic methodology enables direct targeting of specific neural circuit elements for inhibition or excitation while spanning timescales from the acute (milliseconds) to the chronic (many days or more). Although the impact of this temporal versatility and cellular specificity has been greater for basic science than clinical research, it is natural to ask whether the dynamic patterns of neural circuit activity discovered to be causal in adaptive or maladaptive behaviors could become targets for treatment of neuropsychiatric diseases. Here, we consider the landscape of ideas related to therapeutic targeting of circuit dynamics. Specifically, we highlight optical, ultrasonic, and magnetic concepts for the targeted control of neural activity, preclinical/clinical discovery opportunities, and recently reported optogenetically guided clinical outcomes.", "title": "" }, { "docid": "e25c0621e876a9044ce2d4eb96cf8e63", "text": "PURPOSE\nThe significance of nitric oxide in the physiology of the penis was evaluated, including its role in pathophysiological mechanisms and pathological consequences involving this organ.\n\n\nMATERIALS AND METHODS\nAnimal and human studies pertaining to nitric oxide in the penis were reviewed and analyzed in the context of current descriptions of the molecular biology and physiological effects of this chemical.\n\n\nRESULTS\nPotential sources of nitric oxide in the penis include neurons, sinusoidal endothelium and corporeal smooth muscle cells. Nitric oxide is perceived to exert a host of functional roles by binding with specific molecular targets. Its synthesis and action in the penis are influenced by many different regulatory factors.\n\n\nCONCLUSIONS\nNitric oxide exerts a significant role in the physiology of the penis, operating chiefly as the principal mediator of erectile function. Alterations in the biology of nitric oxide likely account for various forms of erectile dysfunction. The diverse physiological roles of nitric oxide suggest that it may also directly contribute to or cause pathological consequences involving the penis.", "title": "" }, { "docid": "cd0d425c8315a22ed9e52b8bdd489b52", "text": "Data mining is an essential phase in knowledge discovery in database which is actually used to extract hidden patterns from large databases. Data mining concepts and methods can be applied in various fields like marketing, medicine, real estate, customer relationship management, engineering, web mining, etc. 
The main objective of this paper is to compare the performance accuracy of Multilayer perceptron (MLP) Artificial Neural Network and ID3 (Iterative Dichotomiser 3), C4.5 (also known as J48) Decision Trees algorithms Weka data mining software in predicting Typhoid fever. The data used is the patient’s dataset collected from a well known Nigerian Hospital. ID3, C4.5 Decision tree and MLP Artificial Neural Network WEKA Data mining software was used for the implementation. The data collected were transformed in a form that is acceptable to the data mining software and it was splitted into two sets: The training dataset and the testing dataset so that it can be imported into the system. The training set was used to enable the system to observe relationships between input data and the resulting outcomes in order to perform the prediction. The testing dataset contains data used to test the performance of the model. This model can be used by medical experts both in the private and public hospitals to make more timely and consistent diagnosis of typhoid fever cases which will reduce death rate in our country. The MLP ANN model exhibits good performance in the prediction of typhoid fever disease in general because of the low values generated in the Mean Absolute Error (MAE), Root Mean Squared Error (RMSE) and Relative Absolute Error (RAE) error performance measures. KeywordsID3, C4.5 , MLP, Decision Tree Artificial Neural Network, Typhoid fever African Journal of Computing & ICT Reference Format: O..O. Adeyemo, T. .O Adeyeye & D. Ogunbiyi (2015). Ccomparative Study of ID3/C4.5 Decision tree and Multilayer Perceptron Algorithms for the Prediction of Typhoid Fever. Afr J. of Comp & ICTs. Vol 8, No. 1. Pp 103-112.", "title": "" }, { "docid": "072b842bb999a348ac2b6aa4a44f5ff2", "text": "Eating disorders, such as anorexia nervosa are a major health concern affecting many young individuals. Given the extensive adoption of social media technologies in the anorexia affected demographic, we study behavioral characteristics of this population focusing on the social media Tumblr. Aligned with observations in prior literature, we find the presence of two prominent anorexia related communities on Tumblr -- pro-anorexia and pro-recovery. Empirical analyses on several thousand Tumblr posts show use of the site as a media-rich platform replete with triggering content for enacting anorexia as a lifestyle choice. Through use of common pro-anorexia tags, the pro-recovery community however attempts to \"permeate\" into the pro-anorexia community to educate them of the health risks of anorexia. Further, the communities exhibit distinctive affective, social, cognitive, and linguistic style markers. Compared with recover- ing anorexics, pro-anorexics express greater negative affect, higher cognitive impairment, and greater feelings of social isolation and self-harm. We also observe that these characteristics may be used in a predictive setting to detect anorexia content with 80% accuracy. Based on our findings, clinical implications of detecting anorexia related content on social media are discussed.", "title": "" }, { "docid": "567445f68597ea8ff5e89719772819be", "text": "We have developed an interactive pop-up book called Electronic Popables to explore paper-based computing. Our book integrates traditional pop-up mechanisms with thin, flexible, paper-based electronics and the result is an artifact that looks and functions much like an ordinary pop-up, but has added elements of dynamic interactivity. 
This paper introduces the book and, through it, a library of paper-based sensors and a suite of paper-electronics construction techniques. We also reflect on the unique and under-explored opportunities that arise from combining material experimentation, artistic design, and engineering.", "title": "" }, { "docid": "96895d7beb792e909ae5166ca3e65fae", "text": "O2 reduction in aprotic Na-O2 batteries results in the formation of NaO2, which can be oxidized at small overpotentials (<200 mV) on charge. In this study, we investigated the NaO2 oxidation mechanism using rotating ring disk electrode (RRDE) measurements of Na-O2 reaction products and by tracking the morphological evolution of the NaO2 discharge product at different states of charge using scanning electron microscopy (SEM). The results show that negligible soluble species are formed during NaO2 oxidation, and that the oxidation occurs predominantly via charge transfer at the interface between NaO2 and carbon electrode fibers rather than uniformly from all NaO2 surfaces. X-ray absorption near edge structure (XANES), and X-ray photoelectron spectroscopy (XPS) measurements show that the band gap of NaO2 is smaller than that of Li2O2 formed in Li-O2 batteries, in which charging overpotentials are much higher (∼1000 mV). These results emphasize the importance of discharge product electronic structure for rationalizing metal-air battery mechanisms and performance.", "title": "" }, { "docid": "1bb149552a2506d7305641e7e4300d3a", "text": "This paper presents the LineScout Technology, a mobile teleoperated robot for power line inspection and maintenance. Optimizing several geometric parameters achieved a compact design that was successfully tested over many line configurations and obstacle sequences. An overview of the technology is presented, including a description of the control strategy, followed by a section focusing on key aspects of the prototype thorough validation. Working on live lines, up to 735 kV and 1,000 A, means that the technology must be robust to electromagnetic interference. The third generation prototype, tested in laboratory and in field conditions, is now ready to undertake inspection pilot projects.", "title": "" }, { "docid": "ba2c4cd490998d5a89099c57bb3a0c8e", "text": "The number of cycles for each external memory access in Single Instruction Multiple Data (SIMD) processors is heavily affected by the access pattern, such as aligned, unaligned, or stride. We developed a high-performance dynamic on-chip memory-allocation method for SIMD processors by considering the memory access pattern as well as the access frequency. The access pattern and the access count for an array of a loop are determined by both code analysis and profiling, which are performed on a developed compiler framework. This framework not only conducts dynamic on-chip memory allocation but also generates optimized codes for a target processor. The proposed allocation method has been tested with several multimedia benchmarks including motion estimation, 2-D discrete cosine transform, and MPEG2 encoder programs.", "title": "" }, { "docid": "7b68202ea6727a8baa51df8fac643b4a", "text": "In this paper, a three-level inverter-fed induction motor drive operating under Direct Torque Control (DTC) is presented. A triangular wave is used as dither signal of minute amplitude (for torque hysteresis band and flux hysteresis band respectively) in the error block. 
This method minimizes flux and torque ripple in a three-level inverter fed induction motor drive while the dynamic performance is not affected. The optimal value of dither frequency and magnitude is found out under free running condition. The proposed technique reduces torque ripple by 60% (peak to peak) compared to the case without dither injection, results in low acoustic noise and increases the switching frequency of the inverter. A laboratory prototype of the drive system has been developed and the simulation and experimental results are reported.", "title": "" }, { "docid": "ddeb70a9abd07b113c8c7bfcf2f535b6", "text": "Implementation of authentic leadership can affect not only the nursing workforce and the profession but the healthcare delivery system and society as a whole. Creating a healthy work environment for nursing practice is crucial to maintain an adequate nursing workforce; the stressful nature of the profession often leads to burnout, disability, and high absenteeism and ultimately contributes to the escalating shortage of nurses. Leaders play a pivotal role in retention of nurses by shaping the healthcare practice environment to produce quality outcomes for staff nurses and patients. Few guidelines are available, however, for creating and sustaining the critical elements of a healthy work environment. In 2005, the American Association of Critical-Care Nurses released a landmark publication specifying 6 standards (skilled communication, true collaboration, effective decision making, appropriate staffing, meaningful recognition, and authentic leadership) necessary to establish and sustain healthy work environments in healthcare. Authentic leadership was described as the \"glue\" needed to hold together a healthy work environment. Now, the roles and relationships of authentic leaders in the healthy work environment are clarified as follows: An expanded definition of authentic leadership and its attributes (eg, genuineness, trustworthiness, reliability, compassion, and believability) is presented. Mechanisms by which authentic leaders can create healthy work environments for practice (eg, engaging employees in the work environment to promote positive behaviors) are described. A practical guide on how to become an authentic leader is advanced. A research agenda to advance the study of authentic leadership in nursing practice through collaboration between nursing and business is proposed.", "title": "" }, { "docid": "849f89d0007ec44c45257f07f08ba1d1", "text": "This paper presents Autobank, a prototype tool for constructing a widecoverage Minimalist Grammar (MG) (Stabler, 1997), and semi-automatically converting the Penn Treebank (PTB) into a deep Minimalist treebank. The front end of the tool is a graphical user interface which facilitates the rapid development of a seed set of MG trees via manual reannotation of PTB preterminals with MG lexical categories. The system then extracts various dependency mappings between the source and target trees, and uses these in concert with a non-statistical MG parser to automatically reannotate the rest of the corpus. 
Autobank thus enables deep treebank conversions (and subsequent modifications) without the need for complex transduction algorithms accompanied by cascades of ad hoc rules; instead, the locus of human effort falls directly on the task of grammar construction itself.", "title": "" }, { "docid": "73fefd128d5f454f52fd345814244bad", "text": "In this paper a spatial interpolation approach, based on polar-grid representation and Kriging predictor, is proposed for 3D point cloud sampling. Discrete grid representation is a widely used technique because of its simplicity and capacity of providing an efficient and compact representation, allowing subsequent applications such as artificial perception and autonomous navigation. Two-dimensional occupancy grid representations have been studied extensively in the past two decades, and recently 2.5D and 3D grid-based approaches dominate current applications. A key challenge in perception systems for vehicular applications is to balance low computational complexity and reliable data interpretation. To this end, this paper contributes with a discrete 2.5D polar-grid that upsamples the input data, ie sparse 3D point cloud, by means of a deformable kriging-based interpolation strategy. Experiments carried out on the KITTI dataset, using data from a LIDAR, demonstrate that the approach proposed in this work allows a proper representation of urban environments.", "title": "" }, { "docid": "57e5d801778711f2ab9a152f08ae53e8", "text": "A modular multilevel converter (MMC) is one of the next-generation multilevel PWM converters intended for high- or medium-voltage power conversion without transformers. The MMC consists of cascade connection of multiple bidirectional PWM chopper-cells and floating dc capacitors per leg, thus requiring voltage-balancing control of their chopper-cells. However, no paper has been discussed explicitly on voltage-balancing control with theoretical and experimental verifications. This paper deals with two types of modular multilevel PWM converters with focus on their circuit configurations and voltage-balancing control. Combination of averaging and balancing controls enables the MMCs to achieve voltage balancing without any external circuit. The viability of the MMCs as well as the effectiveness of the PWM control method is confirmed by simulation and experiment.", "title": "" }, { "docid": "5f68e7d03c48d842add703ce0492c453", "text": "This paper presents a summary of the available single-phase ac-dc topologies used for EV/PHEV, level-1 and -2 on-board charging and for providing reactive power support to the utility grid. It presents the design motives of single-phase on-board chargers in detail and makes a classification of the chargers based on their future vehicle-to-grid usage. The pros and cons of each different ac-dc topology are discussed to shed light on their suitability for reactive power support. This paper also presents and analyzes the differences between charging-only operation and capacitive reactive power operation that results in increased demand from the dc-link capacitor (more charge/discharge cycles and increased second harmonic ripple current). Moreover, battery state of charge is spared from losses during reactive power operation, but converter output power must be limited below its rated power rating to have the same stress on the dc-link capacitor.", "title": "" } ]
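The spatial-interpolation passage above (docid 73fefd12...) upsamples sparse LIDAR returns with a Kriging predictor on a 2.5D polar grid. The sketch below shows only the core ordinary-kriging prediction step, in one dimension and under a Gaussian covariance model; the polar-grid bookkeeping, the deformable interpolation strategy, and the KITTI-specific processing described in the abstract are not reproduced, and the sill/range parameters and sample profile are assumptions made only so the example runs.

```python
import numpy as np

def gaussian_cov(h, sill=1.0, rng_len=2.0):
    """Gaussian covariance model: high covariance for nearby samples,
    decaying smoothly with distance h."""
    return sill * np.exp(-(h / rng_len) ** 2)

def ordinary_kriging(x_obs, z_obs, x_new, sill=1.0, rng_len=2.0):
    """Predict z at each x_new as a weighted sum of the observations, with
    weights from the ordinary-kriging system (covariances plus a Lagrange
    multiplier enforcing that the weights sum to one)."""
    n = len(x_obs)
    C = gaussian_cov(np.abs(x_obs[:, None] - x_obs[None, :]), sill, rng_len)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = C + 1e-8 * np.eye(n)   # tiny nugget for numerical stability
    A[n, n] = 0.0
    preds = np.empty(len(x_new))
    for i, x0 in enumerate(x_new):
        b = np.ones(n + 1)
        b[:n] = gaussian_cov(np.abs(x_obs - x0), sill, rng_len)
        w = np.linalg.solve(A, b)[:n]
        preds[i] = w @ z_obs
    return preds

# upsample a sparse 1-D range profile onto a denser grid
x_sparse = np.array([0.0, 1.0, 2.5, 4.0, 6.0])
z_sparse = np.array([1.2, 1.0, 1.8, 2.4, 2.1])
x_dense = np.linspace(0.0, 6.0, 25)
print(ordinary_kriging(x_sparse, z_sparse, x_dense).round(2))
```

Each prediction is a weighted average of the observed samples, with weights solved from the covariance system so that nearby, highly correlated returns dominate; that is what allows the sparse point cloud to be densified without inventing structure far from the data.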
scidocsrr
b01adfd5184606c40c96f78aeac25dea
Adaptive sampling for sensor networks
[ { "docid": "08b150c83af511d9cb9cc3382e3cc7db", "text": "To answer user queries efficiently, a stream management system must handle continuous, high-volume, possibly noisy, and time-varying data streams. One major research area in stream management seeks to allocate resources (such as network bandwidth and memory) to query plans, either to minimize resource usage under a precision requirement, or to maximize precision of results under resource constraints. To date, many solutions have been proposed; however, most solutions are ad hoc with hard-coded heuristics to generate query plans. In contrast, we perceive stream resource management as fundamentally a filtering problem, in which the objective is to filter out as much data as possible to conserve resources, provided that the precision standards can be met. We select the Kalman Filter as a general and adaptive filtering solution for conserving resources. The Kalman Filter has the ability to adapt to various stream characteristics, sensor noise, and time variance. Furthermore, we realize a significant performance boost by switching from traditional methods of caching static data (which can soon become stale) to our method of caching dynamic procedures that can predict data reliably at the server without the clients' involvement. In this work we focus on minimization of communication overhead for both synthetic and real-world streams. Through examples and empirical studies, we demonstrate the flexibility and effectiveness of using the Kalman Filter as a solution for managing trade-offs between precision of results and resources in satisfying stream queries.", "title": "" }, { "docid": "e5241fb804825b9be212749af156eedc", "text": "Sensor networks are being widely deployed for measurement, detection and surveillance applications. In these new applications, users issue long-running queries over a combination of stored data and sensor data. Most existing applications rely on a centralized system for collecting sensor data. These systems lack flexibility because data is extracted in a predefined way; also, they do not scale to a large number of devices because large volumes of raw data are transferred regardless of the queries that are submitted. In our new concept of sensor database system, queries dictate which data is extracted from the sensors. In this paper, we define a model for sensor databases. Stored data are represented as relations while sensor data are represented as time series. Each long-running query formulated over a sensor database defines a persistent view, which is maintained during a given time interval. We also describe the design and implementation of the COUGAR sensor database system.", "title": "" } ]
[ { "docid": "07447829f6294660359219c2310968b6", "text": "Caudal duplication (dipygus) is an uncommon pathologic of conjoined twinning. The conjoined malformation is classified according to the nature and site of the union. We report the presence of this malformation in a female crossbreed puppy. The puppy was delivered by caesarean section following a prolonged period of dystocia. External findings showed a single head (monocephalus) and a normal cranium with no fissure in the medial line detected. The thorax displayed a caudal duplication arising from the lumbosacral region (rachipagus). The puppy had three upper limbs, a right and left, and a third limb in the dorsal region where the bifurcation began. The subsequent caudal duplication appeared symmetrical. Necropsy revealed internal abnormalities consisting of a complete duplication of the urogenital system and a duplication of the large intestines arising from a bifurcation of the caudal ileum . Considering the morphophysiological description the malformation described would be classified as the first case in the dog of a monocephalusrachipagustribrachius tetrapus.", "title": "" }, { "docid": "8a45f8c149970e917cddf99ec0954d40", "text": "In recent years, the threats and damages caused by active worms have become more and more serious. In order to reduce the loss caused by fastspreading active worms, an effective detection mechanism to quickly detect worms is desired. In this paper, we first explore various scan strategies used by worms on finding vulnerable hosts. We show that targeted worms spread much faster than random scan worms. We then present a generic worm detection architecture to monitor malicious worm activities. We propose and evaluate our detection mechanism called Victim Number Based Algorithm. We show that our detection algorithm is effective and able to detect worm events before 2% of vulnerable hosts are infected for most scenarios. Furthermore, in order to reduce false alarms, we propose an integrated approach using multiple parameters as indicators to detect worm events. The results suggest that our integrated approach can differentiate worm attacks from DDoS attacks and benign scans.", "title": "" }, { "docid": "23bac6bee39a0a68c5124befe18ca868", "text": "A key step in gene repression by Polycomb is trimethylation of histone H3 K27 by PCR2 to form H3K27me3. H3K27me3 provides a binding surface for PRC1. We show that monoubiquitination of histone H2A by PRC1-type complexes to form H2Aub creates a binding site for Jarid2–Aebp2–containing PRC2 and promotes H3K27 trimethylation on H2Aub nucleosomes. Jarid2, Aebp2 and H2Aub thus constitute components of a positive feedback loop establishing H3K27me3 chromatin domains.", "title": "" }, { "docid": "f1fcc04fdc1a8c45b0ef670328c3e98e", "text": "T digital divide has loomed as a public policy issue for over a decade. Yet, a theoretical account for the effects of the digital divide is currently lacking. This study examines three levels of the digital divide. The digital access divide (the first-level digital divide) is the inequality of access to information technology (IT) in homes and schools. The digital capability divide (the second-level digital divide) is the inequality of the capability to exploit IT arising from the first-level digital divide and other contextual factors. 
The digital outcome divide (the third-level digital divide) is the inequality of outcomes (e.g., learning and productivity) of exploiting IT arising from the second-level digital divide and other contextual factors. Drawing on social cognitive theory and computer self-efficacy literature, we developed a model to show how the digital access divide affects the digital capability divide and the digital outcome divide among students. The digital access divide focuses on computer ownership and usage in homes and schools. The digital capability divide and the digital outcome divide focus on computer self-efficacy and learning outcomes, respectively. This model was tested using data collected from over 4,000 students in Singapore. The results generate insights into the relationships among the three levels of the digital divide and provide a theoretical account for the effects of the digital divide. While school computing environments help to increase computer self-efficacy for all students, these factors do not eliminate knowledge the gap between students with and without home computers. Implications for theory and practice are discussed.", "title": "" }, { "docid": "497a8fb11abdec8cdfd32ffbfc0f1baa", "text": "A functional serotonin transporter promoter polymorphism, HTTLPR, alters the risk of disease as well as brain morphometry and function. Here, we show that HTTLPR is functionally triallelic. The L(G) allele, which is the L allele with a common G substitution, creates a functional AP2 transcription-factor binding site. Expression assays in 62 lymphoblastoid cell lines representing the six genotypes and in transfected raphe-derived cells showed codominant allele action and low, nearly equivalent expression for the S and L(G) alleles, accounting for more variation in HTT expression than previously recognized. The gain-of-function L(A)L(A) genotype was approximately twice as common in 169 whites with obsessive-compulsive disorder (OCD) than in 253 ethnically matched controls. We performed a replication study in 175 trios consisting of probands with OCD and their parents. The L(A) allele was twofold overtransmitted to the patients with OCD. The HTTLPR L(A)L(A) genotype exerts a moderate (1.8-fold) effect on risk of OCD, which crystallizes the evidence that the HTT gene has a role in OCD.", "title": "" }, { "docid": "4dca30abbc390ef2bec26861dbe244e3", "text": "In 1997, the National Institute of Standards and Technology (NIST) initiated a process to select a symmetric-key encryption algorithm to be used to protect sensitive (unclassified) Federal information in furtherance of NIST's statutory responsibilities. In 1998, NIST announced the acceptance of 15 candidate algorithms and requested the assistance of the cryptographic research community in analyzing the candidates. This analysis included an initial examination of the security and efficiency characteristics for each algorithm. NIST reviewed the results of this preliminary research and selected MARS, RC™, Rijndael, Serpent and Twofish as finalists. Having reviewed further public analysis of the finalists, NIST has decided to propose Rijndael as the Advanced Encryption Standard (AES). 
The research results and rationale for this selection are documented in this report.", "title": "" }, { "docid": "cacb6737f3c0aee30c6f04cb2ecfc4ce", "text": "OBJECTIVES\nTo present our novel technique and step-by-step approach to bipolar diathermy circumcision and related procedures in adult males.\n\n\nMETHODS\nWe reviewed our technique of bipolar circumcision and related procedures in 54 cases over a 22-month period at our day procedure center. Bipolar diathermy cutting and hemostasis was performed using bipolar forceps with a Valleylab machine set at 15. Sleeve circumcision was used. A dorsal slit was made, followed by frenulum release and ventral slit, and was completed with bilateral circumferential cutting. Frenuloplasties released the frenulum. Preputioplasties used multiple 2-3 mm longitudinal cuts to release the constriction, with frenulum left intact. All wounds were closed with interrupted 4/0 Vicryl Rapide™.\n\n\nRESULTS\nA total of 54 nonemergency bipolar circumcision procedures were carried out from November 2010-August 2012 (42 circumcisions, eight frenuloplasties, and four preputioplasties). Patients were aged 18-72 years (mean, 34 years). There was minimal to no intraoperative bleeding in all cases, allowing for precise dissection. All patients were requested to attend outpatient reviews; three frenuloplasty and two circumcision patients failed to return. Of the remaining 49, mean interval to review was 49 days, with a range of 9-121 days. Two circumcision patients reported mild bleeding with nocturnal erections within a week postoperatively, but they did not require medical attention. Two others presented to family practitioners with possible wound infections which resolved with oral antibiotics. All 49 patients had well-healed wounds.\n\n\nCONCLUSION\nThe bipolar diathermy technique is a simple procedure, easily taught, and reproducible. It is associated with minimal bleeding, is safe and efficient, uses routine operating equipment and is universally applicable to circumcision/frenuloplasty/preputioplasty. In addition, it has minimal postoperative complications, and has associated excellent cosmesis.", "title": "" }, { "docid": "5dc629bb3c9fca0d8082dd5736aabcd7", "text": "This paper presents a first attempt towards finding an abstractive compression generation system for a set of related sentences which jointly models sentence fusion and paraphrasing using continuous vector representations. Our paraphrastic fusion system improves the informativity and the grammaticality of the generated sentences. Our system can be applied to various real world applications such as text simplification, microblog, opinion and newswire summarization. We conduct our experiments on human generated multi-sentence compression datasets and evaluate our system on several newly proposed Machine Translation (MT) evaluation metrics. Our experiments demonstrate that our method brings significant improvements over the state of the art systems across different metrics.", "title": "" }, { "docid": "e509d0aa776dcb649349ec3d49a347f1", "text": "Fibrous dysplasia (FD) is a benign fibro-osseous bone disease of unknown etiology and uncertain pathogenesis. When bone maturation is completed, indicating the occurence of stabilization is a strong evidence of mechanism. The lesion frequently affects the craniofacial skeleton. The maxilla is affected twice comparing mandible and occurs more frequently in the posterior area. 
In this case, a 16 year-old female patient is presented who was diagnosed as having maxillofacial fibrous dysplasia.", "title": "" }, { "docid": "92b4a18334345b55aae40b99adcc3840", "text": "Online social networks (OSNs) are becoming increasingly popular and Identity Clone Attacks (ICAs) that aim at creating fake identities for malicious purposes on OSNs are becoming a significantly growing concern. Such attacks severely affect the trust relationships a victim has built with other users if no active protection is applied. In this paper, we first analyze and characterize the behaviors of ICAs. Then we propose a detection framework that is focused on discovering suspicious identities and then validating them. Towards detecting suspicious identities, we propose two approaches based on attribute similarity and similarity of friend networks. The first approach addresses a simpler scenario where mutual friends in friend networks are considered; and the second one captures the scenario where similar friend identities are involved. We also present experimental results to demonstrate flexibility and effectiveness of the proposed approaches. Finally, we discuss some feasible solutions to validate suspicious identities.", "title": "" }, { "docid": "952180cae4947149740530353c65f9c9", "text": "Many cities in the global South are facing the emergence and growth of highly dynamic slum areas, but often lack detailed information on these developments. Available statistical data are commonly aggregated to large, heterogeneous administrative units that are geographically meaningless for informing effective pro-poor policies. General base information neither allows spatially disaggregated analysis of deprived areas nor monitoring of rapidly changing settlement dynamics, which characterize slums. This paper explores the utility of the gray-level co-occurrence matrix (GLCM) variance to distinguish between slums and formal built-up (formal) areas in very high spatial and spectral resolution satellite imagery such as WorldView-2, OrbView, Quickbird, and Resourcesat. Three geographically different cities are selected for this investigation: Mumbai and Ahmedabad, India and Kigali, Rwanda. The exploration of the utility and transferability of the GLCM shows that the variance of the GLCM combined with the normalized difference vegetation index (NDVI) is able to separate slums and formal areas. The overall accuracy achieved is 84% in Kigali, 87% in Mumbai, and 88% in Ahmedabad. Furthermore, combining spectral information with the GLCM variance within a random forest classifier results in a pixel-based classification accuracy of 90%.
The final slum map, aggregated to homogenous urban patches (HUPs), shows an accuracy of 88%-95% for slum locations depending on the scale parameter.", "title": "" }, { "docid": "0552b51b1036c2998754ef32d13a4cf8", "text": "Camera technology is continuously improving and high quality cameras are now available under one pound of weight. This enables novel and innovative uses, for example at the end of a long boom pole. Unfortunately lighter cameras used in such ways are more susceptible to vertical disturbances and the bouncing associated with walking, resulting in shaking and distortion. We introduce a miniaturized active stabilization mechanism that attenuates such disturbances and keeps the camera steady. Feedback control effectively emulates the stabilizing inertial dynamics associated with higher weights without the penalty of higher weight. The system uses only accelerometer readings and avoids pure integration and associated numerical drift issues. We design, analyze, build, and test the mechanism to show appropriate performance.", "title": "" }, { "docid": "b9032df84f8d17e514a5275066fe0ef4", "text": "Semantic similarity measuring between words can be applied to many applications, such as Artificial Intelligence, Information Processing, Medical Care and Linguistics. In this paper, we present a new approach for semantic similarity measuring which is based on edge-counting and information content theory. Specifically, the proposed measure nonlinearly transforms the weighted shortest path length between the compared concepts to achieve the semantic similarity results, and the relation between parameters and the correlation value is discussed in detail. Experimental results show that the proposed approach not only achieves high correlation value against human ratings but also has better distribution characteristics of the correlation coefficient compared with several related works in the literature. In addition, the proposed method is computationally efficient due to the simplified ways of weighting the shortest path length between the concept pairs.", "title": "" }, { "docid": "f03f84bfa290fd3d1df6d9249cd9d8a6", "text": "We suggest a new technique to reduce energy consumption in the processor datapath without sacrificing performance by exploiting operand value locality at run time. Data locality is one of the major characteristics of video streams as well as other commonly used applications. We use a cache-like scheme to store a selective history of computation results, and the resultant reuse leads to power savings. The cache is indexed by the operands. Based on our model, an 8 to 128 entry execution cache reduces power consumption by 20% to 60%.", "title": "" }, { "docid": "b45f832faf2816d456afa25a3641ffe9", "text": "This book is about feedback control of computing systems. The main idea of feedback control is to use measurements of a system’s outputs, such as response times, throughputs, and utilizations, to achieve externally specified goals. This is done by adjusting the system control inputs, such as parameters that affect buffer sizes, scheduling policies, and concurrency levels. Since the measured outputs are used to determine the control inputs, and the inputs then affect the outputs, the architecture is called feedback or closed loop. Almost any system that is considered automatic has some element of feedback control.
In this book we focus on the closed-loop control of computing systems and methods for their analysis and design.", "title": "" }, { "docid": "865e319ad9562b1f85bff036139b6db2", "text": "This paper focuses on the problem of text detection and recognition in videos. Even though text detection and recognition in images has seen much progress in recent years, relatively little work has been done to extend these solutions to the video domain. In this work, we extend an existing end-to-end solution for text recognition in natural images to video. We explore a variety of methods for training local character models and explore methods to capitalize on the temporal redundancy of text in video. We present detection performance using the Video Analysis and Content Extraction (VACE) benchmarking framework on the ICDAR 2013 Robust Reading Challenge 3 video dataset and on a new video text dataset. We also propose a new performance metric based on precision-recall curves to measure the performance of text recognition in videos. Using this metric, we provide early video text recognition results on the above mentioned datasets.", "title": "" }, { "docid": "fffa8669c2deebe34bb70d95f44aa34b", "text": "A rough neuron is defined as a pair of conventional neurons that are called the upper and lower bound neurons. In this paper, the sinusoidal rough-neural networks (SR-NNs) are used to identify the discrete dynamic nonlinear systems (DDNSs) with or without noise in series–parallel configuration. In the identification of periodic nonlinear systems, sinusoidal activation functions provide more efficient neural networks than the sigmoidal activation functions. Based on the Lyapunov stability theory, an online learning algorithm is developed to train the SR-NNs. The asymptotically convergence of the identification error to zero and the boundedness of parameters as well as predictions are proved. SR-NNs are used to identify some DDNSs and the cement rotary kiln (CRK). CRK is a complex nonlinear system in the cement factory, which produces the cement clinker. The experiments show that the SR-NNs in the identification of nonlinear systems have better performances than multilayer perceptrons (MLPs), sinusoidal neural networks, and rough MLPs, particularly in the presence of noise.", "title": "" }, { "docid": "637a1bc6dd1e3445f5ef92df562a57bd", "text": "This paper deals with the 3D reconstruction problem for dynamic non-rigid objects with a single RGB-D sensor. It is a challenging task as we consider the almost inevitable accumulation error issue in some previous sequential fusion methods and also the possible failure of surface tracking in a long sequence. Therefore, we propose a global non-rigid registration framework and tackle the drifting problem via an explicit loop closure. Our novel scheme starts with a fusion step to get multiple partial scans from the input sequence, followed by a pairwise non-rigid registration and loop detection step to obtain correspondences between neighboring partial pieces and those pieces that form a loop. Then, we perform a global registration procedure to align all those pieces together into a consistent canonical space as guided by those matches that we have established. Finally, our proposed model-update step helps fixing potential misalignments that still exist after the global registration. 
Both geometric and appearance constraints are enforced during our alignment; therefore, we are able to get the recovered model with accurate geometry as well as high fidelity color maps for the mesh. Experiments on both synthetic and various real datasets have demonstrated the capability of our approach to reconstruct complete and watertight deformable objects.", "title": "" }, { "docid": "34d024643d687d092c0859497ab0001c", "text": "BACKGROUND\nHealth IT is expected to have a positive impact on the quality and efficiency of health care. But reports on negative impact and patient harm continue to emerge. The obligation of health informatics is to make sure that health IT solutions provide as much benefit with as few negative side effects as possible. To achieve this, health informatics as a discipline must be able to learn, both from its successes as well as from its failures.\n\n\nOBJECTIVES\nTo present motivation, vision, and history of evidence-based health informatics, and to discuss achievements, challenges, and needs for action.\n\n\nMETHODS\nReflections on scientific literature and on own experiences.\n\n\nRESULTS\nEight challenges on the way towards evidence-based health informatics are identified and discussed: quality of studies; publication bias; reporting quality; availability of publications; systematic reviews and meta-analysis; training of health IT evaluation experts; translation of evidence into health practice; and post-market surveillance. Identified needs for action comprise: establish health IT study registers; increase the quality of publications; develop a taxonomy for health IT systems; improve indexing of published health IT evaluation papers; move from meta-analysis to meta-summaries; include health IT evaluation competencies in curricula; develop evidence-based implementation frameworks; and establish post-marketing surveillance for health IT.\n\n\nCONCLUSIONS\nThere has been some progress, but evidence-based health informatics is still in its infancy. Building evidence in health informatics is our obligation if we consider medical informatics a scientific discipline.", "title": "" }, { "docid": "efb124a26b0cdc9b022975dd83ec76c8", "text": "Apache Spark is an open-source cluster computing framework for big data processing. It has emerged as the next generation big data processing engine, overtaking Hadoop MapReduce which helped ignite the big data revolution. Spark maintains MapReduce's linear scalability and fault tolerance, but extends it in a few important ways: it is much faster (100 times faster for certain applications), much easier to program in due to its rich APIs in Python, Java, Scala (and shortly R), and its core data abstraction, the distributed data frame, and it goes far beyond batch applications to support a variety of compute-intensive tasks, including interactive queries, streaming, machine learning, and graph processing. This tutorial will provide an accessible introduction to Spark and its potential to revolutionize academic and commercial data science practices.", "title": "" } ]
scidocsrr
c661449ef79514f7401a52066f48e29b
Neural Network Models for Paraphrase Identification, Semantic Textual Similarity, Natural Language Inference, and Question Answering
[ { "docid": "5d1b66986357f2566ac503727a80bb87", "text": "Natural Language Inference (NLI) task requires an agent to determine the logical relationship between a natural language premise and a natural language hypothesis. We introduce Interactive Inference Network (IIN), a novel class of neural network architectures that is able to achieve high-level understanding of the sentence pair by hierarchically extracting semantic features from interaction space. We show that an interaction tensor (attention weight) contains semantic information to solve natural language inference, and a denser interaction tensor contains richer semantic information. One instance of such architecture, Densely Interactive Inference Network (DIIN), demonstrates the state-of-the-art performance on large scale NLI copora and large-scale NLI alike corpus. It’s noteworthy that DIIN achieve a greater than 20% error reduction on the challenging Multi-Genre NLI (MultiNLI; Williams et al. 2017) dataset with respect to the strongest published system.", "title": "" }, { "docid": "5664ca8d7f0f2f069d5483d4a334c670", "text": "In Semantic Textual Similarity, systems rate the degree of semantic equivalence between two text snippets. This year, the participants were challenged with new data sets for English, as well as the introduction of Spanish, as a new language in which to assess semantic similarity. For the English subtask, we exposed the systems to a diversity of testing scenarios, by preparing additional OntoNotesWordNet sense mappings and news headlines, as well as introducing new genres, including image descriptions, DEFT discussion forums, DEFT newswire, and tweet-newswire headline mappings. For Spanish, since, to our knowledge, this is the first time that official evaluations are conducted, we used well-formed text, by featuring sentences extracted from encyclopedic content and newswire. The annotations for both tasks leveraged crowdsourcing. The Spanish subtask engaged 9 teams participating with 22 system runs, and the English subtask attracted 15 teams with 38 system runs.", "title": "" }, { "docid": "87f0a390580c452d77fcfc7040352832", "text": "• J. Wieting, M. Bansal, K. Gimpel, K. Livescu, and D. Roth. 2015. From paraphrase database to compositional paraphrase model and back. TACL. • K. S. Tai, R. Socher, and C. D. Manning. 2015. Improved semantic representations from treestructured long short-term memory networks. ACL. • W. Yin and H. Schutze. 2015. Convolutional neural network for paraphrase identification. NAACL. The product also streams internet radio and comes with a 30-day free trial for realnetworks' rhapsody music subscription. The device plays internet radio streams and comes with a 30-day trial of realnetworks rhapsody music service. Given two sentences, measure their similarity:", "title": "" }, { "docid": "de721f4b839b0816f551fa8f8ee2065e", "text": "This paper presents a syntax-driven approach to question answering, specifically the answer-sentence selection problem for short-answer questions. Rather than using syntactic features to augment existing statistical classifiers (as in previous work), we build on the idea that questions and their (correct) answers relate to each other via loose but predictable syntactic transformations. We propose a probabilistic quasi-synchronous grammar, inspired by one proposed for machine translation (D. Smith and Eisner, 2006), and parameterized by mixtures of a robust nonlexical syntax/alignment model with a(n optional) lexical-semantics-driven log-linear model. 
Our model learns soft alignments as a hidden variable in discriminative training. Experimental results using the TREC dataset are shown to significantly outperform strong state-of-the-art baselines.", "title": "" } ]
[ { "docid": "4a76739b77446025bc209a9c7d7cf1a0", "text": "Background\nMetabolic syndrome is defined as a cluster of at least three out of five clinical risk factors: abdominal (visceral) obesity, hypertension, elevated serum triglycerides, low serum high-density lipoprotein (HDL) and insulin resistance. It is estimated to affect over 20% of the global adult population. Abdominal (visceral) obesity is thought to be the predominant risk factor for metabolic syndrome and as predictions estimate that 50% of adults will be classified as obese by 2030 it is likely that metabolic syndrome will be a significant problem for health services and a drain on health economies.Evidence shows that regular and consistent exercise reduces abdominal obesity and results in favourable changes in body composition. It has therefore been suggested that exercise is a medicine in its own right and should be prescribed as such.\n\n\nPurpose of this review\nThis review provides a summary of the current evidence on the pathophysiology of dysfunctional adipose tissue (adiposopathy). It describes the relationship of adiposopathy to metabolic syndrome and how exercise may mediate these processes, and evaluates current evidence on the clinical efficacy of exercise in the management of abdominal obesity. The review also discusses the type and dose of exercise needed for optimal improvements in health status in relation to the available evidence and considers the difficulty in achieving adherence to exercise programmes.\n\n\nConclusion\nThere is moderate evidence supporting the use of programmes of exercise to reverse metabolic syndrome although at present the optimal dose and type of exercise is unknown. The main challenge for health care professionals is how to motivate individuals to participate and adherence to programmes of exercise used prophylactically and as a treatment for metabolic syndrome.", "title": "" }, { "docid": "3b38ff37137549b170dc3bdcf0a955c5", "text": "Little is known about corporate social responsibility (CSR) in lesser developed countries. To address this knowledge gap, we used Chile as a test case, and conducted 44 in-depth interviews with informants who are leading CSR initiatives. Using institutional theory as a lens, we outline the state of CSR practice in Chile, describe the factors that have led to the emergence of CSR, and note the barriers to wider adoption of these initiatives.", "title": "" }, { "docid": "91a56dbdefc08d28ff74883ec10a5d6e", "text": "A truly autonomous guided vehicle (AGV) must sense its surrounding environment and react accordingly. In order to maneuver an AGV autonomously, it has to overcome navigational and collision avoidance problems. Previous AGV control systems have relied on hand-coded algorithms for processing sensor information. An intelligent distributed fuzzy logic control system (IDFLCS) has been implemented in a mecanum wheeled AGV system in order to achieve improved reliability and to reduce complexity of the development of control systems. Fuzzy logic controllers have been used to achieve robust control of mechatronic systems by fusing multiple signals from noisy sensors, integrating the representation of human knowledge and implementing behaviour-based control using if-then rules. This paper presents an intelligent distributed controller that implements fuzzy logic on an AGV that uses four independently driven mecanum wheels, incorporating laser, inertial and ultrasound sensors. 
Distributed control system, fuzzy control strategy, navigation and motion control of such an AGV are presented.", "title": "" }, { "docid": "1f28f5efa70a6387b00e335a8cf1e1d0", "text": "The two underlying requirements of face age progression, i.e. aging accuracy and identity permanence, are not well studied in the literature. In this paper, we present a novel generative adversarial network based approach. It separately models the constraints for the intrinsic subject-specific characteristics and the age-specific facial changes with respect to the elapsed time, ensuring that the generated faces present desired aging effects while simultaneously keeping personalized properties stable. Further, to generate more lifelike facial details, high-level age-specific features conveyed by the synthesized face are estimated by a pyramidal adversarial discriminator at multiple scales, which simulates the aging effects in a finer manner. The proposed method is applicable to diverse face samples in the presence of variations in pose, expression, makeup, etc., and remarkably vivid aging effects are achieved. Both visual fidelity and quantitative evaluations show that the approach advances the state-of-the-art.", "title": "" }, { "docid": "c08518b806c93dde1dd04fdf3c9c45bb", "text": "Purpose – The objectives of this article are to develop a multiple-item scale for measuring e-service quality and to study the influence of perceived quality on consumer satisfaction levels and the level of web site loyalty. Design/methodology/approach – First, there is an explanation of the main attributes of the concepts examined, with special attention being paid to the multi-dimensional nature of the variables and the relationships between them. This is followed by an examination of the validation processes of the measuring instruments. Findings – The validation process of scales suggested that perceived quality is a multidimensional construct: web design, customer service, assurance and order management; that perceived quality influences on satisfaction; and that satisfaction influences on consumer loyalty. Moreover, no differences in these conclusions were observed if the total sample is divided between buyers and information searchers. Practical implications – First, the need to develop user-friendly web sites which ease consumer purchasing and searching, thus creating a suitable framework for the generation of higher satisfaction and loyalty levels. Second, the web site manager should enhance service loyalty, customer sensitivity, personalised service and a quick response to complaints. Third, the web site should uphold sufficient security levels in communications and meet data protection requirements regarding the privacy. Lastly, the need for correct product delivery and product manipulation or service is recommended. Originality/value – Most relevant studies about perceived quality in the internet have focused on web design aspects. Moreover, the existing literature regarding internet consumer behaviour has not fully analysed profits generated by higher perceived quality in terms of user satisfaction and loyalty.", "title": "" }, { "docid": "f50342dfacd198dc094ef96415de4899", "text": "While the ubiquity and importance of nonliteral language are clear, people’s ability to use and understand it remains a mystery. Metaphor in particular has been studied extensively across many disciplines in cognitive science. 
One approach focuses on the pragmatic principles that listeners utilize to infer meaning from metaphorical utterances. While this approach has generated a number of insights about how people understand metaphor, to our knowledge there is no formal model showing that effects in metaphor understanding can arise from basic principles of communication. Building upon recent advances in formal models of pragmatics, we describe a computational model that uses pragmatic reasoning to interpret metaphorical utterances. We conduct behavioral experiments to evaluate the model’s performance and show that our model produces metaphorical interpretations that closely fit behavioral data. We discuss implications of the model for metaphor understanding, principles of communication, and formal models of language understanding.", "title": "" }, { "docid": "256d8659fe5bca53bd03a2f7a101282b", "text": "The paper combines and extends the technologies of fuzzy sets and association rules, considering users' differential emphasis on each attribute through fuzzy regions. A fuzzy data mining algorithm is proposed to discovery fuzzy association rules for weighted quantitative data. This is expected to be more realistic and practical than crisp association rules. Discovered rules are expressed in natural language that is more understandable to humans. The paper demonstrates the performance of the proposed approach using a synthetic but realistic dataset", "title": "" }, { "docid": "2af36afd2440a4940873fef1703aab3f", "text": "In recent years researchers have found that alternations in arterial or venular tree of the retinal vasculature are associated with several public health problems such as diabetic retinopathy which is also the leading cause of blindness in the world. A prerequisite for automated assessment of subtle changes in arteries and veins, is to accurately separate those vessels from each other. This is a difficult task due to high similarity between arteries and veins in addition to variation of color and non-uniform illumination inter and intra retinal images. In this paper a novel structural and automated method is presented for artery/vein classification of blood vessels in retinal images. The proposed method consists of three main steps. In the first step, several image enhancement techniques are employed to improve the images. Then a specific feature extraction process is applied to separate major arteries from veins. Indeed, vessels are divided to smaller segments and feature extraction and vessel classification are applied to each small vessel segment instead of each vessel point. Finally, a post processing step is added to improve the results obtained from the previous step using structural characteristics of the retinal vascular network. In the last stage, vessel features at intersection and bifurcation points are processed for detection of arterial and venular sub trees. Ultimately vessel labels are revised by publishing the dominant label through each identified connected tree of arteries or veins. Evaluation of the proposed approach against two different datasets of retinal images including DRIVE database demonstrates the good performance and robustness of the method. The proposed method may be used for determination of arteriolar to venular diameter ratio in retinal images. 
Also the proposed method potentially allows for further investigation of labels of thinner arteries and veins which might be found by tracing them back to the major vessels.", "title": "" }, { "docid": "5a2be4e590d31b0cb553215f11776a15", "text": "This paper presents a review of the state of the art and a discussion on vertical take-off and landing (VTOL) unmanned aerial vehicles (UAVs) applied to the inspection of power utility assets and other similar civil applications. The first part of the paper presents the authors' view on specific benefits and operation constraints associated with the use of UAVs in power industry applications. The second part cites more than 70 recent publications related to this field of application. Among them, some present complete technologies while others deal with specific subsystems relevant to the application of such mobile platforms to power line inspection. The authors close with a discussion of key factors for successful application of VTOL UAVs to power industry infrastructure inspection.", "title": "" }, { "docid": "91e97df8ee68b2aa8219faeba398f20f", "text": "We propose a method for animating still manga imagery through camera movements. Given a series of existing manga pages, we start by automatically extracting panels, comic characters, and balloons from the manga pages. Then, we use a data-driven graphical model to infer per-panel motion and emotion states from low-level visual patterns. Finally, by combining domain knowledge of film production and characteristics of manga, we simulate camera movements over the manga pages, yielding an animation. The results augment the still manga contents with animated motion that reveals the mood and tension of the story, while maintaining the original narrative. We have tested our method on manga series of different genres, and demonstrated that our method can generate animations that are more effective in storytelling and pacing, with less human efforts, as compared with prior works. We also show two applications of our method, mobile comic reading, and comic trailer generation.", "title": "" }, { "docid": "3e7bac216957b18a24cbd0393b0ff26a", "text": "This research investigated the influence of parent–adolescent communication quality, as perceived by the adolescents, on the relationship between adolescents’ Internet use and verbal aggression. Adolescents (N = 363, age range 10–16, MT1 = 12.84, SD = 1.93) were examined twice with a six-month delay. Controlling for social support in general terms, moderated regression analyses showed that Internet-related communication quality with parents determined whether Internet use is associated with an increase or a decrease in adolescents’ verbal aggression scores over time. A three way interaction indicated that high Internet-related communication quality with peers can have disadvantageous effects if the communication quality with parents is low. Implications on resources and risk factors related to the effects of Internet use are discussed. 2014 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "5bca58cbd1ef80ebf040529578d2a72a", "text": "In this letter, a printable chipless tag with electromagnetic code using split ring resonators is proposed. A 4 b chipless tag that can be applied to paper/plastic-based items such as ID cards, tickets, banknotes and security documents is designed. The chipless tag generates distinct electromagnetic characteristics by various combinations of a split ring resonator. 
Furthermore, a reader system is proposed to digitize electromagnetic characteristics and convert chipless tag to electromagnetic code.", "title": "" }, { "docid": "8ff8a8ce2db839767adb8559f6d06721", "text": "Indoor environments present opportunities for a rich set of location-aware applications such as navigation tools for humans and robots, interactive virtual games, resource discovery, asset tracking, location-aware sensor networking etc. Typical indoor applications require better accuracy than what current outdoor location systems provide. Outdoor location technologies such as GPS have poor indoor performance because of the harsh nature of indoor environments. Further, typical indoor applications require different types of location information such as physical space, position and orientation. This dissertation describes the design and implementation of the Cricket indoor location system that provides accurate location in the form of user space, position and orientation to mobile and sensor network applications. Cricket consists of location beacons that are attached to the ceiling of a building, and receivers, called listeners, attached to devices that need location. Each beacon periodically transmits its location information in an RF message. At the same time, the beacon also transmits an ultrasonic pulse. The listeners listen to beacon transmissions and measure distances to nearby beacons, and use these distances to compute their own locations. This active-beacon passive-listener architecture is scalable with respect to the number of users, and enables applications that preserve user privacy. This dissertation describes how Cricket achieves accurate distance measurements between beacons and listeners. Once the beacons are deployed, the MAT and AFL algorithms, described in this dissertation, use measurements taken at a mobile listener to configure the beacons with a coordinate assignment that reflects the beacon layout. This dissertation presents beacon interference avoidance and detection algorithms, as well as outlier rejection algorithms to prevent and filter out outlier distance estimates caused by uncoordinated beacon transmissions. The Cricket listeners can measure distances with an accuracy of 5 cm. The listeners can detect boundaries with an accuracy of 1 cm. Cricket has a position estimation accuracy of 10 cm and an orientation accuracy of 3 degrees. Thesis Supervisor: Hari Balakrishnan Title: Associate Professor of Computer Science and Engineering", "title": "" }, { "docid": "8760b523ca90dccf7a9a197622bda043", "text": "The increasing need for better performance, protection, and reliability in shipboard power distribution systems, and the increasing availability of power semiconductors is generating the potential for solid state circuit breakers to replace traditional electromechanical circuit breakers. This paper reviews various solid state circuit breaker topologies that are suitable for low and medium voltage shipboard system protection. Depending on the application solid state circuit breakers can have different main circuit topologies, fault detection methods, commutation methods of power semiconductor devices, and steady state operation after tripping. 
This paper provides recommendations on the solid state circuit breaker topologies that provides the best performance-cost tradeoff based on the application.", "title": "" }, { "docid": "ce41e19933571f6904e317a33b97716b", "text": "Ivan Voitalov, 2 Pim van der Hoorn, 2 Remco van der Hofstad, and Dmitri Krioukov 2, 4, 5 Department of Physics, Northeastern University, Boston, Massachusetts 02115, USA Network Science Institute, Northeastern University, Boston, Massachusetts 02115, USA Department of Mathematics and Computer Science, Eindhoven University of Technology, Postbus 513, 5600 MB Eindhoven, Netherlands Department of Mathematics, Northeastern University, Boston, Massachusetts 02115, USA Department of Electrical & Computer Engineering, Northeastern University, Boston, Massachusetts 02115, USA", "title": "" }, { "docid": "8f95bf125d4b10acb373e54407c39b9b", "text": "Research and development irrigation management information systems are the important measures of making irrigation management more modernized and standardized. The difficulties of building information systems have been increased along with the continuous development of information technology and the complexity of information systems, information systems put forward higher request to “shared” and “reuse”. Ontology-based information systems modeling can eliminate semantic differences, and carry out knowledge sharing and interoperability of different systems. In this paper, we introduce several common models which used in information systems modeling briefly; and then we introduce ontology, summarize ontology-based information systems modeling process; finally, we discuss the applications of ontology-based information systems modeling in irrigation management information systems preliminary.", "title": "" }, { "docid": "ca70bf377f8823c2ecb1cdd607c064ec", "text": "To date, few studies have compared the effectiveness of topical silicone gels versus that of silicone gel sheets in preventing scars. In this prospective study, we compared the efficacy and the convenience of use of the 2 products. We enrolled 30 patients who had undergone a surgical procedure 2 weeks to 3 months before joining the study. These participants were randomly assigned to 2 treatment arms: one for treatment with a silicone gel sheet, and the other for treatment with a topical silicone gel. Vancouver Scar Scale (VSS) scores were obtained for all patients; in addition, participants completed scoring patient questionnaires 1 and 3 months after treatment onset. Our results reveal not only that no significant difference in efficacy exists between the 2 products but also that topical silicone gels are more convenient to use. While previous studies have advocated for silicone gel sheets as first-line therapies in postoperative scar management, we maintain that similar effects can be expected with topical silicone gel. The authors recommend that, when clinicians have a choice of silicone-based products for scar prevention, they should focus on each patient's scar location, lifestyle, and willingness to undergo scar prevention treatment.", "title": "" }, { "docid": "ea8622fad1ceba3f274e30247dd2f678", "text": "In software engineering it is widely acknowledged that the usage of metrics at the initial phases of the object oriented software life cycle can help designers to make better decisions and to predict external quality attributes, such as maintainability. 
Following this idea we have carried out three controlled experiments to ascertain if any correlation exists between the structural complexity and the size of UML class diagrams and their maintainability. We used 8 metrics for measuring the structural complexity of class diagrams due to the usage of UML relationships, and 3 metrics to measure their size. With the aim of determining which of these metrics are really relevant to be used as class diagrams maintainability indicators, we present in this work a study based on Principal Component Analysis. The obtained results show that the metrics related to associations, aggregations, generalizations and dependencies, are the most relevant whilst those related to size seem to be redundant.", "title": "" }, { "docid": "cd18d1e77af0e2146b67b028f1860ff0", "text": "Convolutional neural networks (CNN) have recently shown outstanding image classification performance in the large- scale visual recognition challenge (ILSVRC2012). The success of CNNs is attributed to their ability to learn rich mid-level image representations as opposed to hand-designed low-level features used in other image classification methods. Learning CNNs, however, amounts to estimating millions of parameters and requires a very large number of annotated image samples. This property currently prevents application of CNNs to problems with limited training data. In this work we show how image representations learned with CNNs on large-scale annotated datasets can be efficiently transferred to other visual recognition tasks with limited amount of training data. We design a method to reuse layers trained on the ImageNet dataset to compute mid-level image representation for images in the PASCAL VOC dataset. We show that despite differences in image statistics and tasks in the two datasets, the transferred representation leads to significantly improved results for object and action classification, outperforming the current state of the art on Pascal VOC 2007 and 2012 datasets. We also show promising results for object and action localization.", "title": "" }, { "docid": "a64b7b5a24e75bac31d3a071f5a29025", "text": "A new hand gesture recognition method based on Input– Output Hidden Markov Models is presented. This method deals with the dynamic aspects of gestures. Gestures are extracted from a sequence of video images by tracking the skin–color blobs corresponding to the hand into a body– face space centered on the face of the user. Our goal is to recognize two classes of gestures: deictic and symbolic.", "title": "" } ]
scidocsrr
95d112316cef13135085f7e4ab80cfa7
Towards Mobile Phone Localization without War-Driving
[ { "docid": "75a1832a5fdd9c48f565eb17e8477b4b", "text": "We introduce a new interactive system: a game that is fun and can be used to create valuable output. When people play the game they help determine the contents of images by providing meaningful labels for them. If the game is played as much as popular online games, we estimate that most images on the Web can be labeled in a few months. Having proper labels associated with each image on the Web would allow for more accurate image search, improve the accessibility of sites (by providing descriptions of images to visually impaired individuals), and help users block inappropriate images. Our system makes a significant contribution because of its valuable output and because of the way it addresses the image-labeling problem. Rather than using computer vision techniques, which don't work well enough, we encourage people to do the work by taking advantage of their desire to be entertained.", "title": "" } ]
[ { "docid": "bad378dceb9e4c060fa52acdf328d845", "text": "Autonomous robot execution of surgical sub-tasks has the potential to reduce surgeon fatigue and facilitate supervised tele-surgery. This paper considers the sub-task of surgical debridement: removing dead or damaged tissue fragments to allow the remaining healthy tissue to heal. We present an autonomous multilateral surgical debridement system using the Raven, an open-architecture surgical robot with two cable-driven 7 DOF arms. Our system combines stereo vision for 3D perception with trajopt, an optimization-based motion planner, and model predictive control (MPC). Laboratory experiments involving sensing, grasping, and removal of 120 fragments suggest that an autonomous surgical robot can achieve robustness comparable to human performance. Our robot system demonstrated the advantage of multilateral systems, as the autonomous execution was 1.5× faster with two arms than with one; however, it was two to three times slower than a human. Execution speed could be improved with better state estimation that would allow more travel between MPC steps and fewer MPC replanning cycles. The three primary contributions of this paper are: (1) introducing debridement as a sub-task of interest for surgical robotics, (2) demonstrating the first reliable autonomous robot performance of a surgical sub-task using the Raven, and (3) reporting experiments that highlight the importance of accurate state estimation for future research. Further information including code, photos, and video is available at: http://rll.berkeley.edu/raven.", "title": "" }, { "docid": "36e531c34dd8f714f481c6ab9ed1a375", "text": "Generating informative responses in end-toend neural dialogue systems attracts a lot of attention in recent years. Various previous work leverages external knowledge and the dialogue contexts to generate such responses. Nevertheless, few has demonstrated their capability on incorporating the appropriate knowledge in response generation. Motivated by this, we propose a novel open-domain conversation generation model in this paper, which employs the posterior knowledge distribution to guide knowledge selection, therefore generating more appropriate and informative responses in conversations. To the best of our knowledge, we are the first one who utilize the posterior knowledge distribution to facilitate conversation generation. Our experiments on both automatic and human evaluation clearly verify the superior performance of our model over the state-of-the-art baselines.", "title": "" }, { "docid": "488110f56eee525ae4f06f21da795f78", "text": "Recently, a technique called Layer-wise Relevance Propagation (LRP) was shown to deliver insightful explanations in the form of input space relevances for understanding feed-forward neural network classification decisions. In the present work, we extend the usage of LRP to recurrent neural networks. We propose a specific propagation rule applicable to multiplicative connections as they arise in recurrent network architectures such as LSTMs and GRUs. 
We apply our technique to a word-based bi-directional LSTM model on a five-class sentiment prediction task, and evaluate the resulting LRP relevances both qualitatively and quantitatively, obtaining better results than a gradient-based related method which was used in previous work.", "title": "" }, { "docid": "8c7b6d0ecb1b1a4a612f44e8de802574", "text": "Recently, the Fisher vector representation of local features has attracted much attention because of its effectiveness in both image classification and image retrieval. Another trend in the area of image retrieval is the use of binary feature such as ORB, FREAK, and BRISK. Considering the significant performance improvement in terms of accuracy in both image classification and retrieval by the Fisher vector of continuous feature descriptors, if the Fisher vector were also to be applied to binary features, we would receive the same benefits in binary feature based image retrieval and classification. In this paper, we derive the closed-form approximation of the Fisher vector of binary features which are modeled by the Bernoulli mixture model. In experiments, it is shown that the Fisher vector representation improves the accuracy of image retrieval by 25% compared with a bag of binary words approach.", "title": "" }, { "docid": "7ff483824e208e892cd4ee50bb94e471", "text": "Gentle stroking touches are rated most pleasant when applied at a velocity of between 1-10 cm/s. Such touches are considered highly relevant in social interactions. Here, we investigate whether stroking sensations generated by a vibrotactile array can produce similar pleasantness responses, with the ultimate goal of using this type of haptic display in technology mediated social touch. A study was conducted in which participants received vibrotactile stroking stimuli of different velocities and intensities, applied to their lower arm. Results showed that the stimuli were perceived as continuous stroking sensations in a straight line. Furthermore, pleasantness ratings for low intensity vibrotactile stroking followed an inverted U-curve, similar to that found in research into actual stroking touches. The implications of these findings are discussed.", "title": "" }, { "docid": "682803607ab7f72f27f5f145e1dabb0c", "text": "Theories of how initially satisfied marriages deteriorate or remain stable over time have been limited by a failure to distinguish between key facets of change. The present study defines the trajectory of marital satisfaction in terms of 2 separate parameters--(a) the initial level of satisfaction and (b) the rate of change in satisfaction over time--and seeks to estimate unique effects on each of these parameters with variables derived from intrapersonal and interpersonal models of marriage. Sixty newlywed couples completed measures of neuroticism, were observed during a marital interaction and provided reports of marital satisfaction every 6 months for 4 years. Neuroticism was associated with initial levels of marital satisfaction but had no additional effects on rates of change. Behavior during marital interaction predicted rates of change in marital satisfaction but was not associated with initial levels.", "title": "" }, { "docid": "6a3c5a88df65588435f5099166fae043", "text": "Due to short - but frequent - sessions of smartphone usage, the fast and easy usability of authentication mechanisms in this special environment has a big impact on user acceptance. 
In this work we propose a user-friendly alternative to common authentication methods (like PINs and patterns). The advantages of the proposed method are its security, fastness, and easy usage, requiring minimal user interaction compared to other authentication techniques currently used on smartphones. The mechanism described uses the presence of a Bluetooth-connected hardware-token to authenticate the user and can easily be implemented on current smartphones. It is based on an authentication protocol which meets the requirements on energy efficiency and limited resources by optimizing the communication effort. A prototype was implemented on an Android smartphone and an MSP430 based MCU. The token allows fast authentication without the need for additional user action. The entire authentication process can be completed in less than one second, the developed software prototype requires no soft- or hardware modifications (like rooting) of the Android phone.", "title": "" }, { "docid": "8c8e9332a29edb7417ad47b045bf9de7", "text": "Knowledge and lessons from past accidental exposures in radiotherapy are very helpful in finding safety provisions to prevent recurrence. Disseminating lessons is necessary but not sufficient. There may be additional latent risks for other accidental exposures, which have not been reported or have not occurred, but are possible and may occur in the future if not identified, analyzed, and prevented by safety provisions. Proactive methods are available for anticipating and quantifying risk from potential event sequences. In this work, proactive methods, successfully used in industry, have been adapted and used in radiotherapy. Risk matrix is a tool that can be used in individual hospitals to classify event sequences in levels of risk. As with any anticipative method, the risk matrix involves a systematic search for potential risks; that is, any situation that can cause an accidental exposure. The method contributes new insights: The application of the risk matrix approach has identified that another group of less catastrophic but still severe single-patient events may have a higher probability, resulting in higher risk. The use of the risk matrix approach for safety assessment in individual hospitals would provide an opportunity for self-evaluation and managing the safety measures that are most suitable to the hospital's own conditions.", "title": "" }, { "docid": "819f5df03cebf534a51eb133cd44cb0d", "text": "Although DBP (di-n-butyl phthalate) is commonly encountered as an artificially-synthesized plasticizer with potential to impair fertility, we confirm that it can also be biosynthesized as microbial secondary metabolites from naturally occurring filamentous fungi strains cultured either in an artificial medium or natural water. Using the excreted crude enzyme from the fungi for catalyzing a variety of substrates, we found that the fungal generation of DBP was largely through shikimic acid pathway, which was assembled by phthalic acid with butyl alcohol through esterification. The DBP production ability of the fungi was primarily influenced by fungal spore density and incubation temperature. 
This study indicates an important alternative natural waterborne source of DBP in addition to artificial synthesis, which implied fungal contribution must be highlighted for future source control and risk management of DBP.", "title": "" }, { "docid": "45a45087a6829486d46eda0adcff978f", "text": "Container technology has the potential to considerably simplify the management of the software stack of High Performance Computing (HPC) clusters. However, poor integration with established HPC technologies is still preventing users and administrators to reap the benefits of containers. Message Passing Interface (MPI) is a pervasive technology used to run scientific software, often written in Fortran and C/C++, that presents challenges for effective integration with containers. This work shows how an existing MPI implementation can be extended to improve this integration.", "title": "" }, { "docid": "b6c762cee1001d6f45635a1cf52af8ea", "text": "Understanding one's own and other individual's emotional states is essential for maintaining emotional equilibrium and strong social bonds. Although the neural substrates supporting reflection upon one's own feelings have been investigated, no studies have directly examined attributions about the internal emotional states of others to determine whether common or distinct neural systems support these abilities. The present study sought to directly compare brain regions involved in judging one's own, as compared to another individual's, emotional state. Thirteen participants viewed mixed valence blocks of photos drawn from the International Affective Picture System while whole-brain fMRI data were collected. Preblock cues instructed participants to evaluate either their emotional response to each photo, the emotional state of the central figure in each photo, or (in a baseline condition) whether the photo was taken indoors or outdoors. Contrasts indicated (1) that both self and other judgments activated the medial prefrontal cortex (MPFC), the superior temporal gyrus, and the posterior cingulate/precuneus, (2) that self judgments selectively activated subregions of the MPFC and the left temporal cortex, whereas (3) other judgments selectively activated the left lateral prefrontal cortex (including Broca's area) and the medial occipital cortex. These results suggest (1) that self and other evaluation of emotion rely on a network of common mechanisms centered on the MPFC, which has been hypothesized to support mental state attributions in general, and (2) that medial and lateral PFC regions selectively recruited by self or other judgments may be involved in attention to, and elaboration of, internally as opposed to externally generated information.", "title": "" }, { "docid": "333c8a22b502b771c9f5f0df67d6da1c", "text": "Brain extraction from magnetic resonance imaging (MRI) is crucial for many neuroimaging workflows. Current methods demonstrate good results on non-enhanced T1-weighted images, but struggle when confronted with other modalities and pathologically altered tissue. In this paper we present a 3D convolutional deep learning architecture to address these shortcomings. In contrast to existing methods, we are not limited to non-enhanced T1w images. When trained appropriately, our approach handles an arbitrary number of modalities including contrast-enhanced scans. 
Its applicability to MRI data, comprising four channels: non-enhanced and contrast-enhanced T1w, T2w and FLAIR contrasts, is demonstrated on a challenging clinical data set containing brain tumors (N=53), where our approach significantly outperforms six commonly used tools with a mean Dice score of 95.19. Further, the proposed method at least matches state-of-the-art performance as demonstrated on three publicly available data sets: IBSR, LPBA40 and OASIS, totaling N=135 volumes. For the IBSR (96.32) and LPBA40 (96.96) data sets the convolutional neural network (CNN) obtains the highest average Dice scores, albeit not being significantly different from the second best performing method. For the OASIS data the second best Dice (95.02) results are achieved, with no statistical difference in comparison to the best performing tool. For all data sets the highest average specificity measures are evaluated, whereas the sensitivity displays about average results. Adjusting the cut-off threshold for generating the binary masks from the CNN's probability output can be used to increase the sensitivity of the method. Of course, this comes at the cost of a decreased specificity and has to be decided application specific. Using an optimized GPU implementation predictions can be achieved in less than one minute. The proposed method may prove useful for large-scale studies and clinical trials.", "title": "" }, { "docid": "a2dfa8007b3a13da31a768fe07393d15", "text": "Predicting the time and effort for a software problem has long been a difficult task. We present an approach that automatically predicts the fixing effort, i.e., the person-hours spent on fixing an issue. Our technique leverages existing issue tracking systems: given a new issue report, we use the Lucene framework to search for similar, earlier reports and use their average time as a prediction. Our approach thus allows for early effort estimation, helping in assigning issues and scheduling stable releases. We evaluated our approach using effort data from the JBoss project. Given a sufficient number of issue reports, our automatic predictions are close to the actual effort; for issues that are bugs, we are off by only one hour, beating naïve predictions by a factor of four.", "title": "" }, { "docid": "41d97d98a524e5f1e45ae724017819d9", "text": "Dynamically changing (reconfiguring) the membership of a replicated distributed system while preserving data consistency and system availability is a challenging problem. In this paper, we show that reconfiguration can be simplified by taking advantage of certain properties commonly provided by Primary/Backup systems. We describe a new reconfiguration protocol, recently implemented in Apache Zookeeper. It fully automates configuration changes and minimizes any interruption in service to clients while maintaining data consistency. By leveraging the properties already provided by Zookeeper our protocol is considerably simpler than state of the art.", "title": "" }, { "docid": "187ea2797b524f68740c7b3ca7eab8db", "text": "Directly solving the ordinary least squares problem will (in general) require O(nd^2) operations. From Table 5.1, the Gaussian sketch does not actually improve upon this scaling for unconstrained problems: when m ≳ d (as is needed in the unconstrained case), then computing the sketch SA requires O(nd^2) operations as well. If we compute sketches using the JLT, then this cost is reduced to O(nd log(d)) so that we do see some significant savings relative to OLS. 
There are other strategies, of course. In a statistical setting, in which the rows of (A, y) correspond to distinct samples, it is natural to consider a method based on sample splitting. That is, suppose that we do the following:", "title": "" }, { "docid": "0f9151aa44b4175710af869082263631", "text": "In order to research characteristics of unbalanced rotor system with external excitations, a dynamic model of rotor was established. This model not only considered the influences of the gyroscopic effect and the gravity, but also includes two kinds of unbalance which named static/dynamic unbalance. Use the hypothesis of small angle, expression of forces and torque which caused by translation and rotational of these two unbalances and gravity was derived in detail. Using the Lagrange method with six degree of freedom, motion equations of the system were derived. Combined Runge-Kutta approach, dynamic equations of the model were solved. Moreover, nonlinear vibration characteristics were analyzed by means of three kinds of diagrams. Thus, theoretical foundations are established for optimization design and fault diagnosis of rotor-bearing system.", "title": "" }, { "docid": "77c35887241735b833b0b8baaee569c4", "text": "Existing research efforts into tennis visualization have primarily focused on using ball and player tracking data to enhance professional tennis broadcasts and to aid coaches in helping their students. Gathering and analyzing this data typically requires the use of an array of synchronized cameras, which are expensive for non-professional tennis matches. In this paper, we propose TenniVis, a novel tennis match visualization system that relies entirely on data that can be easily collected, such as score, point outcomes, point lengths, service information, and match videos that can be captured by one consumer-level camera. It provides two new visualizations to allow tennis coaches and players to quickly gain insights into match performance. It also provides rich interactions to support ad hoc hypothesis development and testing. We first demonstrate the usefulness of the system by analyzing the 2007 Australian Open men's singles final. We then validate its usability by two pilot user studies where two college tennis coaches analyzed the matches of their own players. The results indicate that useful insights can quickly be discovered and ad hoc hypotheses based on these insights can conveniently be tested through linked match videos.", "title": "" }, { "docid": "a433ebaeeb5dc5b68976b3ecb770c0cd", "text": "1 abstract The importance of the inspection process has been magnified by the requirements of the modern manufacturing environment. In electronics mass-production manufacturing facilities, an attempt is often made to achieve 100 % quality assurance of all parts, subassemblies, and finished goods. A variety of approaches for automated visual inspection of printed circuits have been reported over the last two decades. In this survey, algorithms and techniques for the automated inspection of printed circuit boards are examined. A classification tree for these algorithms is presented and the algorithms are grouped according to this classification. This survey concentrates mainly on image analysis and fault detection strategies, these also include the state-of-the-art techniques. A summary of the commercial PCB inspection systems is also presented. 2 Introduction Many important applications of vision are found in the manufacturing and defense industries. 
In particular, the areas in manufacturing where vision plays a major role are inspection, measurements, and some assembly tasks. The order among these topics closely reflects the manufacturing needs. In most mass-production manufacturing facilities, an attempt is made to achieve 100% quality assurance of all parts, subassemblies, and finished products. One of the most difficult tasks in this process is that of inspecting for visual appearance-an inspection that seeks to identify both functional and cosmetic defects. With the advances in computers (including high speed, large memory and low cost) image processing, pattern recognition, and artificial intelligence have resulted in better and cheaper equipment for industrial image analysis. This development has made the electronics industry active in applying automated visual inspection to manufacturing/fabricating processes that include printed circuit boards, IC chips, photomasks, etc. Nello [1] gives a summary of the machine vision inspection applications in electronics industry.", "title": "" }, { "docid": "4b7eeaf30527604d1e95ef778910564c", "text": "Verification activities are necessary to ensure that the requirements are specified in a correct way. However, until now requirements verification research has focused on traditional up-front requirements. Agile or just-in-time requirements are by definition incomplete, not specific and might be ambiguous when initially specified, indicating a different notion of ‘correctness’. We analyze how verification of agile requirements quality should be performed, based on literature of traditional and agile requirements. This leads to an agile quality framework, instantiated for the specific requirement types of feature requests in open source projects and user stories in agile projects. We have performed an initial qualitative validation of our framework for feature requests with eight practitioners from the Dutch agile community, receiving overall positive feedback.", "title": "" }, { "docid": "1d3b2a5906d7db650db042db9ececed1", "text": "Music consists of precisely patterned sequences of both movement and sound that engage the mind in a multitude of experiences. We move in response to music and we move in order to make music. Because of the intimate coupling between perception and action, music provides a panoramic window through which we can examine the neural organization of complex behaviors that are at the core of human nature. Although the cognitive neuroscience of music is still in its infancy, a considerable behavioral and neuroimaging literature has amassed that pertains to neural mechanisms that underlie musical experience. Here we review neuroimaging studies of explicit sequence learning and temporal production—findings that ultimately lay the groundwork for understanding how more complex musical sequences are represented and produced by the brain. These studies are also brought into an existing framework concerning the interaction of attention and time-keeping mechanisms in perceiving complex patterns of information that are distributed in time, such as those that occur in music.", "title": "" } ]
scidocsrr
603386944343c29b4df4b8ce17cee735
Safety impacts of bicycle infrastructure: A critical review.
[ { "docid": "83533345743229694e055c27240b295c", "text": "OBJECTIVES\nTo assess existing research on the effects of various interventions on levels of bicycling. Interventions include infrastructure (e.g., bike lanes and parking), integration with public transport, education and marketing programs, bicycle access programs, and legal issues.\n\n\nMETHODS\nA comprehensive search of peer-reviewed and non-reviewed research identified 139 studies. Study methodologies varied considerably in type and quality, with few meeting rigorous standards. Secondary data were gathered for 14 case study cities that adopted multiple interventions.\n\n\nRESULTS\nMany studies show positive associations between specific interventions and levels of bicycling. The 14 case studies show that almost all cities adopting comprehensive packages of interventions experienced large increases in the number of bicycle trips and share of people bicycling.\n\n\nCONCLUSIONS\nMost of the evidence examined in this review supports the crucial role of public policy in encouraging bicycling. Substantial increases in bicycling require an integrated package of many different, complementary interventions, including infrastructure provision and pro-bicycle programs, supportive land use planning, and restrictions on car use.", "title": "" } ]
[ { "docid": "f8275a80021312a58c9cd52bbcd4c431", "text": "Mobile online social networks (OSNs) are emerging as the popular mainstream platform for information and content sharing among people. In order to provide Quality of Experience (QoE) support for mobile OSN services, in this paper we propose a socially-driven learning-based framework, namely Spice, for media content prefetching to reduce the access delay and enhance mobile user's satisfaction. Through a large-scale data-driven analysis over real-life mobile Twitter traces from over 17,000 users during a period of five months, we reveal that the social friendship has a great impact on user's media content click behavior. To capture this effect, we conduct social friendship clustering over the set of user's friends, and then develop a cluster-based Latent Bias Model for socially-driven learning-based prefetching prediction. We then propose a usage-adaptive prefetching scheduling scheme by taking into account that different users may possess heterogeneous patterns in the mobile OSN app usage. We comprehensively evaluate the performance of Spice framework using trace-driven emulations on smartphones. Evaluation results corroborate that the Spice can achieve superior performance, with an average 67.2% access delay reduction at the low cost of cellular data and energy consumption. Furthermore, by enabling users to offload their machine learning procedures to a cloud server, our design can achieve speed-up of a factor of 1000 over the local data training execution on smartphones.", "title": "" }, { "docid": "b15dcda2b395d02a2df18f6d8bfa3b19", "text": "We present a method for human pose tracking that learns explicitly about the dynamic effects of human motion on joint appearance. In contrast to previous techniques which employ generic tools such as dense optical flow or spatiotemporal smoothness constraints to pass pose inference cues between frames, our system instead learns to predict joint displacements from the previous frame to the current frame based on the possibly changing appearance of relevant pixels surrounding the corresponding joints in the previous frame. This explicit learning of pose deformations is formulated by incorporating concepts from human pose estimation into an optical flow-like framework. With this approach, state-of-the-art performance is achieved on standard benchmarks for various pose tracking tasks including 3D body pose tracking in RGB video, 3D hand pose tracking in depth sequences, and 3D hand gesture tracking in RGB video.", "title": "" }, { "docid": "e38407316eeee84eaf086ed7779da0a1", "text": "Percutaneous vertebroplasty (PV) and kyphoplasty (PK) are the 2vertebral augmentation procedures that have emerged as minimally invasive surgical options to treat painful vertebral compression fractures (VCF) during the last 2 decades. VCF may either be osteoporotic or tumor-associated. Two hundred million women are affected by osteoporosis globally. Vertebral fracture may result in acute pain around the fracture site, loss of vertebral height due to vertebral collapse, spinal instability, and kyphotic deformity. The main goal of the PV and PK procedures is to give immediate pain relief to patients and restore the vertebral height lost due to fracture. In percutaneous vertebroplasty, bone cement is injected through a minimal incision into the fractured site. 
Kyphoplasty involves insertion of a balloon into the fractured site, followed by inflation-deflation to create a cavity into which the filler material is injected, and the balloon is taken out prior to cement injection. This literature review presents a qualitative overview on the current status of vertebral augmentation procedures,especially PV and PK, and compares the efficacy and safety of these 2 procedures. The review consists of a brief history of the development of these 2 techniques, a discussion on the current research on the bone cement, clinical outcome of the 2 procedures, and it also sheds light on ongoing and future research to maximize the efficacy and safety of vertebral augmentation procedures.", "title": "" }, { "docid": "742dbd75ad995d5c51c4cbce0cc7f8cc", "text": "Grasping objects under uncertainty remains an open problem in robotics research. This uncertainty is often due to noisy or partial observations of the object pose or shape. To enable a robot to react appropriately to unforeseen effects, it is crucial that it continuously takes sensor feedback into account. While visual feedback is important for inferring a grasp pose and reaching for an object, contact feedback offers valuable information during manipulation and grasp acquisition. In this paper, we use model-free deep reinforcement learning to synthesize control policies that exploit contact sensing to generate robust grasping under uncertainty. We demonstrate our approach on a multi-fingered hand that exhibits more complex finger coordination than the commonly used twofingered grippers. We conduct extensive experiments in order to assess the performance of the learned policies, with and without contact sensing. While it is possible to learn grasping policies without contact sensing, our results suggest that contact feedback allows for a significant improvement of grasping robustness under object pose uncertainty and for objects with a complex shape.", "title": "" }, { "docid": "8557c77501fbdc29a4cd0f161224ca8c", "text": "We present a preliminary analysis of the fundamental viability of meta-learning, revisiting the No Free Lunch (NFL) theorem. The analysis shows that given some simple and very basic assumptions, the NFL theorem is of little relevance to research in Machine Learning. We augment the basic NFL framework to illustrate that the notion of an Ultimate Learning Algorithm is well defined. We show that, although cross-validation still is not a viable way to construct general-purpose learning algorithms, meta-learning offers a natural alternative. We still have to pay for our lunch, but the cost is reasonable: the necessary fundamental assumptions are ones we all make anyway.", "title": "" }, { "docid": "2d997b25227266eddba3da5f728d078b", "text": "Image morphing has received much attention in recent years. It has proven to be a powerful tool for visual effects in film and television, enabling the fluid transformation of one digital image into another. This paper surveys the growth of this field and describes recent advances in image morphing in terms of feature specification, warp generation methods, and transition control. These areas relate to the ease of use and quality of results. We describe the role of radial basis functions, thin plate splines, energy minimization, and multilevel free-form deformations in advancing the state-of-the-art in image morphing. 
Recent work on a generalized framework for morphing among multiple images is described.", "title": "" }, { "docid": "309fef7105de05da3a0e987c1dc1c3cc", "text": "Flying Ad hoc Network (FANET) is an infrastructure-less multi-hop radio ad hoc network in which Unmanned Aerial Vehicles (UAVs) and Ground Control Station (GCS) collaborates to forward data traffic. Compared to the standard Mobile Ad hoc NETworks (MANETs), the FANET architecture has some specific features (3D mobility, low UAV density, intermittent network connectivity) that bring challenges to the communication protocol design. Such routing protocol must provide safety by finding an accurate and reliable route between UAVs. This safety can be obtained through the use of agile method during software based routing protocol development (for instance the use of Model Driven Development) by mapping each FANET safety requirement into the routing design process. This process must be completed with a sequential safety validation testing with formal verification tools, standardized simulator (by using real simulation environment) and real-world experiments. In this paper, we considered FANET communication safety by presenting design methodologies and evaluations of FANET routing protocols. We use the LARISSA architecture to guarantee the efficiency and accuracy of the whole system. We also use the model driven development methodology to provide model and code consistency through the use of formal verification tools. To complete the FANET safety validation, OMNeT++ simulations (using real UAVs mobility traces) and real FANET outdoor experiments have been carried out. We confront both results to evaluate routing protocol performances and conclude about its safety consideration.", "title": "" }, { "docid": "ced688e5215ba23fd8bcb8c2ba8584d3", "text": "N2pc is generally interpreted as the electrocortical correlate of the distractor-suppression mechanisms through which attention selection takes place in humans. Here, we present data that challenge this common N2pc interpretation. In Experiment 1, multiple distractors induced greater N2pc amplitudes even when they facilitated target identification, despite the suppression account of the N2pc predicted the contrary; in Experiment 2, spatial proximity between target and distractors did not affect the N2pc amplitude, despite resulting in more interference in response times; in Experiment 3, heterogeneous distractors delayed response times but did not elicit a greater N2pc relative to homogeneous distractors again in contrast with what would have predicted the suppression hypothesis. These results do not support the notion that the N2pc unequivocally mirrors distractor-suppression processes. We propose that the N2pc indexes mechanisms involved in identifying and localizing relevant stimuli in the scene through enhancement of their features and not suppression of distractors.", "title": "" }, { "docid": "826480d6a5100a8af3f61f0a9674bfb6", "text": "Internet of Things is smartly changing various existing research areas into new themes, including smart health, smart home, smart industry, and smart transport. Relying on the basis of “smart transport,” Internet of Vehicles (IoV) is evolving as a new theme of research and development from vehicular ad hoc networks (VANETs). This paper presents a comprehensive framework of IoV with emphasis on layered architecture, protocol stack, network model, challenges, and future aspects. 
Specifically, following the background on the evolution of VANETs and motivation on IoV an overview of IoV is presented as the heterogeneous vehicular networks. The IoV includes five types of vehicular communications, namely, vehicle-to-vehicle, vehicle-to-roadside, vehicle-to-infrastructure of cellular networks, vehicle-to-personal devices, and vehicle-to-sensors. A five layered architecture of IoV is proposed considering functionalities and representations of each layer. A protocol stack for the layered architecture is structured considering management, operational, and security planes. A network model of IoV is proposed based on the three network elements, including cloud, connection, and client. The benefits of the design and development of IoV are highlighted by performing a qualitative comparison between IoV and VANETs. Finally, the challenges ahead for realizing IoV are discussed and future aspects of IoV are envisioned.", "title": "" }, { "docid": "dc83550afd690e371283428647ed806e", "text": "Recently, convolutional neural networks have demonstrated excellent performance on various visual tasks, including the classification of common two-dimensional images. In this paper, deep convolutional neural networks are employed to classify hyperspectral images directly in spectral domain. More specifically, the architecture of the proposed classifier contains five layers with weights which are the input layer, the convolutional layer, the max pooling layer, the full connection layer, and the output layer. These five layers are implemented on each spectral signature to discriminate against others. Experimental results based on several hyperspectral image data sets demonstrate that the proposed method can achieve better classification performance than some traditional methods, such as support vector machines and the conventional deep learning-based methods.", "title": "" }, { "docid": "8e0b16179aabf850c09633df600e6a4a", "text": "Impacts of Informal Caregiving on Caregiver Employment, Health, and Family As the aging population increases, the demand for informal caregiving is becoming an ever more important concern for researchers and policy-makers alike. To shed light on the implications of informal caregiving, this paper reviews current research on its impact on three areas of caregivers’ lives: employment, health, and family. Because the literature is inherently interdisciplinary, the research designs, sampling procedures, and statistical methods used are heterogeneous. Nevertheless, we are still able to draw several conclusions: first, despite the prevalence of informal caregiving and its primary association with lower levels of employment, the affected labor force is seemingly small. Second, such caregiving tends to lower the quality of the caregiver’s psychological health, which also has a negative impact on physical health outcomes. Third, the implications for family life remain under investigated. The research findings also differ strongly among subgroups, although they do suggest that female, spousal, and intense caregivers tend to be the most affected by caregiving. JEL Classification: E26, J14, J46", "title": "" }, { "docid": "8608ccbb61cbfbf3aae7e832ad4be0aa", "text": "Part A: Fundamentals and Cryptography Chapter 1: A Framework for System Security Chapter 1 aims to describe a conceptual framework for the design and analysis of secure systems with the goal of defining a common language to express “concepts”. 
Since it is designed both for theoreticians and for practitioners, there are two kinds of applicability. On the one hand a meta-model is proposed to theoreticians, enabling them to express arbitrary axioms of other security models in this special framework. On the other hand the framework provides a language for describing the requirements, designs, and evaluations of secure systems. This information is given to the reader in the introduction and as a consequence he wants to get the specification of the framework. Unfortunately, the framework itself is not described! However, the contents cover first some surrounding concepts like “systems, owners, security and functionality”. These are described sometimes in a confusing way, so that it remains unclear, what the author really wants to focus on. The following comparison of “Qualitative and Quantitative Security” is done 1For example: if the reader is told, that “every system has an owner, and every owner is a system”, there obviously seems to be no difference between these entities (cp. p. 4).", "title": "" }, { "docid": "de73980005a62a24820ed199fab082a3", "text": "Natural language interfaces offer end-users a familiar and convenient option for querying ontology-based knowledge bases. Several studies have shown that they can achieve high retrieval performance as well as domain independence. This paper focuses on usability and investigates if NLIs are useful from an end-user’s point of view. To that end, we introduce four interfaces each allowing a different query language and present a usability study benchmarking these interfaces. The results of the study reveal a clear preference for full sentences as query language and confirm that NLIs are useful for querying Semantic Web data.", "title": "" }, { "docid": "b2bed3ee655d1ee1c4a9cd4c5ac64264", "text": "Detecting outliers from big data plays an important role in network security. Previous outlier detection algorithms are generally incapable of handling big data. In this paper we present an parallel outlier detection method for big data, based on a new parallel auto-encoder method. Specifically, we build a replicator model of the input data to obtain the representation of sample data. Then, the replicator model is used to measure the replicability of test data, where records having higher reconstruction errors are classified as outliers. Experimental results show the performance of the proposed parallel algorithm.", "title": "" }, { "docid": "8206fa7ff6b126371c19367fd32ca8ee", "text": "In Seed Testing Laboratory (STL), test for other distinguishable variety (ODV) is carried out for the seed sample received from the seed producer after harvesting. This is done to obtain foundation/certified tag from the seed certification department before marketing the product. This is currently done manually by observing the morphological characteristics of the seeds through the naked eye. This is a time-consuming process for the STL and there is a chance for human error in identifying ODV. Hence, there is the need for the machine vision technique to automate the process of identifying ODV in the seed testing laboratory. Various techniques for varietal identification of sunflower seeds are explored in this paper. Experiments are performed on a dataset that contains ten varieties of sunflower seeds and the success rate achieved using various techniques is reported in this paper. 
In cascaded support vector machine (SVM), the order in which the classifier blocks are arranged plays an important role in improving the classification rate. The main contribution of this paper lies in the manipulation of ant colony optimization technique for obtaining the order of cascaded SVM by maximizing the total probability of correct decision. When the SVM is cascaded in optimum order, classification rate has been increased from $88.32\\%$ (obtained using the actual order) to $98.82\\%$ for kernel linear discriminant analysis based boundary descriptors. The closed form expression for computing the total probability of correct detection of the constructed cascaded SVM classifier is also reported in this paper.", "title": "" }, { "docid": "6947f9e3da52e03e867a0c8c015c17df", "text": "Graphs are a powerful and versatile tool useful in various subfields of science and engineering. In many applications, for example, in pattern recognition and computer vision, it is required to measure the similarity of objects. When graphs are used for the representation of structured objects, then the problem of measuring object similarity turns into the problem of computing the similarity of graphs, which is also known as graph matching. In this paper, similarity measures on graphs and related algorithms will be reviewed. Applications of graph matching will be demonstrated giving examples from the fields of pattern recognition and computer vision. Also recent theoretical work showing various relations between different similarity measures will be discussed.", "title": "" }, { "docid": "6a917d1c159c8445b82ac50f3f06f9d4", "text": "As renewable energy increasingly penetrates into power grid systems, new challenges arise for system operators to keep the systems reliable under uncertain circumstances, while ensuring high utilization of renewable energy. With the naturally intermittent renewable energy, such as wind energy, playing more important roles, system robustness becomes a must. In this paper, we propose a robust optimization approach to accommodate wind output uncertainty, with the objective of providing a robust unit commitment schedule for the thermal generators in the day-ahead market that minimizes the total cost under the worst wind power output scenario. Robust optimization models the randomness using an uncertainty set which includes the worst-case scenario, and protects this scenario under the minimal increment of costs. In our approach, the power system will be more reliable because the worst-case scenario has been considered. In addition, we introduce a variable to control the conservatism of our model, by which we can avoid over-protection. By considering pumped-storage units, the total cost is reduced significantly.", "title": "" }, { "docid": "c61c111c5b5d1c4663905371b638e703", "text": "Many standard computer vision datasets exhibit biases due to a variety of sources including illumination condition, imaging system, and preference of dataset collectors. Biases like these can have downstream effects in the use of vision datasets in the construction of generalizable techniques, especially for the goal of the creation of a classification system capable of generalizing to unseen and novel datasets. In this work we propose Unbiased Metric Learning (UML), a metric learning approach, to achieve this goal. UML operates in the following two steps: (1) By varying hyper parameters, it learns a set of less biased candidate distance metrics on training examples from multiple biased datasets. 
The key idea is to learn a neighborhood for each example, which consists of not only examples of the same category from the same dataset, but those from other datasets. The learning framework is based on structural SVM. (2) We do model validation on a set of weakly-labeled web images retrieved by issuing class labels as keywords to search engine. The metric with best validation performance is selected. Although the web images sometimes have noisy labels, they often tend to be less biased, which makes them suitable for the validation set in our task. Cross-dataset image classification experiments are carried out. Results show significant performance improvement on four well-known computer vision datasets.", "title": "" }, { "docid": "626cbfd87a6582d36cd1a98342ce2cc2", "text": "We apply the two-player game assumptions of limited search horizon and commitment to moves in constant time, to single-agent heuristic search problems. We present a variation of minimax lookahead search, and an analog to alpha-beta pruning that significantly improves the efficiency of the algorithm. Paradoxically, the search horizon reachable with this algorithm increases with increasing branching factor. In addition, we present a new algorithm, called Real-Time-A*, for interleaving planning and execution. We prove that the algorithm makes locally optimal decisions and is guaranteed to find a solution. We also present a learning version of this algorithm that improves its performance over successive problem solving trials by learning more accurate heuristic values, and prove that the learned values converge to their exact values along every optimal path. These algorithms effectively solve significantly larger problems than have previously been solvable using heuristic evaluation functions.", "title": "" } ]
scidocsrr
a656a70a8fda684126a459a79ecd55dd
Persuasive Normative Messages : The Influence of Injunctive and Personal Norms on Using Free Plastic Bags
[ { "docid": "c93c0966ef744722d58bbc9170e9a8ab", "text": "Past research has generated mixed support among social scientists for the utility of social norms in accounting for human behavior. We argue that norms do have a substantial impact on human action; however, the impact can only be properly recognized when researchers (a) separate 2 types of norms that at times act antagonistically in a situation—injunctive norms (what most others approve or disapprove) and descriptive norms (what most others do)—and (b) focus Ss' attention principally on the type of norm being studied. In 5 natural settings, focusing Ss on either the descriptive norms or the injunctive norms regarding littering caused the Ss* littering decisions to change only in accord with the dictates of the then more salient type of norm.", "title": "" }, { "docid": "367d49d63f0c79906b50cfb9943c8d3a", "text": "This article develops a conceptual framework for advancing theories of environmentally significant individual behavior and reports on the attempts of the author’s research group and others to develop such a theory. It discusses definitions of environmentally significant behavior; classifies the behaviors and their causes; assesses theories of environmentalism, focusing especially on value-belief-norm theory; evaluates the relationship between environmental concern and behavior; and summarizes evidence on the factors that determine environmentally significant behaviors and that can effectively alter them. The article concludes by presenting some major propositions supported by available research and some principles for guiding future research and informing the design of behavioral programs for environmental protection.", "title": "" } ]
[ { "docid": "277cf6fa4b5085287593ee2ca86e67fc", "text": "What can we learn of the human mind by examining its products? Here it is argued that a great deal can be learned, and that the study of human minds through its creations in the real world could be a promising field of study within the cognitive sciences. The city is a case in point. Since the beginning of cities human ideas about them have been dominated by geometric ideas, and the real history of cities has always oscillated between the geometric and the ‘organic’. Set in the context of the suggestion from cognitive neuroscience that we impose more geometric order on the world that it actually possesses, an intriguing question arises: what is the role of geometric intuition in how we understand cities and how we create them? Here we argue that all cities, the organic as well as the geometric, are pervasively ordered by geometric intuition, so that neither the forms of the cities nor their functioning can be understood without insight into their distinctive and pervasive emergent geometrical forms. The city is, as it is often said to be, the creation of economic and social processes, but, it is argued, these processes operate within an envelope of geometric possibility defined by human minds in its interaction with spatial laws that govern the relations between objects and spaces in the ambient world. Note: I have included only selected images here. All the examples will be shown fully in the presentation. Introduction: the Ideal and the Organic The most basic distinction we make about the form of cities is between the ideal and the organic. The ideal are geometric, the organic are not — or seem not to be. The geometric we define in terms of straight lines and 90 or 45 degree angles, the organic in terms of the lack of either (Fig. 1). The ideal seem to be top-down impositions of the human mind, the outcome of reason, often in association with power. We easily grasp their patterns when seen ‘all at once’. The organic we take to be the outcome of unplanned bottom up processes reflecting the", "title": "" }, { "docid": "e2c6437d257559211d182b5707aca1a4", "text": "In present times, social forums such as Quora and Yahoo! Answers constitute powerful media through which people discuss on a variety of topics and express their intentions and thoughts. Here they often reveal their potential intent to purchase ‘Purchase Intent’ (PI). A purchase intent is defined as a text expression showing a desire to purchase a product or a service in future. Extracting posts having PI from a user’s social posts gives huge opportunities towards web personalization, targeted marketing and improving community observing systems. In this paper, we explore the novel problem of detecting PIs from social posts and classifying them. We find that using linguistic features along with statistical features of PI expressions achieves a significant improvement in PI classification over ‘bag-ofwords’ based features used in many present day socialmedia classification tasks. Our approach takes into consideration the specifics of social posts like limited contextual information, incorrect grammar, language ambiguities, etc. by extracting features at two different levels of text granularity word and phrase based features and grammatical dependency based features. 
Apart from these, the patterns observed in PI posts help us to identify some specific features.", "title": "" }, { "docid": "38a74fff83d3784c892230255943ee23", "text": "Several researchers, present authors included, envision personal mobile robot agents that can assist humans in their daily tasks. Despite many advances in robotics, such mobile robot agents still face many limitations in their perception, cognition, and action capabilities. In this work, we propose a symbiotic interaction between robot agents and humans to overcome the robot limitations while allowing robots to also help humans. We introduce a visitor's companion robot agent, as a natural task for such symbiotic interaction. The visitor lacks knowledge of the environment but can easily open a door or read a door label, while the mobile robot with no arms cannot open a door and may be confused about its exact location, but can plan paths well through the building and can provide useful relevant information to the visitor. We present this visitor companion task in detail with an enumeration and formalization of the actions of the robot agent in its interaction with the human. We briefly describe the wifi-based robot localization algorithm and show results of the different levels of human help to the robot during its navigation. We then test the value of robot help to the visitor during the task to understand the relationship tradeoffs. Our work has been fully implemented in a mobile robot agent, CoBot, which has successfully navigated for several hours and continues to navigate in our indoor environment.", "title": "" }, { "docid": "3071b8a720277f0ab203a40aade90347", "text": "The Internet became an indispensable part of people's lives because of the significant role it plays in the ways individuals interact, communicate and collaborate with each other. Over recent years, social media sites succeed in attracting a large portion of online users where they become not only content readers but also content generators and publishers. Social media users generate daily a huge volume of comments and reviews related to different aspects of life including: political, scientific and social subjects. In general, sentiment analysis refers to the task of identifying positive and negative opinions, emotions and evaluations related to an article, news, products, services, etc. Arabic sentiment analysis is conducted in this study using a small dataset consisting of 1,000 Arabic reviews and comments collected from Facebook and Twitter social network websites. The collected dataset is used in order to conduct a comparison between two free online sentiment analysis tools: SocialMention and SentiStrength that support Arabic language. The results, based on the two classifiers (Decision tree (J48) and SVM), showed that SentiStrength is better than the SocialMention tool.", "title": "" }, { "docid": "c00a29466c82f972a662b0e41b724928", "text": "We introduce the type theory λµv, a call-by-value variant of Parigot's λµ-calculus, as a Curry-Howard representation theory of classical propositional proofs. The associated rewrite system is Church-Rosser and strongly normalizing, and definitional equality of the type theory is consistent, compatible with cut, congruent and decidable. The attendant call-by-value programming language µPCFv is obtained from λµv by augmenting it by basic arithmetic, conditionals and fixpoints. 
We study the behavioural properties of µPCFv and show that, though simple, it is a very general language for functional computation with control: it can express all the main control constructs such as exceptions and first-class continuations. Proof-theoretically the dual λµv-constructs of naming and µ-abstraction witness the introduction and elimination rules of absurdity respectively. Computationally they give succinct expression to a kind of generic (forward) \"jump\" operator, which may be regarded as a unifying control construct for functional computation. Our goal is that λµv and µPCFv respectively should be to functional computation with first-class access to the flow of control what λ-calculus and PCF respectively are to pure functional programming: λµv gives the logical basis via the Curry-Howard correspondence, and µPCFv is a prototypical language albeit in purified form.", "title": "" }, { "docid": "4585252e5cc2b50fb4e53eb408ef9b77", "text": "Android-based Internet-of-Things devices with excellent compatibility and openness are constantly emerging. A typical example is Android Things that Google supports. Compatibility based on the same platform can provide more convenient personalization services centering on mobile devices, while this uniformity-based computing environment can expose many security vulnerabilities. For example, new mobile malware running on Android can instantly transition to all connected devices. In particular, the Android platform has a structural weakness that makes it easy to repackage applications. This can lead to malicious behavior. To protect mobile apps that are vulnerable to malicious activity, various code obfuscation techniques are applied to key logic. The most effective one of this kind involves safely concealing application programming interfaces (API). It is very important to ensure that obfuscation is applied to the appropriate API with an adequate degree of resistance to reverse engineering. Because there is no objective evaluation method, it depends on the developer judgment. Therefore, in this paper, we propose a scheme that can quantitatively evaluate the level of hiding of APIs, which represent the function of the Android application based on machine learning theory. To perform the quantitative evaluation, the API information is obtained by static analysis of a DEX file, and the API-called code executed in Dalvik in the Android platform is dynamically extracted. Moreover, the sensitive APIs are classified using the extracted API and Naive Bayes classification. The proposed scheme yields a high score according to the level of hiding of the classified API. We tested the proposed scheme on representative applications of the Google Play Store. We believe it can be used as a model for obfuscation assessment schemes, because it can evaluate the level of obfuscation in general without relying on specific obfuscation tools.", "title": "" }, { "docid": "0e56ef5556c34274de7d7dceff17317e", "text": "We investigate grounded sentence representations, where we train a sentence encoder to predict the image features of a given caption— i.e., we try to “imagine” how a sentence would be depicted visually—and use the resultant features as sentence representations. We examine the quality of the learned representations on a variety of standard sentence representation quality benchmarks, showing improved performance for grounded models over non-grounded ones. 
In addition, we thoroughly analyze the extent to which grounding contributes to improved performance, and show that the system also learns improved word embeddings.", "title": "" }, { "docid": "53a55e8aa8b3108cdc8d015eabb3476d", "text": "We investigate a family of poisoning attacks against Support Vector Machines (SVM). Such attacks inject specially crafted training data that increases the SVM’s test error. Central to the motivation for these attacks is the fact that most learning algorithms assume that their training data comes from a natural or well-behaved distribution. However, this assumption does not generally hold in security-sensitive settings. As we demonstrate, an intelligent adversary can, to some extent, predict the change of the SVM’s decision function due to malicious input and use this ability to construct malicious data. The proposed attack uses a gradient ascent strategy in which the gradient is computed based on properties of the SVM’s optimal solution. This method can be kernelized and enables the attack to be constructed in the input space even for non-linear kernels. We experimentally demonstrate that our gradient ascent procedure reliably identifies good local maxima of the non-convex validation error surface, which significantly increases the classifier’s test error.", "title": "" }, { "docid": "cef6d9eb15f00eedcb7241d62e5a1b02", "text": "There has been a rapid increase in the use of social networking websites in the last few years. People most conveniently express their views and opinions on a wide array of topics via such websites. Sentiment analysis of such data which comprises of people's views is very important in order to gauge public opinion on a particular topic of interest. This paper reviews a number of techniques, both lexicon-based approaches as well as learning based methods that can be used for sentiment analysis of text. In order to adapt these techniques for sentiment analysis of data procured from one of the social networking websites, Twitter, a number of issues and challenges need to be addressed, which are put forward in this paper.", "title": "" }, { "docid": "ef5935c1a6be177318263ce28e02a454", "text": "In the metallization technology for crystalline Si solar cell, it has been required to develop new material with low cost for high-temperature and highly-reliable interconnection. We have developed new method to fabricate fine metal alloy particles with narrow distribution of particle size from 0.5 to 10μm. We called it as Nanomized method. The fine particles are composed of uniform structure dispersed metal alloy in nano-scale level and do not include void in the particle. The metal alloy particles fabricated by conventional atomized method include void inside particle and non-uniformly dispersed alloy components. We confirmed reliability and mechanical strength. We produced bonding material from the mixture of Cu particles and Sn-based alloy particles. The bonding strength of die chip on Al electrode reached 80 MPa as a top data, 50 MPa in average. The bonding strength of Sn-based alloy particle fabricated from conventional atomized method deteriorated less than 8 MPa after 500 hours later at 300 degree C.", "title": "" }, { "docid": "37572963400c8a78cef3cd4a565b328e", "text": "The impressive performance of utilizing deep learning or neural network has attracted much attention in both the industry and research communities, especially towards computer vision aspect related applications. 
Despite its superior capability of learning, generalization and interpretation on various form of input, micro-expression analysis field is yet remains new in applying this kind of computing system in automated expression recognition system. A new feature extractor, BiVACNN is presented in this paper, where it first estimates the optical flow fields from the apex frame, then encode the flow fields features using CNN. Concretely, the proposed method consists of three stages: apex frame acquisition, multivariate features formation and feature learning using CNN. In the multivariate features formation stage, we attempt to derive six distinct features from the apex details, which include: the apex itself, difference between the apex and onset frames, horizontal optical flow, vertical optical flow, magnitude and orientation. It is demonstrated that utilizing the horizontal and vertical optical flow capable to achieve 80% recognition accuracy in CASME II and SMIC-HS databases.", "title": "" }, { "docid": "ff815f534ab19e79d46adaf8f579f01c", "text": "Leveraging zero-shot learning to learn mapping functions between vector spaces of different languages is a promising approach to bilingual dictionary induction. However, methods using this approach have not yet achieved high accuracy on the task. In this paper, we propose a bridging approach, where our main contribution is a knowledge distillation training objective. As teachers, rich resource translation paths are exploited in this role. And as learners, translation paths involving low resource languages learn from the teachers. Our training objective allows seamless addition of teacher translation paths for any given low resource pair. Since our approach relies on the quality of monolingual word embeddings, we also propose to enhance vector representations of both the source and target language with linguistic information. Our experiments on various languages show large performance gains from our distillation training objective, obtaining as high as 17% accuracy improvements.", "title": "" }, { "docid": "885764d7e71711b8f9a086d43c6e4f9a", "text": "In Indian economy, Agriculture is the most important branch and 70 percentage of rural population livelihood depends on agricultural work. Farming is the one of the important part of Agriculture. Crop yield depends on environment’s factors like precipitation, temperature, evapotranspiration, etc. Generally farmers cultivate crop, based on previous experience. But nowadays, the uncertainty increased in environment. So, accurate analysis of historic data of environment parameters should be done for successful farming. To get more harvest, we should also do the analysis of previous cultivation data. The Prediction of crop yield can be done based on historic crop cultivation data and weather data using data mining methods. This paper describes the role of data mining in Agriculture and crop yield prediction. This paper also describes Groundnut crop yield prediction analysis and Naive Bayes Method.", "title": "" }, { "docid": "e1b83ecf08498491b8d70043cc67d523", "text": "We give a brief discussion of denoising algorithms for depth data and introduce a novel technique based on the NL-means filter. A unified approach is presented that removes outliers from depth data and accordingly achieves an unbiased smoothing result. 
This robust denoising algorithm takes intra-patch similarity and optional color information into account in order to handle strong discontinuities and to preserve fine detail structure in the data. We achieve fast computation times with a GPU-based implementation. Results using data from a time-of-flight camera system show a significant gain in visual quality.", "title": "" }, { "docid": "76882dc402b82d9fffb0621bc6016259", "text": "Representing discrete words in a continuous vector space turns out to be useful for natural language applications related to text understanding. Meanwhile, it poses extensive challenges, one of which is due to the polysemous nature of human language. A common solution (a.k.a word sense induction) is to separate each word into multiple senses and create a representation for each sense respectively. However, this approach is usually computationally expensive and prone to data sparsity, since each sense needs to be managed discriminatively. In this work, we propose a new framework for generating context-aware text representations without diving into the sense space. We model the concept space shared among senses, resulting in a framework that is efficient in both computation and storage. Specifically, the framework we propose is one that: i) projects both words and concepts into the same vector space; ii) obtains unambiguous word representations that not only preserve the uniqueness among words, but also reflect their context-appropriate meanings. We demonstrate the effectiveness of the framework in a number of tasks on text understanding, including word/phrase similarity measurements, paraphrase identification and question-answer relatedness classification.", "title": "" }, { "docid": "a02a53a7fe03bc687d841e67ee08f641", "text": "Spontaneous gestures that accompany speech are related to both verbal and spatial processes. We argue that gestures emerge from perceptual and motor simulations that underlie embodied language and mental imagery. We first review current thinking about embodied cognition, embodied language, and embodied mental imagery. We then provide evidence that gestures stem from spatial representations and mental images. We then propose the gestures-as-simulated-action framework to explain how gestures might arise from an embodied cognitive system. Finally, we compare this framework with other current models of gesture production, and we briefly outline predictions that derive from the framework.", "title": "" }, { "docid": "0c4f09c41c35690de71f106403d14223", "text": "This paper views Islamist radicals as self-interested political revolutionaries and builds on a general model of political extremism developed in a previous paper (Ferrero, 2002), where extremism is modelled as a production factor whose effect on expected revenue is initially positive and then turns negative, and whose level is optimally chosen by a revolutionary organization. The organization is bound by a free-access constraint and hence uses the degree of extremism as a means of indirectly controlling its level of membership with the aim of maximizing expected per capita income of its members, like a producer co-operative. 
The gist of the argument is that radicalization may be an optimal reaction to perceived failure (a widespread perception in the Muslim world) when political activists are, at the margin, relatively strongly averse to effort but not so averse to extremism, a configuration that is at odds with secular, Western-style revolutionary politics but seems to capture well the essence of Islamic revolutionary politics, embedded as it is in a doctrinal framework.", "title": "" }, { "docid": "9906a8e8302f4178472113d074415f25", "text": "The usage and applications of social media have become pervasive. This has enabled an innovative paradigm to solve multimedia problems (e.g., recommendation and popularity prediction), which are otherwise hard to address purely by traditional approaches. In this paper, we investigate how to build a mutual connection among the disparate social media on the Internet, using which cross-domain media recommendation can be realized. We accomplish this goal through SocialTransfer---a novel cross-domain real-time transfer learning framework. While existing transfer learning methods do not address how to utilize the real time social streams, our proposed SocialTransfer is able to effectively learn from social streams to help multimedia applications, assuming an intermediate topic space can be built across domains. It is characterized by two key components: 1) a topic space learned in real time from social streams via Online Streaming Latent Dirichlet Allocation (OSLDA), and 2) a real-time cross-domain graph spectra analysis based transfer learning method that seamlessly incorporates learned topic models from social streams into the transfer learning framework. We present as use cases of \\emph{SocialTransfer} two video recommendation applications that otherwise can hardly be achieved by conventional media analysis techniques: 1) socialized query suggestion for video search, and 2) socialized video recommendation that features socially trending topical videos. We conduct experiments on a real-world large-scale dataset, including 10.2 million tweets and 5.7 million YouTube videos and show that \\emph{SocialTransfer} outperforms traditional learners significantly, and plays a natural and interoperable connection across video and social domains, leading to a wide variety of cross-domain applications.", "title": "" }, { "docid": "041772bbad50a5bf537c0097e1331bdd", "text": "As students read expository text, comprehension is improved by pausing to answer questions that reinforce the material. We describe an automatic question generator that uses semantic pattern recognition to create questions of varying depth and type for self-study or tutoring. Throughout, we explore how linguistic considerations inform system design. In the described system, semantic role labels of source sentences are used in a domain-independent manner to generate both questions and answers related to the source sentence. Evaluation results show a 44% reduction in the error rate relative to the best prior systems, averaging over all metrics, and up to 61% reduction in the error rate on grammaticality judgments.", "title": "" }, { "docid": "f52a231bb6c1953dad1a6b3fb04f0c53", "text": "We propose to capture humans’ variable and idiosyncratic sentiment via building personalized sentiment classification models at a group level. 
Our solution is rooted in the social comparison theory that humans tend to form groups with others of similar minds and ability, and the cognitive consistency theory that mutual influence inside groups will eventually shape group norms and attitudes, with which group members will all shift to align. We formalize personalized sentiment classification as a multi-task learning problem. In particular, to exploit the clustering property of users’ opinions, we impose a non-parametric Dirichlet Process prior over the personalized models, in which group members share the same customized sentiment model adapted from a global classifier. Extensive experimental evaluations on large collections of Amazon and Yelp reviews confirm the effectiveness of the proposed solution: it outperformed user-independent classification solutions, and several state-of-the-art model adaptation and multi-task learning algorithms.", "title": "" } ]
scidocsrr
5f437af5d72e5703280d095f946cada1
Multi-Level Intrusion Detection System (ML-IDS)
[ { "docid": "b15095887da032b74a1f4ea9844d8e56", "text": "From the first appearance of network attacks, the internet worm, to the most recent one in which the servers of several famous e-business companies were paralyzed for several hours, causing huge financial losses, network-based attacks have been increasing in frequency and severity. As a powerful weapon to protect networks, intrusion detection has been gaining a lot of attention. Traditionally, intrusion detection techniques are classified into two broad categories: misuse detection and anomaly detection. Misuse detection aims to detect well-known attacks as well as slight variations of them, by characterizing the rules that govern these attacks. Due to its nature, misuse detection has low false alarms but it is unable to detect any attacks that lie beyond its knowledge. Anomaly detection is designed to capture any deviations from the established profiles of users and systems normal behavior pattern. Although in principle, anomaly detection has the ability to detect new attacks, in practice this is far from easy. Anomaly detection has the potential to generate too many false alarms, and it is very time consuming and labor expensive to sift true intrusions from the false alarms. As new network attacks emerge, the need for intrusion detection systems to detect novel attacks becomes pressing. As we stated before, this is one of the hardest tasks to accomplish, since no knowledge about the novel attacks is available. However, if we view the problem from another angle, we can find a solution. Attacks do something that is different from normal activities: if we have comprehensive knowledge about normal activities and their normal deviations, then all activities ∗This work has been funded by AFRL Rome Labs under the contract F 30602-00-2-0512. †All the authors are at George Mason University, Center for Secure Information Systems Fairfax, VA 22303", "title": "" }, { "docid": "68320c8230cb96d54a3d370b72efa8f1", "text": "Zero-day cyber attacks such as worms and spy-ware are becoming increasingly widespread and dangerous. The existing signature-based intrusion detection mechanisms are often not sufficient in detecting these types of attacks. As a result, anomaly intrusion detection methods have been developed to cope with such attacks. Among the variety of anomaly detection approaches, the Support Vector Machine (SVM) is known to be one of the best machine learning algorithms to classify abnormal behaviors. The soft-margin SVM is one of the well-known basic SVM methods using supervised learning. However, it is not appropriate to use the soft-margin SVM method for detecting novel attacks in Internet traffic since it requires pre-acquired learning information for supervised learning procedure. Such pre-acquired learning information is divided into normal and attack traffic with labels separately. Furthermore, we apply the one-class SVM approach using unsupervised learning for detecting anomalies. This means one-class SVM does not require the labeled information. However, there is downside to using one-class SVM: it is difficult to use the one-class SVM in the real world, due to its high false positive rate. In this paper, we propose a new SVM approach, named Enhanced SVM, which combines these two methods in order to provide unsupervised learning and low false alarm capability, similar to that of a supervised SVM approach. 
We use the following additional techniques to improve the performance of the proposed approach (referred to as Anomaly Detector using Enhanced SVM): First, we create a profile of normal packets using Self-Organized Feature Map (SOFM), for SVM learning without pre-existing knowledge. Second, we use a packet filtering scheme based on Passive TCP/IP Fingerprinting (PTF), in order to reject incomplete network traffic that either violates the TCP/IP standard or generation policy inside of well-known platforms. Third, a feature selection technique using a Genetic Algorithm (GA) is used for extracting optimized information from raw internet packets. Fourth, we use the flow of packets based on temporal relationships during data preprocessing, for considering the temporal relationships among the inputs used in SVM learning. Lastly, we demonstrate the effectiveness of the Enhanced SVM approach using the above-mentioned techniques, such as SOFM, PTF, and GA on MIT Lincoln Lab datasets, and a live dataset captured from a real network. The experimental results are verified by m-fold cross validation, and the proposed approach is compared with real world Network Intrusion Detection Systems (NIDS). ! 2007 Elsevier Inc. All rights reserved.", "title": "" } ]
[ { "docid": "614174e5e1dffe9824d7ef8fae6fb499", "text": "This paper starts with presenting a fundamental principle based on which the celebrated orthogonal frequency division multiplexing (OFDM) waveform is constructed. It then extends the same principle to construct the newly introduced generalized frequency division multiplexing (GFDM) signals. This novel derivation sheds light on some interesting properties of GFDM. In particular, our derivation seamlessly leads to an implementation of GFDM transmitter which has significantly lower complexity than what has been reported so far. Our derivation also facilitates a trivial understanding of how GFDM (similar to OFDM) can be applied in MIMO channels.", "title": "" }, { "docid": "89c7518d9e0bd7eac7d4a0e1983fe0fc", "text": "Technology such as Information and Communication Technology (ICT) is a potent force in driving economic, social, political and educational reforms. Countries, particularly developing ones, cannot afford to stay passive to ICT if they are to compete and strive in the global economy. The health of the economy of any country, poor or rich, developed or developing, depends substantially on the level and quality of the education it provides to its workforce. Education reform is occurring throughout the world and one of the tenets of the reform is the introduction and integration of ICT in the education system. The successful integration of any technology, thus ICT, into the classroom warrants careful planning and depends largely on how well policy makers understand and appreciate the dynamics of such integration. This paper offers a set of guidelines to policy makers for the successful integration of ICT into the classroom.", "title": "" }, { "docid": "dbf26db03a2be849df40416e50368bbd", "text": "This paper studies the fundamental tradeoff between storage and latency in a general wireless interference network with caches equipped at all transmitters and receivers. The tradeoff is characterized by an information-theoretic metric, normalized delivery time (NDT), which is the worst case delivery time of the actual traffic load at a transmission rate specified by degrees of freedom of a given channel. We obtain both an achievable upper bound and a theoretical lower bound of the minimum NDT for any number of transmitters, any number of receivers, and any feasible cache size tuple. We show that the achievable NDT is exactly optimal in certain cache size regions, and is within a bounded multiplicative gap to the theoretical lower bound in other regions. In the achievability analysis, we first propose a novel cooperative transmitter/receiver coded caching strategy. It offers the freedom to adjust file splitting ratios for NDT minimization. We then propose a delivery strategy that transforms the considered interference network into a new class of cooperative X-multicast channels. It leverages local caching gain, coded multicasting gain, and transmitter cooperation gain (via interference alignment and interference neutralization) opportunistically. Finally, the achievable NDT is obtained by solving a linear programming problem. 
This paper reveals that with caching at both transmitter and receiver sides, the network can benefit simultaneously from traffic load reduction and transmission rate enhancement, thereby effectively reducing the content delivery latency.", "title": "" }, { "docid": "dca57315342d58d96836fef9d7f52a71", "text": "We examine the evidence that speech and musical sounds exploit different acoustic cues: speech is highly dependent on rapidly changing broadband sounds, whereas tonal patterns tend to be slower, although small and precise changes in frequency are important. We argue that the auditory cortices in the two hemispheres are relatively specialized, such that temporal resolution is better in left auditory cortical areas and spectral resolution is better in right auditory cortical areas. We propose that cortical asymmetries might have developed as a general solution to the need to optimize processing of the acoustic environment in both temporal and frequency domains.", "title": "" }, { "docid": "20436a21b4105700d7e95a477a22d830", "text": "We introduce a new type of Augmented Reality games: By using a simple webcam and Computer Vision techniques, we turn a standard real game board pawns into an AR game. We use these objects as a tangible interface, and augment them with visual effects. The game logic can be performed automatically by the computer. This results in a better immersion compared to the original board game alone and provides a different experience than a video game. We demonstrate our approach on Monopoly− [1], but it is very generic and could easily be adapted to any other board game.", "title": "" }, { "docid": "1714b97ec601792446cb7ad34a70e3b6", "text": "Interaction intent prediction and the Midas touch have been a longstanding challenge for eye-tracking researchers and users of gaze-based interaction. Inspired by machine learning approaches in biometric person authentication, we developed and tested an offline framework for task-independent prediction of interaction intents. We describe the principles of the method, the features extracted, normalization methods, and evaluation metrics. We systematically evaluated the proposed approach on an example dataset of gaze-augmented problem-solving sessions. We present results of three normalization methods, different feature sets and fusion of multiple feature types. Our results show that accuracy of up to 76% can be achieved with Area Under Curve around 80%. We discuss the possibility of applying the results for an online system capable of interaction intent prediction.", "title": "" }, { "docid": "d1b18ee5e1aa984d670bf4bfd5d5795f", "text": "PURPOSE\nAlveolar ridge augmentation is essential for success in implant therapy and depends on the biological performance of bone graft materials. This literature review aims to comprehensively explain the clinically relevant capabilities and limitations of currently available bone substitutes for bone augmentation in light of biomaterial science.\n\n\nSTUDY SELECTION\nThe biological performance of calcium phosphate-based bone substitutes was categorized according to space-making capability, biocompatibility, bioabsorption, and volume maintenance over time. Each category was reviewed based on clinical studies, preclinical animal studies, and in vitro studies.\n\n\nRESULTS\nCurrently available bone substitutes provide only osteoconduction as a scaffold but not osteoinduction. 
Particle size, sensitivity to enzymatic or chemical dissolution, and mechanical properties affect the space-making capability of bone substitutes. The nature of collagen fibers, particulate size, and release of calcium ions influence the biocompatibility of bone substitutes. Bioabsorption of bone substitutes is determined by water solubility (chemical composition) and acid resistance (integrity of apatite structure). Bioabsorption of remnant bone substitute material and volume maintenance of the augmented bone are inversely related.\n\n\nCONCLUSION\nIt is necessary to improve the biocompatibility of currently available bone substitutes and to strike an appropriate balance between bioabsorption and volume maintenance to achieve ideal bone remodeling.", "title": "" }, { "docid": "f033cdaa2d125830894babdf64702ae1", "text": "Time series data mining has gained increasing attention in the health domain. Recently, researchers have attempted to employ Natural Language Processing (NLP) in health data mining, in order to learn proper representations of discrete medical concepts from Electronic Health Records (EHRs). However, existing models do not take continuous physiological records into account, which naturally exist in EHRs. The major challenges for this task are to model non-obvious representations from observed high-dimensional biosignals, and to interpret the learned features. To address these issues, we propose Wave2Vec, an end-to-end deep learning model, to bridge the gap between biosignal processing and language modeling. Wave2Vec jointly learns both inherent and embedding representations of biosignals at the same time. To evaluate the performance of our model in clinical tasks, we carry out experiments on two real-world benchmark biosignal datasets. Experimental results show that the proposed Wave2Vec model outperforms the six feature learning baselines in biosignal processing.", "title": "" }, { "docid": "a420674c5a89b13fb7e9e27d3b6a5209", "text": "We have developed a machine vision-based liquid level inspection system that decides whether the liquid level of a bottle is under- or overfilled using the ISEF edge detection technique. The system has a conveyor belt controlled by a Siemens LOGO24RLC PLC. The MATLAB image acquisition toolbox, along with a normal web camera, is used for image acquisition. We apply the ISEF edge detection technique and an average distance algorithm to decide the level of the liquid in the bottles. GUI-based interfacing software displays the over- and underfilled status of the bottles on the screen.", "title": "" }, { "docid": "a7a1f7fab650512fe1968f2023bad7ca", "text": "Frontal fibrosing alopecia (FFA) is more common in postmenopausal women, but it can occur in younger women. Some authors consider FFA to be a distinct frontal variant of lichen planopilaris. From a clinical point of view, this relatively uncommon condition is characterized by progressive frontotemporal recession due to inflammatory destruction of hair follicles. Dermoscopy can be very useful, as the differential diagnosis between traction alopecia, alopecia areata, FFA and cicatricial marginal alopecia may be difficult. It is not clear whether or not treatment alters the natural history of the disease - the disease stabilized with time in most of the patients with or without continuing treatment. 
Here we report a case of a 50-year-old woman with FFA and discuss the relevance of dermoscopy in the differential diagnosis of this disease.", "title": "" }, { "docid": "214dd26fb12f3d66e3f67f437a119fc9", "text": "Pervasive healthcare systems, smart grids, and unmanned aircraft systems are examples of Cyber-Physical Systems (CPSs) that have become highly integrated in the modern world. As this integration deepens, the importance of securing these systems increases. In order to identify gaps and propose research directions in CPS intrusion detection research, we survey the literature of this area. Our approach is to classify modern CPS Intrusion Detection System (IDS) techniques based on two design dimensions: detection technique and audit material. We summarize advantages and drawbacks of each dimension’s options. We also summarize the most and least studied CPS IDS techniques in the literature and provide insight on the effectiveness of IDS techniques as they apply to CPSs. Finally, we identify gaps in CPS IDS research and suggest future research areas.", "title": "" }, { "docid": "f244f0de1cde8f083fed3a3495aa261e", "text": "In this paper, we propose a multimodal search engine that combines visual and textual cues to retrieve items from a multimedia database aesthetically similar to the query. The goal of our engine is to enable intuitive retrieval of fashion merchandise such as clothes or furniture. Existing search engines treat textual input only as an additional source of information about the query image and do not correspond to the reallife scenario where the user looks for ”the same shirt but of denim”. Our novel method, dubbed DeepStyle, mitigates those shortcomings by using a joint neural network architecture to model contextual dependencies between features of different modalities. We prove the robustness of this approach on two different challenging datasets of fashion items and furniture where our DeepStyle engine outperforms baseline methods by 18-21% on the tested datasets. Our search engine is commercially deployed and available through a Web-based application.", "title": "" }, { "docid": "8e53a1b830917e8f718f75a6a8843b87", "text": "The final phase of CMOS technology scaling provides continued increases in already vast transistor counts, but only minimal improvements in energy efficiency, thus requiring innovation in circuits and architectures. However, even huge teams are struggling to complete large, complex designs on schedule using traditional rigid development flows. This article presents an agile hardware development methodology, which the authors adopted for 11 RISC-V microprocessor tape-outs on modern 28-nm and 45-nm CMOS processes in the past five years. The authors discuss how this approach enabled small teams to build energy-efficient, cost-effective, and industry-competitive high-performance microprocessors in a matter of months. Their agile methodology relies on rapid iterative improvement of fabricatable prototypes using hardware generators written in Chisel, a new hardware description language embedded in a modern programming language. The parameterized generators construct highly customized systems based on the free, open, and extensible RISC-V platform. 
The authors present a case study of one such prototype featuring a RISC-V vector microprocessor integrated with a switched-capacitor DC-DC converter alongside an adaptive clock generator in a 28-nm, fully depleted silicon-on-insulator process.", "title": "" }, { "docid": "d229c679dcd4fa3dd84c6040b95fc99c", "text": "This paper reviews the supervised learning versions of the no-free-lunch theorems in a simplified form. It also discusses the significance of those theorems, and their relation to other aspects of supervised learning.", "title": "" }, { "docid": "f6ba57b277beb545ad9b396404cd56b9", "text": "The orbitofrontal cortex contains the secondary taste cortex, in which the reward value of taste is represented. It also contains the secondary and tertiary olfactory cortical areas, in which information about the identity and also about the reward value of odours is represented. The orbitofrontal cortex also receives information about the sight of objects from the temporal lobe cortical visual areas, and neurons in it learn and reverse the visual stimulus to which they respond when the association of the visual stimulus with a primary reinforcing stimulus (such as taste) is reversed. This is an example of stimulus-reinforcement association learning, and is a type of stimulus-stimulus association learning. More generally, the stimulus might be a visual or olfactory stimulus, and the primary (unlearned) positive or negative reinforcer a taste or touch. A somatosensory input is revealed by neurons that respond to the texture of food in the mouth, including a population that responds to the mouth feel of fat. In complementary neuroimaging studies in humans, it is being found that areas of the orbitofrontal cortex are activated by pleasant touch, by painful touch, by taste, by smell, and by more abstract reinforcers such as winning or losing money. Damage to the orbitofrontal cortex can impair the learning and reversal of stimulus-reinforcement associations, and thus the correction of behavioural responses when they are no longer appropriate because previous reinforcement contingencies change. The information which reaches the orbitofrontal cortex for these functions includes information about faces, and damage to the orbitofrontal cortex can impair face (and voice) expression identification. This evidence thus shows that the orbitofrontal cortex is involved in decoding and representing some primary reinforcers such as taste and touch; in learning and reversing associations of visual and other stimuli to these primary reinforcers; and in controlling and correcting reward-related and punishment-related behavior, and thus in emotion. The approach described here is aimed at providing a fundamental understanding of how the orbitofrontal cortex actually functions, and thus in how it is involved in motivational behavior such as feeding and drinking, in emotional behavior, and in social behavior.", "title": "" }, { "docid": "099a2ee305b703a765ff3579f0e0c1c3", "text": "To enhance the security of mobile cloud users, a few proposals have been presented recently. However, we argue that most of them are not suitable for mobile cloud where mobile users might join or leave the mobile networks arbitrarily. In this paper, we design a secure mobile user-based data service mechanism (SDSM) to provide confidentiality and fine-grained access control for data stored in the cloud. This mechanism enables the mobile users to enjoy secure outsourced data services at a minimized security management overhead. 
The core idea of SDSM is that SDSM outsources not only the data but also the security management to the mobile cloud in a trust way. Our analysis shows that the proposed mechanism has many advantages over the existing traditional methods such as lower overhead and convenient update, which could better cater the requirements in mobile cloud computing scenarios.", "title": "" }, { "docid": "5a0e5596f77d036852621c1f15788ee2", "text": "The use of metaheuristic search techniques for the automatic generation of test data has been a burgeoning interest for many researchers in recent years. Previous attempts to automate the test generation process have been limited, having been constrained by the size and complexity of software, and the basic fact that in general, test data generation is an undecidable problem. Metaheuristic search techniques offer much promise in regard to these problems. Metaheuristic search techniques are highlevel frameworks, which utilise heuristics to seek solutions for combinatorial problems at a reasonable computational cost. To date, metaheuristic search techniques have been applied to automate test data generation for structural and functional testing; the testing of grey-box properties, for example safety constraints; and also non-functional properties, such as worst-case execution time. This paper surveys some of the work undertaken in this field, discussing possible new future directions of research for each of its different individual areas.", "title": "" }, { "docid": "b819c10fb84e576cb6444023246b91b0", "text": "BCAAs (leucine, isoleucine, and valine), particularly leucine, have anabolic effects on protein metabolism by increasing the rate of protein synthesis and decreasing the rate of protein degradation in resting human muscle. Also, during recovery from endurance exercise, BCAAs were found to have anabolic effects in human muscle. These effects are likely to be mediated through changes in signaling pathways controlling protein synthesis. This involves phosphorylation of the mammalian target of rapamycin (mTOR) and sequential activation of 70-kD S6 protein kinase (p70 S6 kinase) and the eukaryotic initiation factor 4E-binding protein 1. Activation of p70 S6 kinase, and subsequent phopsphorylation of the ribosomal protein S6, is associated with enhanced translation of specific mRNAs. When BCAAs were supplied to subjects during and after one session of quadriceps muscle resistance exercise, an increase in mTOR, p70 S6 kinase, and S6 phosphorylation was found in the recovery period after the exercise with no effect of BCAAs on Akt or glycogen synthase kinase 3 (GSK-3) phosphorylation. Exercise without BCAA intake led to a partial phosphorylation of p70 S6 kinase without activating the enzyme, a decrease in Akt phosphorylation, and no change in GSK-3. It has previously been shown that leucine infusion increases p70 S6 kinase phosphorylation in an Akt-independent manner in resting subjects; however, a relation between mTOR and p70 S6 kinase has not been reported previously. The results suggest that BCAAs activate mTOR and p70 S6 kinase in human muscle in the recovery period after exercise and that GSK-3 is not involved in the anabolic action of BCAAs on human muscle. J. Nutr. 136: 269S–273S, 2006.", "title": "" }, { "docid": "e061e276254cb541826a066dcaf7a460", "text": "Effective data visualization is a key part of the discovery process in the era of “big data”. 
It is the bridge between the quantitative content of the data and human intuition, and thus an essential component of the scientific path from data into knowledge and understanding. Visualization is also essential in the data mining process, directing the choice of the applicable algorithms, and in helping to identify and remove bad data from the analysis. However, a high complexity or a high dimensionality of modern data sets represents a critical obstacle. How do we visualize interesting structures and patterns that may exist in hyper-dimensional data spaces? A better understanding of how we can perceive and interact with multidimensional information poses some deep questions in the field of cognition technology and human-computer interaction. To this effect, we are exploring the use of immersive virtual reality platforms for scientific data visualization, both as software and inexpensive commodity hardware. These potentially powerful and innovative tools for multi-dimensional data visualization can also provide an easy and natural path to a collaborative data visualization and exploration, where scientists can interact with their data and their colleagues in the same visual space. Immersion provides benefits beyond the traditional “desktop” visualization tools: it leads to a demonstrably better perception of a datascape geometry, more intuitive data understanding, and a better retention of the perceived relationships in the data.", "title": "" }, { "docid": "2013d0275a7e7b9411ec7c2748f252e7", "text": "Image recognition problems are usually difficult to solve using raw pixel data. To improve the recognition it is often needed some form of feature extraction to represent the data in a feature space. We use the output of a biologically inspired model for visual recognition as a feature space. The output of the model is a binary code which is used to train a linear classifier for recognizing handwritten digits using the MNIST and USPS datasets. We evaluate the robustness of the approach to a variable number of training samples and compare its performance on these popular datasets to other published results. We achieve competitive error rates on both datasets while greatly improving relatively to related networks using a linear classifier.", "title": "" } ]
scidocsrr
766fd69c4de3ff02b370b4932433309f
The Business Value of CRM Technology: From the Perspective of Organizational Ambidexterity
[ { "docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.", "title": "" }, { "docid": "dcaf30e3e5543ba5eb9f53a07b3e7c86", "text": "Received: 24 September 2009 Revised: 22 March 2010 2nd Revision: 6 December 2010 3rd Revision: 29 July 2011 4th Revision: 5 February 2012 5th Revision: 24 April 2012 6th Revision: 10 June 2012 Accepted: 29 August 2012 Abstract The business value of investments in Information Systems (IS) has been, and is predicted to remain, one of the major research topics for IS researchers. While the vast majority of research papers on IS business value find empirical evidence in favour of both the operational and strategic relevance of IS, the fundamental question of the causal relationship between IS investments and business value remains partly unexplained. Three research tasks are essential requisites on the path towards addressing this epistemological question: the synthesis of existing knowledge, the identification of a lack of knowledge and the proposition of paths for closing the knowledge gaps. This paper considers each of these tasks. Research findings include that correlations between IS investments and productivity vary widely among companies and that the mismeasurement of IS investment impact may be rooted in delayed effects. Key limitations of current research are based on the ambiguity and fuzziness of IS business value, the neglected disaggregation of IS investments, and the unexplained process of creating internal and competitive value. Addressing the limitations we suggest research paths, such as the identification of synergy opportunities of IS assets, and the explanation of relationships between IS innovation and change in IS capabilities. European Journal of Information Systems (2013) 22, 139–169. doi:10.1057/ejis.2012.45; published online 13 November 2012", "title": "" }, { "docid": "fc6214a4b20dba903a1085bd1b6122e0", "text": "a r t i c l e i n f o Keywords: CRM technology use Marketing capability Customer-centric organizational culture Customer-centric management system Customer relationship management (CRM) technology has attracted significant attention from researchers and practitioners as a facilitator of organizational performance. Even though companies have made tremendous investments in CRM technology, empirical research offers inconsistent support that CRM technology enhances organizational performance. Given this equivocal effect and the increasing need for the generalization of CRM implementation research outside western context, the authors, using data from Korean companies, address the process concerning how CRM technology translates into business outcomes. The results highlight that marketing capability mediates the association between CRM technology use and performance. Moreover, a customer-centric organizational culture and management system facilitate CRM technology use. This study serves not only to clarify the mechanism between CRM technology use and organizational performance, but also to generalize the CRM results in the Korean context. 
In today's competitive business environment, the success of firm increasingly hinges on the ability to operate customer relationship management (CRM) that enables the development and implementation of more efficient and effective customer-focused strategies. Based on this belief, many companies have made enormous investment in CRM technology as a means to actualize CRM efficiently. Despite conceptual underpinnings of CRM technology and substantial financial implications , empirical research examining the CRM technology-performance link has met with equivocal results. Recent studies demonstrate that only 30% of the organizations introducing CRM technology achieved improvements in their organizational performance (Bull, 2003; Corner and Hinton, 2002). These conflicting findings hint at the potential influences of unexplored mediating or moderating factors and the need of further research on the mechanism by which CRM technology leads to improved business performance. Such inconsistent results of CRM technology implementation are not limited to western countries which most of previous CRM research originated from. Even though Korean companies have poured tremendous resources to CRM initiatives since 2000, they also cut down investment in CRM technology drastically due to disappointing returns (Knowledge Research Group, 2004). As a result, Korean companies are increasingly eager to corroborate the returns from investment in CRM. In the eastern culture like Korea that promotes holistic thinking focusing on the relationships between a focal object and overall context (Monga and John, 2007), CRM operates as a two-edged sword. Because eastern culture with holistic thinking tends to value existing relationship with firms or contact point persons …", "title": "" }, { "docid": "4a4b12e5f60a0d9cee2be7d499055dd9", "text": "This paper describes the process of inducting theory using case studies-from specifying the research questions to reaching closure. Some features of the process, such as problem definition and construct validation, are similar to hypothesis-testing research. Others, such as within-case analysis and replication logic, are unique to the inductive, case-oriented process. Overall, the process described here is highly iterative and tightly linked to data. This research approach is especially appropriate in new topic areas. The resultant theory is often novel, testable, and empirically valid. Finally, framebreaking insights, the tests of good theory (e.g., parsimony, logical coherence), and convincing grounding in the evidence are the key criteria for evaluating this type of research.", "title": "" } ]
[ { "docid": "b5372d4cad87aab69356ebd72aed0e0b", "text": "Web content nowadays can also be accessed through new generation of Internet connected TVs. However, these products failed to change users’ behavior when consuming online content. Users still prefer personal computers to access Web content. Certainly, most of the online content is still designed to be accessed by personal computers or mobile devices. In order to overcome the usability problem of Web content consumption on TVs, this paper presents a knowledge graph based video generation system that automatically converts textual Web content into videos using semantic Web and computer graphics based technologies. As a use case, Wikipedia articles are automatically converted into videos. The effectiveness of the proposed system is validated empirically via opinion surveys. Fifty percent of survey users indicated that they found generated videos enjoyable and 42 % of them indicated that they would like to use our system to consume Web content on their TVs.", "title": "" }, { "docid": "c12c9fa98f672ec1bfde404d5bf54a35", "text": "Speech recognition has become an important feature in smartphones in recent years. Different from traditional automatic speech recognition, the speech recognition on smartphones can take advantage of personalized language models to model the linguistic patterns and wording habits of a particular smartphone owner better. Owing to the popularity of social networks in recent years, personal texts and messages are no longer inaccessible. However, data sparseness is still an unsolved problem. In this paper, we propose a three-step adaptation approach to personalize recurrent neural network language models (RNNLMs). We believe that its capability to model word histories as distributed representations of arbitrary length can help mitigate the data sparseness problem. Furthermore, we also propose additional user-oriented features to empower the RNNLMs with stronger capabilities for personalization. The experiments on a Facebook dataset showed that the proposed method not only drastically reduced the model perplexity in preliminary experiments, but also moderately reduced the word error rate in n-best rescoring tests.", "title": "" }, { "docid": "1e1355e7fbe185c2e69083fe8df2d875", "text": "The problem of reproducing high dynamic range images on devices with restricted dynamic range has gained a lot of interest in the computer graphics community. There exist various approaches to this issue, which span several research areas including computer graphics, image processing, color vision, physiological aspects, etc. These approaches assume a thorough knowledge of both the objective and subjective attributes of an image. However, no comprehensive overview and analysis of such attributes has been published so far. In this contribution, we present an overview about the effects of basic image attributes in HDR tone mapping. Furthermore, we propose a scheme of relationships between these attributes, leading to the definition of an overall image quality measure. We present results of subjective psychophysical experiments that we have performed to prove the proposed relationship scheme. Moreover, we also present an evaluation of existing tone mapping methods (operators) with regard to these attributes. Finally, the execution of with-reference and without a real reference perceptual experiments gave us the opportunity to relate the obtained subjective results. 
Our effort is not only useful for getting into the tone mapping field or for implementing a tone mapping method; it also sets the stage for well-founded quality comparisons between tone mapping methods. By providing good definitions of the different attributes, user-driven or fully automatic comparisons are made possible.", "title": "" }, { "docid": "4304d7ef3caaaf874ad0168ce8001678", "text": "In a path-breaking paper last year Pat and Betty O’Neil and Gerhard Weikum proposed a self-tuning improvement to the Least Recently Used (LRU) buffer management algorithm [15]. Their improvement is called LRU/k and advocates giving priority to buffer pages based on the kth most recent access. (The standard LRU algorithm is denoted LRU/1 according to this terminology.) If P1’s kth most recent access is more recent than P2’s, then P1 will be replaced after P2. Intuitively, LRU/k for k > 1 is a good strategy, because it gives low priority to pages that have been scanned or to pages that belong to a big randomly accessed file (e.g., the account file in TPC/A). They found that LRU/2 achieves most of the advantage of their method. The one problem of LRU/2 is the processor overhead.", "title": "" }, { "docid": "f4e171367606f2fe3ea91060333c6257", "text": "To remain independent and healthy, an important factor to consider is the maintenance of skeletal muscle mass. Inactivity leads to measurable changes in muscle and bone, reduces exercise capacity, impairs the immune system, and decreases the sensitivity to insulin. Therefore, maintaining physical activity is of great importance for skeletal muscle health. One form of structured physical activity is resistance training. Generally speaking, one needs to lift weights at approximately 70% of their one repetition maximum (1RM) to have noticeable increases in muscle size and strength. Although numerous positive effects are observed from heavy resistance training, some at risk populations (e.g. elderly, rehabilitating patients, etc.) might be advised not to perform high-load resistance training and may be limited to performance of low-load resistance exercise. A technique which applies pressure cuffs to the limbs causing blood flow restriction (BFR) has been shown to attenuate atrophy and when combined with low intensity exercise has resulted in an increase in both muscle size and strength across different age groups. We have provided an evidence based model of progression from bed rest to higher load resistance training, based largely on BFR literature concentrating on more at risk populations, to highlight a possible path to recovery.", "title": "" }, { "docid": "49db1291f3f52a09037d6cfd305e8b5f", "text": "This paper examines cognitive beliefs and affect influencing one’s intention to continue using (continuance) information systems (IS). Expectation-confirmation theory is adapted from the consumer behavior literature and integrated with theoretical and empirical findings from prior IS usage research to theorize a model of IS continuance. Five research hypotheses derived from this model are empirically validated using a field survey of online banking users. 
The results suggest that users’ continuance intention is determined by their satisfaction with IS use and perceived usefulness of continued IS use. User satisfaction, in turn, is influenced by their confirmation of expectation from prior IS use and perceived usefulness. Post-acceptance perceived usefulness is influenced by users’ confirmation level. This study draws attention to the substantive differences between acceptance and continuance behaviors, theorizes and validates one of the earliest theoretical models of IS continuance, integrates confirmation and user satisfaction constructs within our current understanding of IS use, conceptualizes and creates an initial scale for measuring IS continuance, and offers an initial explanation for the acceptance-discontinuance anomaly.", "title": "" }, { "docid": "49108ff6bdebfef7295d4dc3681897e8", "text": "Recognition of materials has proven to be a challenging problem due to the wide variation in appearance within and between categories. Global image context, such as where the material is or what object it makes up, can be crucial to recognizing the material. Existing methods, however, operate on an implicit fusion of materials and context by using large receptive fields as input (i.e., large image patches). Many recent material recognition methods treat materials as yet another set of labels like objects. Materials are, however, fundamentally different from objects as they have no inherent shape or defined spatial extent. Approaches that ignore this can only take advantage of limited implicit context as it appears during training. We instead show that recognizing materials purely from their local appearance and integrating separately recognized global contextual cues including objects and places leads to superior dense, per-pixel, material recognition. We achieve this by training a fully-convolutional material recognition network end-to-end with only material category supervision. We integrate object and place estimates into this network from independent CNNs. This approach avoids the necessity of preparing an impractically-large amount of training data to cover the product space of materials, objects, and scenes, while fully leveraging contextual cues for dense material recognition. Furthermore, we perform a detailed analysis of the effects of context granularity, spatial resolution, and the network level at which we introduce context. On a recently introduced comprehensive and diverse material database [14], we confirm that our method achieves state-of-the-art accuracy with significantly less training data compared to past methods.", "title": "" }, { "docid": "7490d342ffb59bd396421e198b243775", "text": "Antioxidant activities of defatted sesame meal extract increased as the roasting temperature of sesame seed increased, but the maximum antioxidant activity was achieved when the seeds were roasted at 200 °C for 60 min. Roasting sesame seeds at 200 °C for 60 min significantly increased the total phenolic content, radical scavenging activity (RSA), reducing powers, and antioxidant activity of sesame meal extract; and several low-molecular-weight phenolic compounds such as 2-methoxyphenol, 4-methoxy-3-methylthio-phenol, 5-amino-3-oxo-4-hexenoic acid, 3,4-methylenedioxyphenol (sesamol), 3-hydroxy benzoic acid, 4-hydroxy benzoic acid, vanillic acid, filicinic acid, and 3,4-dimethoxy phenol were newly formed in the sesame meal after roasting sesame seeds at 200 °C for 60 min. 
These results indicate that antioxidant activity of defatted sesame meal extracts was significantly affected by roasting temperature and time of sesame seeds.", "title": "" }, { "docid": "8bd15d6b67bf73c85d83f5548bc48c56", "text": "Traditional time series similarity search, based on relevance feedback, combines initial, positive and negative relevant series directly to create new query sequence for the next search; it can’t make full use of the negative relevant sequence, even results in inaccurate query results due to excessive adjustment of the query sequence in some cases. In this paper, time series similarity search based on separate relevance feedback is proposed, each round of query includes positive query and negative query, and combines the results of them to generate the query results of each round. For one data sequence, positive query evaluates its similarity to the initial and positive relevant sequences, and negative query evaluates it’s similarity to the negative relevant sequences. The final similar sequences should be not only close to positive relevant series but also far away from negative relevant series. The experiments on UCR data sets showed that, compared with the retrieval method without feedback and the commonly used feedback algorithm the proposed method can improve accuracy of similarity search on some data sets.", "title": "" }, { "docid": "5c48c8a2a20408775f5eaf4f575d5031", "text": "In this paper we present a computational cognitive model of task interruption and resumption, focusing on the effects of the problem state bottleneck. Previous studies have shown that the disruptiveness of interruptions is for an important part determined by three factors: interruption duration, interrupting-task complexity, and moment of interruption. However, an integrated theory of these effects is still missing. Based on previous research into multitasking, we propose a first step towards such a theory in the form of a process model that attributes these effects to problem state requirements of both the interrupted and the interrupting task. Subsequently, we tested two predictions of this model in two experiments. The experiments confirmed that problem state requirements are an important predictor for the disruptiveness of interruptions. This suggests that interfaces should be designed to a) interrupt users at low-problem state moments and b) maintain the problem state for the user when interrupted.", "title": "" }, { "docid": "26813ea092f8bbedd3f970010a8a6fe6", "text": "Lane-border detection is one of the best-developed modules in vision-based driver assistance systems today. However, there is still a need for further improvement for challenging road and traffic situations, and a need to design tools for quantitative performance evaluation. This paper discusses and refines a previously published method to generate ground truth for lane markings from recorded video, applies two lanedetection methods to such video data, and then illustrates the proposed performance evaluation by comparing calculated ground truth with detected lane positions. This paper also proposes appropriate performance measures that are required to evaluate the proposed method.", "title": "" }, { "docid": "ff664eac9ffb8cae9b4db1bc09629935", "text": "In this paper, we apply sentiment analysis and machine learning principles to find the correlation between ”public sentiment” and ”market sentiment”. 
We use twitter data to predict public mood and use the predicted mood and previous days’ DJIA values to predict the stock market movements. In order to test our results, we propose a new cross validation method for financial data and obtain 75.56% accuracy using Self Organizing Fuzzy Neural Networks (SOFNN) on the Twitter feeds and DJIA values from the period June 2009 to December 2009. We also implement a naive protfolio management strategy based on our predicted values. Our work is based on Bollen et al’s famous paper which predicted the same with 87% accuracy.", "title": "" }, { "docid": "3092e0006fd965034352e04ba9933a46", "text": "In classification, it is often difficult or expensive to obtain completely accurate and reliable labels. Indeed, labels may be polluted by label noise, due to e.g. insufficient information, expert mistakes, and encoding errors. The problem is that errors in training labels that are not properly handled may deteriorate the accuracy of subsequent predictions, among other effects. Many works have been devoted to label noise and this paper provides a concise and comprehensive introduction to this research topic. In particular, it reviews the types of label noise, their consequences and a number of state of the art approaches to deal with label noise.", "title": "" }, { "docid": "e8e7665194124453124cf0d56115c33e", "text": "Fourth generation (4G) wireless networks will provide high-bandwidth connectivity with quality-of-service (QoS) support to mobile users in a seamless manner. In such a scenario, a mobile user will be able to connect to different wireless access networks such as a wireless metropolitan area network (WMAN), a cellular network, and a wireless local area network (WLAN) simultaneously. We present a game-theoretic framework for radio resource management (that is, bandwidth allocation and admission control) in such a heterogeneous wireless access environment. First, a noncooperative game is used to obtain the bandwidth allocations to a service area from the different access networks available in that service area (on a long-term basis). The Nash equilibrium for this game gives the optimal allocation which maximizes the utilities of all the connections in the network (that is, in all of the service areas). Second, based on the obtained bandwidth allocation, to prioritize vertical and horizontal handoff connections over new connections, a bargaining game is formulated to obtain the capacity reservation thresholds so that the connection-level QoS requirements can be satisfied for the different types of connections (on a long-term basis). Third, we formulate a noncooperative game to obtain the amount of bandwidth allocated to an arriving connection (in a service area) by the different access networks (on a short-term basis). Based on the allocated bandwidth and the capacity reservation thresholds, an admission control is used to limit the number of ongoing connections so that the QoS performances are maintained at the target level for the different types of connections.", "title": "" }, { "docid": "e86c2af47c55a574aecf474f95fb34d3", "text": "This paper presents a novel way to address the extrinsic calibration problem for a system composed of a 3D LIDAR and a camera. The relative transformation between the two sensors is calibrated via a nonlinear least squares (NLS) problem, which is formulated in terms of the geometric constraints associated with a trihedral object. 
Precise initial estimates of NLS are obtained by dividing it into two sub-problems that are solved individually. With the precise initializations, the calibration parameters are further refined by iteratively optimizing the NLS problem. The algorithm is validated on both simulated and real data, as well as a 3D reconstruction application. Moreover, since the trihedral target used for calibration can be either orthogonal or not, it is very often present in structured environments, making the calibration convenient.", "title": "" }, { "docid": "fbe58cc0d6a3a93bbc64e60661346099", "text": "Most automatic expression analysis systems attempt to recognize a small set of prototypic expressions (e.g. happiness and anger). Such prototypic expressions, however, occur infrequently. Human emotions and intentions are communicated more often by changes in one or two discrete facial features. In this paper, we develop an automatic system to analyze subtle changes in facial expressions based on both permanent (e.g. mouth, eye, and brow) and transient (e.g. furrows and wrinkles) facial features in a nearly frontal image sequence. Multi-state facial component models are proposed for tracking and modeling different facial features. Based on these multi-state models, and without artificial enhancement, we detect and track the facial features, including mouth, eyes, brow, cheeks, and their related wrinkles and facial furrows. Moreover we recover detailed parametric descriptions of the facial features. With these features as the inputs, 11 individual action units or action unit combinations are recognized by a neural network algorithm. A recognition rate of 96.7% is obtained. The recognition results indicate that our system can identify action units regardless of whether they occurred singly or in combinations.", "title": "" }, { "docid": "2779fabc9c858ba67fa8be2545cec0f1", "text": "Abst rac t -A meta-analysis of 32 comparative studies showed that computer-based education has generally had positive effects on the achievement of elementary school pupils. These effects have been different, however, for programs of @line computer-managed instruction (CMI) and for interactive computer-assisted instruction (CAI). The average effect in 28 studies of CAI programs was an increase in pupil achievement scores of O. 47 standard deviations, or from the 50th to the 68th percentile. The average effect in four studies of CMI programs, however, was an increase in scores of only O. 07 standard deviations. Study features were not significantly related to study outcomes.", "title": "" }, { "docid": "30d19f3524527f45b15712624094b6af", "text": "In 19 normal adults reserpine administration induced significant changes in the parameters of the second glabellar response (R2): shortening of the latency and duration; decrease of the excitability threshold and complete blockade of the physiological habituation of R2 to the electrical and mechanical stimulation. No changes in the first response (R1) were observed. All the R2 changes disappeared within about 3 days of drug administration. The Parkinson-like effect of reserpine on the glabellar reflex is discussed in the light of a neurohormonal hypothesis in the control of the polysynaptic pathways biasing R2.", "title": "" }, { "docid": "cb98cf51f6cb916249a614b4680db698", "text": "During sleep, humans can strengthen previously acquired memories, but whether they can acquire entirely new information remains unknown. 
The nonverbal nature of the olfactory sniff response, in which pleasant odors drive stronger sniffs and unpleasant odors drive weaker sniffs, allowed us to test learning in humans during sleep. Using partial-reinforcement trace conditioning, we paired pleasant and unpleasant odors with different tones during sleep and then measured the sniff response to tones alone during the same nights' sleep and during ensuing wake. We found that sleeping subjects learned novel associations between tones and odors such that they then sniffed in response to tones alone. Moreover, these newly learned tone-induced sniffs differed according to the odor pleasantness that was previously associated with the tone during sleep. This acquired behavior persisted throughout the night and into ensuing wake, without later awareness of the learning process. Thus, humans learned new information during sleep.", "title": "" }, { "docid": "becbcb6ca7ac87a3e43dbc65748b258a", "text": "We present Mean Box Pooling, a novel visual representation that pools over CNN representations of a large number, highly overlapping object proposals. We show that such representation together with nCCA, a successful multimodal embedding technique, achieves state-of-the-art performance on the Visual Madlibs task. Moreover, inspired by the nCCA’s objective function, we extend classical CNN+LSTM approach to train the network by directly maximizing the similarity between the internal representation of the deep learning architecture and candidate answers. Again, such approach achieves a significant improvement over the prior work that also uses CNN+LSTM approach on Visual Madlibs.", "title": "" } ]
scidocsrr
544941c1f149482cefa4f5041ddb74e3
Awesome Typography: Statistics-Based Text Effects Transfer
[ { "docid": "6008f42e840e85c935bc455e13e03e19", "text": "Photo retouching enables photographers to invoke dramatic visual impressions by artistically enhancing their photos through stylistic color and tone adjustments. However, it is also a time-consuming and challenging task that requires advanced skills beyond the abilities of casual photographers. Using an automated algorithm is an appealing alternative to manual work, but such an algorithm faces many hurdles. Many photographic styles rely on subtle adjustments that depend on the image content and even its semantics. Further, these adjustments are often spatially varying. Existing automatic algorithms are still limited and cover only a subset of these challenges. Recently, deep learning has shown unique abilities to address hard problems. This motivated us to explore the use of deep neural networks (DNNs) in the context of photo editing. In this article, we formulate automatic photo adjustment in a manner suitable for this approach. We also introduce an image descriptor accounting for the local semantics of an image. Our experiments demonstrate that training DNNs using these descriptors successfully capture sophisticated photographic styles. In particular and unlike previous techniques, it can model local adjustments that depend on image semantics. We show that this yields results that are qualitatively and quantitatively better than previous work.", "title": "" }, { "docid": "1bb5e01e596d09e4ff89d7cb864ff205", "text": "A number of recent approaches have used deep convolutional neural networks (CNNs) to build texture representations. Nevertheless, it is still unclear how these models represent texture and invariances to categorical variations. This work conducts a systematic evaluation of recent CNN-based texture descriptors for recognition and attempts to understand the nature of invariances captured by these representations. First we show that the recently proposed bilinear CNN model [25] is an excellent generalpurpose texture descriptor and compares favorably to other CNN-based descriptors on various texture and scene recognition benchmarks. The model is translationally invariant and obtains better accuracy on the ImageNet dataset without requiring spatial jittering of data compared to corresponding models trained with spatial jittering. Based on recent work [13, 28] we propose a technique to visualize pre-images, providing a means for understanding categorical properties that are captured by these representations. Finally, we show preliminary results on how a unified parametric model of texture analysis and synthesis can be used for attribute-based image manipulation, e.g. to make an image more swirly, honeycombed, or knitted. The source code and additional visualizations are available at http://vis-www.cs.umass.edu/texture.", "title": "" }, { "docid": "abe9e19b8e5e388933645ce25c48b2b1", "text": "We introduce \"time hallucination\": synthesizing a plausible image at a different time of day from an input image. This challenging task often requires dramatically altering the color appearance of the picture. In this paper, we introduce the first data-driven approach to automatically creating a plausible-looking photo that appears as though it were taken at a different time of day. The time of day is specified by a semantic time label, such as \"night\".\n Our approach relies on a database of time-lapse videos of various scenes. These videos provide rich information about the variations in color appearance of a scene throughout the day. 
Our method transfers the color appearance from videos with a similar scene as the input photo. We propose a locally affine model learned from the video for the transfer, allowing our model to synthesize new color data while retaining image details. We show that this model can hallucinate a wide range of different times of day. The model generates a large sparse linear system, which can be solved by off-the-shelf solvers. We validate our methods by synthesizing transforming photos of various outdoor scenes to four times of interest: daytime, the golden hour, the blue hour, and nighttime.", "title": "" } ]
[ { "docid": "df08803274492f2eb2fe92e69bc3b9e6", "text": "Wikipedia is a major source of information for many people. However, false information on Wikipedia raises concerns about its credibility. One way in which false information may be presented on Wikipedia is in the form of hoax articles, i.e., articles containing fabricated facts about nonexistent entities or events. In this paper we study false information on Wikipedia by focusing on the hoax articles that have been created throughout its history. We make several contributions. First, we assess the real-world impact of hoax articles by measuring how long they survive before being debunked, how many pageviews they receive, and how heavily they are referred to by documents on the Web. We find that, while most hoaxes are detected quickly and have little impact on Wikipedia, a small number of hoaxes survive long and are well cited across the Web. Second, we characterize the nature of successful hoaxes by comparing them to legitimate articles and to failed hoaxes that were discovered shortly after being created. We find characteristic differences in terms of article structure and content, embeddedness into the rest of Wikipedia, and features of the editor who created the hoax. Third, we successfully apply our findings to address a series of classification tasks, most notably to determine whether a given article is a hoax. And finally, we describe and evaluate a task involving humans distinguishing hoaxes from non-hoaxes. We find that humans are not good at solving this task and that our automated classifier outperforms them by a big margin.", "title": "" }, { "docid": "d4b1513319396aedab8f9d78bb19c9bf", "text": "CONTEXT\nSolid-pseudopapillary tumor of the pancreas is a rare tumor which usually affects young females in their second and third decade of life. Metastasis is very rare after a resection of curative intent.\n\n\nCASE REPORT\nWe report a case of a 65-year-old white female who presented with metastasis to the liver four years after Whipple's resection for a solid-pseudopapillary tumor of the pancreas.\n\n\nCONCLUSIONS\nSolid-pseudopapillary tumors of the pancreas can present with metastasis a long time after resection of the primary tumor. Long term close follow up of these patients should be done. The survival rate even after liver metastasis is good.", "title": "" }, { "docid": "ba58ba95879516c00d91cf75754eb131", "text": "In order to assess the current knowledge on the therapeutic potential of cannabinoids, a meta-analysis was performed through Medline and PubMed up to July 1, 2005. The key words used were cannabis, marijuana, marihuana, hashish, hashich, haschich, cannabinoids, tetrahydrocannabinol, THC, dronabinol, nabilone, levonantradol, randomised, randomized, double-blind, simple blind, placebo-controlled, and human. The research also included the reports and reviews published in English, French and Spanish. For the final selection, only properly controlled clinical trials were retained, thus open-label studies were excluded. Seventy-two controlled studies evaluating the therapeutic effects of cannabinoids were identified. For each clinical trial, the country where the project was held, the number of patients assessed, the type of study and comparisons done, the products and the dosages used, their efficacy and their adverse effects are described. 
Cannabinoids present an interesting therapeutic potential as antiemetics, appetite stimulants in debilitating diseases (cancer and AIDS), analgesics, and in the treatment of multiple sclerosis, spinal cord injuries, Tourette's syndrome, epilepsy and glaucoma.", "title": "" }, { "docid": "d859fb8570c91206708b7b2b8f5eedcb", "text": "In this article, we describe a method for overlaying arbitrary texture image onto surface of T-shirt worn by a user. In this method, the texture image is previously divided into a number of patches. On the T-shirt, markers are printed at the positions corresponding to the vertices of the patches. The markers on the surface of the T-shirt are tracked in the motion image taken by a camera. The texture image is warped according to the tracked positions of the markers, which is overlaid onto the captured image. This article presents experimental results with the pilot system of virtual clothing implemented based on the proposed method.", "title": "" }, { "docid": "edaeccfe6263c1625765574443b79e68", "text": "The elongated structure of the hippocampus is critically involved in brain functions of profound importance. The segregation of functions along the longitudinal (septotemporal or dorsoventral) axis of the hippocampus is a slowly developed concept and currently is a widely accepted idea. The segregation of neuroanatomical connections along the hippocampal long axis can provide a basis for the interpretation of the functional segregation. However, an emerging and growing body of data strongly suggests the existence of endogenous diversification in the properties of the local neural network along the long axis of the hippocampus. In particular, recent electrophysiological research provides compelling evidence demonstrating constitutively increased network excitability in the ventral hippocampus with important implications for the endogenous initiation and propagation of physiological hippocampal oscillations yet, under favorable conditions it can also drive the local network towards hyperexcitability. In addition, important specializations in the properties of dorsal and ventral hippocampal synapses may support an optimal signal processing that contributes to the effective execution of the distinct functional roles played by the two hippocampal segments.", "title": "" }, { "docid": "7eba5af9ca0beaf8cbac4afb45e85339", "text": "This paper is concerned with the derivation of the kinematics model of the University of Tehran-Pole Climbing Robot (UT-PCR). As the first step, an appropriate set of coordinates is selected and used to describe the state of the robot. Nonholonomic constraints imposed by the wheels are then expressed as a set of differential equations. By describing these equations in terms of the state of the robot an underactuated driftless nonlinear control system with affine inputs that governs the motion of the robot is derived. A set of experimental results are also given to show the capability of the UT-PCR in climbing a stepped pole.", "title": "" }, { "docid": "58eebe0e55f038fea268b6a7a6960939", "text": "The classic answer to what makes a decision good concerns outcomes. A good decision has high outcome benefits (it is worthwhile) and low outcome costs (it is worth it). I propose that, independent of outcomes or value from worth, people experience a regulatory fit when they use goal pursuit means that fit their regulatory orientation, and this regulatory fit increases the value of what they are doing. 
The following postulates of this value from fit proposal are examined: (a) People will be more inclined toward goal means that have higher regulatory fit, (b) people's motivation during goal pursuit will be stronger when regulatory fit is higher, (c) people's (prospective) feelings about a choice they might make will be more positive for a desirable choice and more negative for an undesirable choice when regulatory fit is higher, (d) people's (retrospective) evaluations of past decisions or goal pursuits will be more positive when regulatory fit was higher, and (e) people will assign higher value to an object that was chosen with higher regulatory fit. Studies testing each of these postulates support the value-from-fit proposal. How value from fit can enhance or diminish the value of goal pursuits and the quality of life itself is discussed.", "title": "" }, { "docid": "5fbe283356d3a0008e671efdd5f659ab", "text": "The study demonstrates that hypercholesterinemia in patients with coronary heart disease (CHD) is associated with functional depression of microcirculation, increase in total peripheral vascular resistance, reduction in the functional efficiency of heart and decrease in activity tolerance. After receiving a course of low-intensity infrared laser radiation treatment the patients displayed positive changes in blood lipid spectrum, which was associated with improvement in microcirculation, decrease in afterload, increase in economization of heart functioning and activity tolerance. The obtained results demonstrate that the hypolipidemic effect of laser radiation is a substantial factor in the regression of CHD manifestations.", "title": "" }, { "docid": "bea36db3e7a3a97f8a6ab03ce1bdf962", "text": "The emergence of mobile communication and positioning technologies has presented advertisers and marketers with a radically innovative advertising channel: Location-Based Advertising (LBA). Despite the growing attention given to LBA, little is understood about the differential effects of text and multimedia advertising formats on the mobile consumer perceptions and behaviours. This exploratory study empirically examines the effects of multimedia advertisements vis-à-vis text-based advertisements on consumer perceptions and behaviours in a simulated LBA environment. A structural model was formulated to test their effects on consumer perceptions of entertainment, informativeness and irritation. Results show that multimedia LBA messages lead to more favourable attitude, increase the intention to use the LBA application, and have significant impact on purchase intention. Furthermore, this study indicates the role of multimedia as a double-edged sword: on the one hand, it suggests that multimedia impose a higher level of irritation; on the other hand, it suggests that multimedia enhance the informativeness and entertainment value of LBA. Implications for theory and practice are discussed. Perceived effectiveness of text vs. multimedia LBA messaging 155", "title": "" }, { "docid": "46d5ecaeb529341dedcd724cfb3696bb", "text": "Big Data stellt heute ein zentrales Thema der Informatik dar: Insbesondere durch die zunehmende Datafizierung unserer Umwelt entstehen neue und umfangreiche Datenquellen, während sich gleichzeitig die Verarbeitungsgeschwindigkeit von Daten wesentlich erhöht und diese Quellen somit immer häufiger in nahezu Echtzeit analysiert werden können. 
Neben der Bedeutung in der Informatik nimmt jedoch auch die Relevanz von Daten im täglichen Leben zu: Immer mehr Informationen sind das Ergebnis von Datenanalysen und immer häufiger werden Entscheidungen basierend auf Analyseergebnissen getroffen. Trotz der Relevanz von Daten und Datenverarbeitung im Alltag werden moderne Formen der Datenanalyse im Informatikunterricht bisher jedoch allenfalls am Rand betrachtet, sodass die Schülerinnen und Schüler weder die Möglichkeiten noch die Gefahren dieser Methoden erfahren können. In diesem Beitrag stellen wir daher ein prototypisches Unterrichtskonzept zum Thema Datenanalyse im Kontext von Big Data vor, in dem die Schülerinnen und Schüler wesentliche Grundlagen von Datenanalysen kennenlernen und nachvollziehen können. Um diese komplexen Systeme für den Informatikunterricht möglichst einfach zugänglich zu machen und mit realen Daten arbeiten zu können, wird dabei ein selbst implementiertes Datenstromsystem zur Verarbeitung des Datenstroms von Twitter eingesetzt.", "title": "" }, { "docid": "1d1eeb2f5a16fd8e1deed16a5839505b", "text": "Searchable symmetric encryption (SSE) is a widely popular cryptographic technique that supports the search functionality over encrypted data on the cloud. Despite the usefulness, however, most of existing SSE schemes leak the search pattern, from which an adversary is able to tell whether two queries are for the same keyword. In recent years, it has been shown that the search pattern leakage can be exploited to launch attacks to compromise the confidentiality of the client’s queried keywords. In this paper, we present a new SSE scheme which enables the client to search encrypted cloud data without disclosing the search pattern. Our scheme uniquely bridges together the advanced cryptographic techniques of chameleon hashing and indistinguishability obfuscation. In our scheme, the secure search tokens for plaintext keywords are generated in a randomized manner, so it is infeasible to tell whether the underlying plaintext keywords are the same given two secure search tokens. In this way, our scheme well avoids using deterministic secure search tokens, which is the root cause of the search pattern leakage. We provide rigorous security proofs to justify the security strengths of our scheme. In addition, we also conduct extensive experiments to demonstrate the performance. Although our scheme for the time being is not immediately applicable due to the current inefficiency of indistinguishability obfuscation, we are aware that research endeavors on making indistinguishability obfuscation practical is actively ongoing and the practical efficiency improvement of indistinguishability obfuscation will directly lead to the applicability of our scheme. Our paper is a new attempt that pushes forward the research on SSE with concealed search pattern.", "title": "" }, { "docid": "13c250fc46dfc45e9153dbb1dc184b70", "text": "This paper proposes Travel Prediction-based Data forwarding (TPD), tailored and optimized for multihop vehicle-to-vehicle communications. The previous schemes forward data packets mostly utilizing statistical information about road network traffic, which becomes much less accurate when vehicles travel in a light-traffic vehicular network. In this light-traffic vehicular network, highly dynamic vehicle mobility can introduce a large variance for the traffic statistics used in the data forwarding process. 
However, with the popularity of GPS navigation systems, vehicle trajectories become available and can be utilized to significantly reduce this uncertainty in the road traffic statistics. Our TPD takes advantage of these vehicle trajectories for a better data forwarding in light-traffic vehicular networks. Our idea is that with the trajectory information of vehicles in a target road network, a vehicle encounter graph is constructed to predict vehicle encounter events (i.e., timing for two vehicles to exchange data packets in communication range). With this encounter graph, TPD optimizes data forwarding process for minimal data delivery delay under a specific delivery ratio threshold. Through extensive simulations, we demonstrate that our TPD significantly outperforms existing legacy schemes in a variety of road network settings. © 2015 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "00d081e61bfbfa64371b4d9e30fcd452", "text": "In the coming era of social companions, many researches have been pursuing natural dialog interactions and long-term relations between social companions and users. With respect to the quick decrease of user interests after the first few interactions, various emotion and memory models are developed and integrated with social companions for better user engagement. This paper reviews related works in the effort of combining memory and emotion with natural language dialog on social companions. We separate these works into three categories: (1) Affective system with dialog, (2) Task-driven memory with dialog, (3) Chat-driven memory with dialog. In addition, we discussed limitations and challenging issues to be solved. Finally, we also introduced our framework of social companions.", "title": "" }, { "docid": "844ded310fc86d452cfa948d27940182", "text": "The objective of this paper is to propose a bidirectional single-stage grid-connected inverter (BSG-inverter) for the battery energy storage system. The proposed BSG-inverter is composed of multiple bidirectional buck–boost type dc–dc converters (BBCs) and a dc–ac unfolder. Advantages of the proposed BSG-inverter include: single-stage power conversion, low battery and dc-bus voltages, pulsating charging/discharging currents, and individual power control for each battery module. Therefore, the equalization, lifetime extension, and capacity flexibility of the battery energy storage system can be achieved. Based on the developed equations, the power flow of the battery system can be controlled without the need of input current sensor. Also, with the interleaved operation between BBCs, the current ripple of the output inductor can be reduced too. The computer simulations and hardware experimental results are shown to verify the performance of the proposed BSG-inverter.", "title": "" }, { "docid": "dabfd831ec8eaf37f662db3c75e68a5b", "text": "Multitask learning has been applied successfully to a range of tasks, mostly morphosyntactic. However, little is known on when MTL works and whether there are data characteristics that help to determine its success. In this paper we evaluate a range of semantic sequence labeling tasks in a MTL setup. We examine different auxiliary tasks, amongst which a novel setup, and correlate their impact to datadependent conditions. Our results show that MTL is not always effective, significant improvements are obtained only for 1 out of 5 tasks. 
When successful, auxiliary tasks with compact and more uniform label distributions are preferable.", "title": "" }, { "docid": "ba57f271fbf1c6c93aa10ac51b760168", "text": "Abstract. In this paper, connected domination in fuzzy graphs using strong arcs is introduced. The strong connected domination number of different classes of fuzzy graphs is obtained. An upper bound for the strong connected domination number of fuzzy graphs is obtained. Strong connected domination in fuzzy trees is studied. It is established that the set of fuzzy cut nodes of a fuzzy tree is a strong connected dominating set. It is proved that in a fuzzy tree each node of a strong connected dominating set is incident on a fuzzy bridge. Also the characteristic properties of the existence of strong connected dominating set for a fuzzy graph and its complement are established.", "title": "" }, { "docid": "9665c72fd804d630791fdd0bc381d116", "text": "Social Sharing of Emotion (SSE) occurs when one person shares an emotional experience with another and is considered potentially beneficial. Though social sharing has been shown prevalent in interpersonal communication, research on its occurrence and communication structure in online social networks is lacking. Based on a content analysis of blog posts (n = 540) in a blog social network site (Live Journal), we assess the occurrence of social sharing in blog posts, characterize different types of online SSE, and present a theoretical model of online SSE. A large proportion of initiation expressions were found to conform to full SSE, with negative emotion posts outnumbering bivalent and positive posts. Full emotional SSE posts were found to prevail, compared to partial feelings or situation posts. Furthermore, affective feedback predominated to cognitive and provided emotional support, empathy and admiration. The study found evidence that the process of social sharing occurs in Live Journal, replicating some features of face to face SSE. Instead of a superficial view of online social sharing, our results support a prosocial and beneficial character to online SSE. 2015 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "b1cabb319ce759343ad3f043c7d86b14", "text": "We consider the problem of scheduling tasks requiring certain processing times on one machine so that the busy time of the machine is maximized. The problem is to find a probabilistic online algorithm with reasonable worst case performance ratio. We answer an open problem of Lipton and Tompkins concerning the best possible ratio that can be achieved. Furthermore, we extend their results to anm-machine analogue. Finally, a variant of the problem is analyzed, in which the machine is provided with a buffer to store one job. Wir betrachten das Problem der Zuteilung von Aufgaben bestimmter Rechenzeit auf einem Rechner, um so seine Auslastung zu maximieren. Die Aufgabe besteht darin, einen probabilistischen Online-Algorithmus mit vernünftigem worst-case Performance-Verhältnis zu finden. Wir geben die Antwort auf ein offenes Problem von Lipton und Tompkins, das das bestmögliche Verhältnis betrifft. Weiter verallgemeinern wir ihre Ergebnisse auf einm-Maschinen-Analogon. Schließlich wird eine Variante des Problems analysiert, in dem der Rechner mit einem Zwischenspeicher für einen Job versehen ist.", "title": "" }, { "docid": "dd3d8d5d623a4bed6fb0939e15caa056", "text": "This paper investigates a number of computational intelligence techniques in the detection of heart disease. 
Particularly, comparison of six well known classifiers for the well used Cleveland data is performed. Further, this paper highlights the potential of an expert judgment based (i.e., medical knowledge driven) feature selection process (termed as MFS), and compare against the generally employed computational intelligence based feature selection mechanism. Also, this article recognizes that the publicly available Cleveland data becomes imbalanced when considering binary classification. Performance of classifiers, and also the potential of MFS are investigated considering this imbalanced data issue. The experimental results demonstrate that the use of MFS noticeably improved the performance, especially in terms of accuracy, for most of the classifiers considered and for majority of the datasets (generated by converting the Cleveland dataset for binary classification). MFS combined with the computerized feature selection process (CFS) has also been investigated and showed encouraging results particularly for NaiveBayes, IBK and SMO. In summary, the medical knowledge based feature selection method has shown promise for use in heart disease diagnostics. 2012 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "34af5ac483483fa59eda7804918bdb1c", "text": "Automatic spelling and grammatical correction systems are one of the most widely used tools within natural language applications. In this thesis, we assume the task of error correction as a type of monolingual machine translation where the source sentence is potentially erroneous and the target sentence should be the corrected form of the input. Our main focus in this project is building neural network models for the task of error correction. In particular, we investigate sequence-to-sequence and attention-based models which have recently shown a higher performance than the state-of-the-art of many language processing problems. We demonstrate that neural machine translation models can be successfully applied to the task of error correction. While the experiments of this research are performed on an Arabic corpus, our methods in this thesis can be easily applied to any language. Keywords— natural language error correction, recurrent neural networks, encoderdecoder models, attention mechanism", "title": "" } ]
scidocsrr
d89b53d6a7d49ce35bb563527d9988ef
Long Distance Pronominalisation and Global Focus
[ { "docid": "569a7cfcf7dd4cc5132dc7ffa107bfcf", "text": "We present the results of a study of definite descriptions use in written texts aimed at assessing the feasibility of annotating corpora with information about definite description interpretation. We ran two experiments, in which subjects were asked to classify the uses of definite descriptions in a corpus of 33 newspaper articles, containing a total of 1412 definite descriptions. We measured the agreement among annotators about the classes assigned to definite descriptions, as well as the agreement about the antecedent assigned to those definites that the annotators classified as being related to an antecedent in the text. Themost interesting result of this study from a corpus annotation perspective was the rather low agreement (K=0.63) that we obtained using versions of Hawkins’ and Prince’s classification schemes; better results (K=0.76) were obtained using the simplified scheme proposed by Fraurud that includes only two classes, first-mention and subsequent-mention. The agreement about antecedents was also not complete. These findings raise questions concerning the strategy of evaluating systems for definite description interpretation by comparing their results with a standardized annotation. From a linguistic point of view, the most interesting observations were the great number of discourse-newdefinites in our corpus (in one of our experiments, about 50% of the definites in the collection were classified as discourse-new, 30% as anaphoric, and 18% as associative/bridging) and the presence of definites which did not seem to require a complete disambiguation. This paper will appear in Computational Linguistics.", "title": "" } ]
[ { "docid": "45a098c09a3803271f218fafd4d951cd", "text": "Recent years have seen a tremendous increase in the demand for wireless bandwidth. To support this demand by innovative and resourceful use of technology, future communication systems will have to shift towards higher carrier frequencies. Due to the tight regulatory situation, frequencies in the atmospheric attenuation window around 300 GHz appear very attractive to facilitate an indoor, short range, ultra high speed THz communication system. In this paper, we investigate the influence of diffuse scattering at such high frequencies on the characteristics of the communication channel and its implications on the non-line-of-sight propagation path. The Kirchhoff approach is verified by an experimental study of diffuse scattering from randomly rough surfaces commonly encountered in indoor environments using a fiber-coupled terahertz time-domain spectroscopy system to perform angle- and frequency-dependent measurements. Furthermore, we integrate the Kirchhoff approach into a self-developed ray tracing algorithm to model the signal coverage of a typical office scenario.", "title": "" }, { "docid": "b2214dcc652abccf0e132f570cac1c81", "text": "Histopathological examination of the biopsy disclosed the following findings ( fig. 1 c). The epidermis showed orthokeratosis and mild acanthosis with focally elongated rete ridges. Collagen fibers in the dermis were only slightly increased. Ectatic capillaries and venules were a prominent feature of the hyperplastic papillary dermis. A sparse infiltrate of lymphocytes and histiocytes was present in the dermis. These findings conformed to the histopathological features of angiofibroma. There was no pathological finding of condyloma acuminatum.", "title": "" }, { "docid": "8e08b1ee93a1434bac2dde763c929332", "text": "Recommender Systems (RSs) are powerful and popular tools for e-commerce. To build their recommendations, RSs make use of varied data sources, which capture the characteristics of items, users, and their transactions. Despite recent advances in RS, the cold start problem is still a relevant issue that deserves further attention, and arises due to the lack of prior information about new users and new items. To minimize system degradation, a hybrid approach is presented that combines collaborative filtering recommendations with demographic information. The approach is based on an existing algorithm, SCOAL (Simultaneous Co-Clustering and Learning), and provides a hybrid recommendation approach that can address the (pure) cold start problem, where no collaborative information (ratings) is available for new users. Better predictions are produced from this relaxation of assumptions to replace the lack of information for the new user. Experiments using real-world datasets show the effectiveness of the", "title": "" }, { "docid": "5d63b20254e8732807a0c029cd86014f", "text": "Various perceptual domains have underlying compositional semantics that are rarely captured in current models. We suspect this is because directly learning the compositional structure has evaded these models. Yet, the compositional structure of a given domain can be grounded in a separate domain thereby simplifying its learning. To that end, we propose a new approach to modeling bimodal percepts that explicitly relates distinct projections across each modality and then jointly learns a bimodal sparse representation. 
The resulting model enables compositionality across these distinct projections and hence can generalize to unobserved percepts spanned by this compositional basis. For example, our model can be trained on red triangles and blue squares; yet, implicitly will also have learned red squares and blue triangles. The structure of the projections and hence the compositional basis is learned automatically for a given language model. To test our model, we have acquired a new bimodal dataset comprising images and spoken utterances of colored shapes in a tabletop setup. Our experiments demonstrate the benefits of explicitly leveraging compositionality in both quantitative and human evaluation studies.", "title": "" }, { "docid": "32a4c17a53643042a5c19180bffd7c21", "text": "Although mobile, tablet, large display, and tabletop computers increasingly present opportunities for using pen, finger, and wand gestures in user interfaces, implementing gesture recognition largely has been the privilege of pattern matching experts, not user interface prototypers. Although some user interface libraries and toolkits offer gesture recognizers, such infrastructure is often unavailable in design-oriented environments like Flash, scripting environments like JavaScript, or brand new off-desktop prototyping environments. To enable novice programmers to incorporate gestures into their UI prototypes, we present a \"$1 recognizer\" that is easy, cheap, and usable almost anywhere in about 100 lines of code. In a study comparing our $1 recognizer, Dynamic Time Warping, and the Rubine classifier on user-supplied gestures, we found that $1 obtains over 97% accuracy with only 1 loaded template and 99% accuracy with 3+ loaded templates. These results were nearly identical to DTW and superior to Rubine. In addition, we found that medium-speed gestures, in which users balanced speed and accuracy, were recognized better than slow or fast gestures for all three recognizers. We also discuss the effect that the number of templates or training examples has on recognition, the score falloff along recognizers' N-best lists, and results for individual gestures. We include detailed pseudocode of the $1 recognizer to aid development, inspection, extension, and testing.", "title": "" }, { "docid": "298df39e9b415bc1eed95ed56d3f32df", "text": "In this work, we present a true 3D 128 Gb 2 bit/cell vertical-NAND (V-NAND) Flash product for the first time. The use of barrier-engineered materials and gate all-around structure in the 3D V-NAND cell exhibits advantages over 1 × nm planar NAND, such as small Vth shift due to small cell coupling and narrow natural Vth distribution. Also, a negative counter-pulse scheme realizes a tightly programmed cell distribution. In order to reduce the effect of a large WL coupling, a glitch-canceling discharge scheme and a pre-offset control scheme is implemented. Furthermore, an external high-voltage supply scheme along with the proper protection scheme for a high-voltage failure is used to achieve low power consumption. The chip accomplishes 50 MB/s write throughput with 3 K endurance for typical embedded applications. Also, extended endurance of 35 K is achieved with 36 MB/s of write throughput for data center and enterprise SSD applications.", "title": "" }, { "docid": "1cefbe0177c56d92e34c4b5a88a29099", "text": "Typical tasks of future service robots involve grasping and manipulating a large variety of objects differing in size and shape. 
Generating stable grasps on 3D objects is considered to be a hard problem, since many parameters such as hand kinematics, object geometry, material properties and forces have to be taken into account. This results in a high-dimensional space of possible grasps that cannot be searched exhaustively. We believe that the key to find stable grasps in an efficient manner is to use a special representation of the object geometry that can be easily analyzed. In this paper, we present a novel grasp planning method that evaluates local symmetry properties of objects to generate only candidate grasps that are likely to be of good quality. We achieve this by computing the medial axis which represents a 3D object as a union of balls. We analyze the symmetry information contained in the medial axis and use a set of heuristics to generate geometrically and kinematically reasonable candidate grasps. These candidate grasps are tested for force-closure. We present the algorithm and show experimental results on various object models using an anthropomorphic hand of a humanoid robot in simulation.", "title": "" }, { "docid": "7ce9f8cbba0bf56e68443f1ed759b6d3", "text": "We present a Connected Learning Analytics (CLA) toolkit, which enables data to be extracted from social media and imported into a Learning Record Store (LRS), as defined by the new xAPI standard. A number of implementation issues are discussed, and a mapping that will enable the consistent storage and then analysis of xAPI verb/object/activity statements across different social media and online environments is introduced. A set of example learning activities are proposed, each facilitated by the Learning Analytics beyond the LMS that the toolkit enables.", "title": "" }, { "docid": "98465c0b863fd7eb07e7ba2596fb5dee", "text": "In this paper, multimodal Deep Boltzmann Machines (DBM) is employed to learn important genes (biomarkers) on gene expression data from human carcinoma colorectal. The learning process involves gene expression data and several patient phenotypes such as lymph node and distant metastasis occurrence. The proposed framework in this paper uses multimodal DBM to train records with metastasis occurrence. Later, the trained model is tested using records with no metastasis occurrence. After that, Mean Squared Error (MSE) is measured from the reconstructed and the original gene expression data. Genes are ranked based on the MSE value. The first gene has the highest MSE value. After that, k-means clustering is performed using various number of genes. Features that give the highest purity index are considered as the important genes. The important genes obtained from the proposed framework and two sample t-test are being compared. From the accuracy of metastasis classification, the proposed framework gives higher results compared to the top genes from two sample t-test.", "title": "" }, { "docid": "fd03cf7e243571e9b3e81213fe91fd29", "text": "Most real-world recommender services measure their performance based on the top-N results shown to the end users. Thus, advances in top-N recommendation have far-ranging consequences in practical applications. In this paper, we present a novel method, called Collaborative Denoising Auto-Encoder (CDAE), for top-N recommendation that utilizes the idea of Denoising Auto-Encoders. We demonstrate that the proposed model is a generalization of several well-known collaborative filtering models but with more flexible components. 
Thorough experiments are conducted to understand the performance of CDAE under various component settings. Furthermore, experimental results on several public datasets demonstrate that CDAE consistently outperforms state-of-the-art top-N recommendation methods on a variety of common evaluation metrics.", "title": "" }, { "docid": "ead196a54f4ea7b5a1fe4b5b85f0b2c6", "text": "Supervised machine learning and opinion lexicon are the most frequent approaches for opinion mining, but they require considerable effort to prepare the training data and to build the opinion lexicon, respectively. In this paper, a novel unsupervised clustering approach is proposed for opinion mining. Three swarm algorithms based on Particle Swarm Optimization are evaluated using three corpora with different levels of complexity with respect to size, number of opinions, domains, languages, and class balancing. K-means and Agglomerative clustering algorithms, as well as, the Artificial Bee Colony and Cuckoo Search swarm-based algorithms were selected for comparison. The proposed swarm-based algorithms achieved better accuracy using the word bigram feature model as the pre-processing technique, the Global Silhouette as optimization function, and on datasets with two classes: positive and negative. Although the swarm-based algorithms obtained lower result for datasets with three classes, they are still competitive considering that neither labeled data, nor opinion lexicons are required for the opinion clustering approach.", "title": "" }, { "docid": "0d57c3d4067d94f867e7e06becd48519", "text": "This thesis investigates the evolutionary plausibility of the Minimalist Program. Is such a theory of language reasonable given the assumption that the human linguistic capacity has been subject to the usual forces and processes of evolution? More generally, this thesis is a comment on the manner in which theories of language can and should be constrained. What are the constraints that must be taken into account when constructing a theory of language? These questions are addressed by applying evidence gathered in evolutionary biology to data from linguistics. The development of generative syntactic theorising in the late 20th century has led to a much redesigned conception of the human language faculty. The driving question ‘why is language the way it is?’ has prompted assumptions of simplicity, perfection, optimality, and economy for language; a minimal system operating in an economic fashion to fit into the larger cognitive architecture in a perfect manner. Studies in evolutionary linguistics, on the other hand, have been keen to demonstrate that language is complex, redundant, and adaptive, Pinker & Bloom’s (1990) seminal paper being perhaps the prime example of this. The question is whether these opposing views can be married in any way. Interdisciplinary evidence is brought to bear on this problem, demonstrating that any reconciliation is impossible. Evolutionary biology shows that perfection, simplicity, and economy do not arise in typically evolving systems, yet the Minimalist Program attaches these characteristics to language. It shows that evolvable systems exhibit degeneracy, modularity, and robustness, yet the Minimalist Program must rule these features out for language. 
It shows that evolution exhibits a trend towards complexity, yet the Minimalist Program excludes such a depiction of language.", "title": "" }, { "docid": "9ec2c66e67dd969e902b8db93f68dc61", "text": "The target in a tracking sequence can be considered as a set of spatiotemporal data with various locations in different frames, and the problem how to extract spatiotemporal information of the target effectively has drawn increasing interest recently. In this paper, we exploit spatiotemporal information by different-scale-context aggregation through the proposed pyramid multi-directional recurrent network (PRNet) together with the FlowNet. The PRNet is proposed to memorize the multi-scale spatiotemporal information of self-structure of the target. The FlowNet is employed to capture motion information for discriminating targets from the background. And the two networks form the FPRNet, being trained jointly to learn more useful spatiotemporal representations for visual tracking. The proposed tracker is evaluated on OTB50, OTB100 and TC128 benchmarks, and the experimental results show that the proposed FPRNet can effectively address different challenging cases and achieve better performance than the state-of-the-art trackers.", "title": "" }, { "docid": "bd3776d1dc36d6a91ea73d3c12ca326c", "text": "Spatial pyramid pooling module or encode-decoder structure are used in deep neural networks for semantic segmentation task. The former networks are able to encode multi-scale contextual information by probing the incoming features with filters or pooling operations at multiple rates and multiple effective fields-of-view, while the latter networks can capture sharper object boundaries by gradually recovering the spatial information. In this work, we propose to combine the advantages from both methods. Specifically, our proposed model, DeepLabv3+, extends DeepLabv3 by adding a simple yet effective decoder module to refine the segmentation results especially along object boundaries. We further explore the Xception model and apply the depthwise separable convolution to both Atrous Spatial Pyramid Pooling and decoder modules, resulting in a faster and stronger encoder-decoder network. We demonstrate the effectiveness of the proposed model on PASCAL VOC 2012 and Cityscapes datasets, achieving the test set performance of 89.0% and 82.1% without any post-processing. Our paper is accompanied with a publicly available reference implementation of the proposed models in Tensorflow at https://github.com/tensorflow/models/tree/master/research/deeplab.", "title": "" }, { "docid": "efc9991dfb514b5a8c84e5915a45e16a", "text": "In this paper, we propose a structure of the DLC (data link control) protocol layer, which consists of the functional component, with radio resource channel allocation method. It is operated by the state of current traffic volume for the efficiency of radio resource utilization. Different adequate components will be taken by the current traffic state, especially fraction based data transmission buffer control method for the QoS (quality of service) assurance", "title": "" }, { "docid": "8e19c3513be332705f4e2bf5a8aa4429", "text": "The introduction of crowdsourcing offers numerous business opportunities. In recent years, manifold forms of crowdsourcing have emerged on the market -- also in logistics. 
Thereby, the ubiquitous availability and sensor-supported assistance functions of mobile devices support crowdsourcing applications, which promotes contextual interactions between users at the right place at the right time. This paper presents the results of an in-depth-analysis on crowdsourcing in logistics in the course of ongoing research in the field of location-based crowdsourcing (LBCS). This paper analyzes LBCS for both, 'classic' logistics as well as 'information' logistics. Real-world examples of crowdsourcing applications are used to underpin the two evaluated types of logistics using crowdsourcing. Potential advantages and challenges of logistics with the crowd ('crowd-logistics') are discussed. Accordingly, this paper aims to provide the necessary basis for a novel interdisciplinary research field.", "title": "" }, { "docid": "916a76aa0c4209567a6309885e0b9b32", "text": "The term \"Industry 4.0\" symbolizes new forms of technology and artificial intelligence within production technologies. Smart robots are going to be the game changers within the factories of the future and will work with humans in indispensable teams within many processes. With this fourth industrial revolution, classical production lines are going through comprehensive modernization, e.g. in terms of in-the-box manufacturing, where humans and machines work side by side in so-called \"hybrid teams\". Questions about how to prepare for newly needed engineering competencies for the age of Industry 4.0, how to assess them and how to teach and train e.g. human-robot-teams have to be tackled in future engineering education. The paper presents theoretical aspects and empirical results of a series of studies, carried out to investigate the competencies of virtual collaboration and joint problem solving in virtual worlds.", "title": "" }, { "docid": "0374d93d82ec404b7beee18aaa9bfbf1", "text": "A grand challenge in reinforcement learning is intelligent exploration, especially when rewards are sparse or deceptive. Two Atari games serve as benchmarks for such hard-exploration domains: Montezuma’s Revenge and Pitfall. On both games, current RL algorithms perform poorly, even those with intrinsic motivation, which is the dominant method to encourage exploration and improve performance on hardexploration domains. To address this shortfall, we introduce a new algorithm called Go-Explore. It exploits the following principles: (1) remember states that have previously been visited, (2) first return to a promising state (without exploration), then explore from it, and (3) solve simulated environments through exploiting any available means (including by introducing determinism), then robustify (create a policy that can reliably perform the solution) via imitation learning. The combined effect of these principles generates dramatic performance improvements on hardexploration problems. On Montezuma’s Revenge, without being provided any domain knowledge, Go-Explore scores over 43,000 points, almost 4 times the previous state of the art. Go-Explore can also easily harness human-provided domain knowledge, and when augmented with it Go-Explore scores a mean of over 650,000 points on Montezuma’s Revenge. Its max performance of nearly 18 million surpasses the human world record by an order of magnitude, thus meeting even the strictest definition of “superhuman” performance. On Pitfall, Go-Explore with domain knowledge is the first algorithm to score above zero. 
Its mean performance of almost 60,000 points also exceeds expert human performance. Because Go-Explore can produce many high-performing demonstrations automatically and cheaply, it also outperforms previous imitation learning work in which the solution was provided in the form of a human demonstration. Go-Explore opens up many new research directions into improving it and weaving its insights into current RL algorithms. It may also enable progress on previously unsolvable hard-exploration problems in a variety of domains, especially the many that often harness a simulator during training (e.g. robotics).", "title": "" }, { "docid": "ca6b0e6e97054bf70cee8179114d94f1", "text": "Although the maximum transmission speed in IEEE 802.11a WLAN is 54 Mbps, the real throughput is actually limited to 20~30 Mbps. Except for the main effect from multi-path, we should also consider some non-ideal effects from imperfect hardware design, such as the IQ imbalance from direct conversion in RF front-end. IQ imbalance is not apparent in lower-order QAM modulation. However, in higher-order QAM modulation, it will become serious interference. In this paper, an IQ imbalance compensation circuit in IEEE802.11a baseband receiver is proposed. A low complexity time-domain compensation algorithm is used to replace the traditional high-order equalizer. MATLAB is used to simulate the whole transceiver including the channel model. After system verification, we use Verilog to implement the IQ imbalance compensation circuit with UMC 0.18 μm CMOS 1p6m technology. Post-layout simulation results show that this scheme contributes to a very robust and easily implemented OFDM WLAN receiver", "title": "" } ]
scidocsrr
6b209124aea4909a212524031a1c9bed
Circular markers for camera pose estimation
[ { "docid": "f2ab3ba4503f4c6173e3ea1d273791ac", "text": "Our starting point for developing the Studierstube system was the belief that augmented reality, the less obtrusive cousin of virtual reality, has a better chance of becoming a viable user interface for applications requiring manipulation of complex three-dimensional information as a daily routine. In essence, we are searching for a 3-D user interface metaphor as powerful as the desktop metaphor for 2-D. At the heart of the Studierstube system, collaborative augmented reality is used to embed computer-generated images into the real work environment. In the first part of this paper, we review the user interface of the initial Studierstube system, in particular the implementation of collaborative augmented reality, and the Personal Interaction Panel, a two-handed interface for interaction with the system. In the second part, an extended Studierstube system based on a heterogeneous distributed architecture is presented. This system allows the user to combine multiple approaches augmented reality, projection displays, and ubiquitous computingto the interface as needed. The environment is controlled by the Personal Interaction Panel, a twohanded, pen-and-pad interface that has versatile uses for interacting with the virtual environment. Studierstube also borrows elements from the desktop, such as multitasking and multi-windowing. The resulting software architecture is a user interface management system for complex augmented reality applications. The presentation is complemented by selected application examples.", "title": "" }, { "docid": "a122e4c4e59f39fddd18953ab71aaabd", "text": "A wearable low-power hybrid vision-inertial tracker has been demonstrated based on a flexible sensor fusion core architecture, which allows easy reconfiguration by plugging-in different kinds of sensors. A particular prototype implementation consists of one inertial measurement unit and one outward-looking wide-angle Smart Camera, with a built-in DSP to run all required image-processing tasks. The Smart Camera operates on newly designed 2-D bar-coded fiducials printed on a standard black-and-white printer. The fiducial design allows having thousands of different codes, thus enabling uninterrupted tracking throughout a large building or even a campus at very reasonable cost. The system operates in various real-world lighting conditions without any user intervention due to homomorphic image processing algorithms for extracting fiducials in the presence of very non-uniform lighting .", "title": "" } ]
[ { "docid": "97e2077fc8b801656f046f8619fe6647", "text": "In this paper we present a fairy tale corpus that was semantically organized and tagged. The proposed method uses latent semantic mapping to represent the stories and a top-n item-to-item recommendation algorithm to define clusters of similar stories. Each story can be placed in more than one cluster and stories in the same cluster are related to the same concepts. The results were manually evaluated regarding the groupings as perceived by human judges. The evaluation resulted in a precision of 0.81, a recall of 0.69, and an f-measure of 0.75 when using tf*idf for word frequency. Our method is topicand language-independent, and, contrary to traditional clustering methods, automatically defines the number of clusters based on the set of documents. This method can be used as a setup for traditional clustering or classification. The resulting corpus will be used for recommendation purposes, although it can also be used for emotion extraction, semantic role extraction, meaning extraction, text classification, among others.", "title": "" }, { "docid": "a1f2d91de4ba7899c03bfbe7a7a8f422", "text": "Pervasive gaming is a genre of gaming systematically blurring and breaking the traditional boundaries of game. The limits of the magic circle are explored in spatial, temporal and social dimensions. These ways of expanding the game are not new, since many intentional and unintentional examples of similar expansions can be found from earlier games, but the recently emerged fashion of pervasive gaming is differentiated with the use of these expansions in new, efficient ways to produce new kinds of gameplay experiences. These new game genres include alternate reality games, reality games, trans-reality games and crossmedia games.", "title": "" }, { "docid": "71b6f02598ac24efbc4625ca060f1bae", "text": "Estimates of the worldwide incidence and mortality from 27 cancers in 2008 have been prepared for 182 countries as part of the GLOBOCAN series published by the International Agency for Research on Cancer. In this article, we present the results for 20 world regions, summarizing the global patterns for the eight most common cancers. Overall, an estimated 12.7 million new cancer cases and 7.6 million cancer deaths occur in 2008, with 56% of new cancer cases and 63% of the cancer deaths occurring in the less developed regions of the world. The most commonly diagnosed cancers worldwide are lung (1.61 million, 12.7% of the total), breast (1.38 million, 10.9%) and colorectal cancers (1.23 million, 9.7%). The most common causes of cancer death are lung cancer (1.38 million, 18.2% of the total), stomach cancer (738,000 deaths, 9.7%) and liver cancer (696,000 deaths, 9.2%). Cancer is neither rare anywhere in the world, nor mainly confined to high-resource countries. Striking differences in the patterns of cancer from region to region are observed.", "title": "" }, { "docid": "874cff80953c4a1e929134ce59cb1fee", "text": "Automatically detecting controversy on the Web is a useful capability for a search engine to help users review web content with a more balanced and critical view. The current state-of-the art approach is to find K-Nearest-Neighbors in Wikipedia to the document query, and to aggregate their controversy scores that are automatically computed from the Wikipedia edit-history features. In this paper, we discover two major weakness in the prior work and propose modifications. 
First, the generated single query from document to find KNN Wikipages easily becomes ambiguous. Thus, we propose to generate multiple queries from smaller but more topically coherent paragraph of the document. Second, the automatically computed controversy scores of Wikipedia articles that depend on \"edit war\" features have a drawback that without an edit history, there can be no edit wars. To infer more reliable controversy scores for articles with little edit history, we smooth the original score from the scores of the neighbors with more established edit history. We show that the modified framework is improved by up to 5% for binary controversy classification in a publicly available dataset.", "title": "" }, { "docid": "ebf38b92c4d9337a2d651eb4f5f4c927", "text": "As intended by its name, physically unclonable functions (PUFs) are considered as an ultimate solution to deal with insecure storage, hardware counterfeiting, and many other security problems. However, many different successful attacks have already revealed vulnerabilities of certain digital intrinsic PUFs. This paper demonstrates that legacy arbiter PUF and its popular extended versions (i.e., feed-forward and XOR-enhanced) can be completely and linearly characterized by means of photonic emission analysis. Our experimental setup is capable of measuring every PUF internal delay with a resolution of 6 ps. Due to this resolution, we indeed require only the theoretical minimum number of linear independent equations (i.e., physical measurements) to directly solve the underlying inhomogeneous linear system. Moreover, it is not required to know the actual PUF responses for our physical delay extraction. We present our practical results for an arbiter PUF implementation on a complex programmable logic device manufactured with a 180 nm process. Finally, we give an insight into photonic emission analysis of arbiter PUF on smaller chip architectures by performing experiments on a field programmable gate array manufactured with a 60 nm process.", "title": "" }, { "docid": "58c5638fee085223f59162f36321c295", "text": "Although recent years have seen a surge of interest in the computational aspects of social choice, no specific attention has previously been devoted to elections with multiple winners, e.g., elections of an assembly or committee. In this paper, we characterize the worst-case complexity of manipulation and control in the context of four prominent multiwinner voting systems, under different formulations of the strategic agent’s goal.", "title": "" }, { "docid": "3b34e09d2b7109c9cbc8249aec3f23c2", "text": "The purpose of this paper is to explore the concept of brand equity and discuss its different perspectives, we try to review existing literature of brand equity and evaluate various Customer-based brand equity models to provide a collection from well-known databases for further research in this area.", "title": "" }, { "docid": "d95ae6900ae353fa0ed32167e0c23f16", "text": "As well known, fully convolutional network (FCN) becomes the state of the art for semantic segmentation in deep learning. Currently, new hardware designs for deep learning have focused on improving the speed and parallelism of processing units. This motivates memristive solutions, in which the memory units (i.e., memristors) have computing capabilities. However, designing a memristive deep learning network is challenging, since memristors work very differently from the traditional CMOS hardware. 
This paper proposes a complete solution to implement memristive FCN (MFCN). Voltage selectors are firstly utilized to realize max-pooling layers with the detailed MFCN deconvolution hardware circuit by the massively parallel structure, which is effective since the deconvolution kernel and the input feature are similar in size. Then, deconvolution calculation is realized by converting the image into a column matrix and converting the deconvolution kernel into a sparse matrix. Meanwhile, the convolution realization in MFCN is also studied with the traditional sliding window method rather than the large matrix theory to overcome the shortcoming of low efficiency. Moreover, the conductance values of memristors are predetermined in Tensorflow with ex-situ training method. In other words, we train MFCN in software, then download the trained parameters to the simulink system by writing memristor. The effectiveness of the designed MFCN scheme is verified with improved accuracy over some existing machine learning methods. The proposed scheme is also adapt to LFW dataset with three-classification tasks. However, the MFCN training is time consuming as the computational burden is heavy with thousands of weight parameters with just six layers. In future, it is necessary to sparsify the weight parameters and layers of the MFCN network to speed up computing.", "title": "" }, { "docid": "d58624091f0b7bdc307de1e7003cb82b", "text": "Rotor eddy current losses are one of the main reasons of permanent magnet demagnetization in high-speed permanent magnet machines. In this paper the rotor eddy current losses of high-speed permanent magnet machines with different slotless windings have been analysed. The analysis of the losses was performed using 2D and 3D analytical models. In the study, test machines with different windings and the same torque production capability have been analysed. Presented paper shows the dependency of rotor eddy current losses on sine- and square-wave PWM supply voltages and rotor sleeve properties. Several recommendations for reduction of rotor eddy current losses in high-speed permanent magnet machines are given.", "title": "" }, { "docid": "99de33cf9b9b1a4bc060c17e95d62b6e", "text": "Tracked vehicles have the advantage of stable locomotion on uneven terrain, and, as a result, such mechanisms are used for locomotion on outdoor robots, including those used for search and rescue. However, such mechanisms always slip when a tracked vehicle follows a curve, and the slippage generates large accumulated positioning errors in the vehicle compared with conventional wheeled mobile robots. To improve the accuracy of the odometry and enable a path-following control, the estimation of the track slippage is essential. In this paper, we propose an improved method of odometry for tracked vehicles to follow a straight line or a curve. In this method, the vehicle estimates the slip ratios using two encoders (attached to the actuators) and a gyro-sensor. Based on the improved odometry, the path-following control of tracked vehicles is significantly improved. The validity of the method was confirmed with experiments involving our tracked vehicle on several types of surfaces.", "title": "" }, { "docid": "46db4cfa5ccb08da3ca884ad794dc419", "text": "Mutation testing of Python programs raises a problem of incompetent mutants. Incompetent mutants cause execution errors due to inconsistency of types that cannot be resolved before run-time. 
We present a practical approach in which incompetent mutants can be generated, but the solution is transparent for a user and incompetent mutants are detected by a mutation system during test execution. Experiments with 20 traditional and object-oriented operators confirmed that the overhead can be accepted. The paper presents an experimental evaluation of the first- and higher-order mutation. Four algorithms to the 2nd and 3rd order mutant generation were applied. The impact of code coverage consideration on the process efficiency is discussed. The experiments were supported by the MutPy system for mutation testing of Python programs.", "title": "" }, { "docid": "72ee3bf58497eddeda11f19488fc8e55", "text": "People can benefit from disclosing negative emotions or stigmatized facets of their identities, and psychologists have noted that imagery can be an effective medium for expressing difficult emotions. Social network sites like Instagram offer unprecedented opportunity for image-based sharing. In this paper, we investigate sensitive self-disclosures on Instagram and the responses they attract. We use visual and textual qualitative content analysis and statistical methods to analyze self-disclosures, associated comments, and relationships between them. We find that people use Instagram to engage in social exchange and story-telling about difficult experiences. We find considerable evidence of social support, a sense of community, and little aggression or support for harmful or pro-disease behaviors. Finally, we report on factors that influence engagement and the type of comments these disclosures attract. Personal narratives, food and beverage, references to illness, and self-appearance concerns are more likely to attract positive social support. Posts seeking support attract significantly more comments. CAUTION: This paper includes some detailed examples of content about eating disorders and self-injury illnesses.", "title": "" }, { "docid": "3482354f79c4185ad9d63412184ddce4", "text": "In this paper we address the problem of learning the Markov blanket of a quantity from data in an efficient manner Markov blanket discovery can be used in the feature selection problem to find an optimal set of features for classification tasks, and is a frequently-used preprocessing phase in data mining, especially for high-dimensional domains. Our contribution is a novel algorithm for the induction of Markov blankets from data, called Fast-IAMB, that employs a heuristic to quickly recover the Markov blanket. Empirical results show that Fast-IAMB performs in many cases faster and more reliably than existing algorithms without adversely affecting the accuracy of the recovered Markov blankets.", "title": "" }, { "docid": "a90dd405d9bd2ed912cacee098c0f9db", "text": "Many telecommunication companies today have actively started to transform the way they do business, going beyond communication infrastructure providers are repositioning themselves as data-driven service providers to create new revenue streams. 
In this paper, we present a novel industrial application where a scalable Big data approach combined with deep learning is used successfully to classify massive mobile web log data, to get new aggregated insights on customer web behaviors that could be applied to various industry verticals.", "title": "" }, { "docid": "22e21aab5d41c84a26bc09f9b7402efa", "text": "Skeem for their thoughtful comments and suggestions.", "title": "" }, { "docid": "775899ca7173538516b891615fb2f523", "text": "An algorithm was developed through an evolution of refinements in surgical technique with the goal to minimize risk and morbidity in otoplasty. Key principles were avoidance of cartilage incisions and transections and the use of multiple surgical techniques to distribute the \"surgical load\" evenly among these techniques. The present retrospective study was designed to test safety and efficacy of the concept in 100 consecutive patients and to discuss the results in light of the literature. Data detailing the surgery, preoperative, and postoperative period were extracted from the record and during patient interviews. Patients were contacted to complete a questionnaire to rate the postoperative pain and their satisfaction with the final outcome on a 6-point visual analog scale (VAS). An expert and a lay panel assessed preoperative and postoperative frontal-view photographs, using the same VAS. Pain in the postoperative was rated as minor (pain level VAS average score, 2.33) and patients' satisfaction was excellent (satisfaction level VAS average score, 1.82). The assessment by the panels of expert and lay evaluators paralleled these outcomes with a postoperative average VAS score of 1.69 and 1.87, respectively. Cartilage incision and transection can be effectively avoided in otoplasty. Even distribution of the surgical load among multiple techniques avoids the problems associated with \"overload\" of a single technique. The innovative technique of cortical mastoid drill-out is described. High satisfaction with the results, excellent patient comfort, and a favorable safety profile are associated with the present algorithm.", "title": "" }, { "docid": "0df4c7d4d020b235ce22cdc22cdfcfb3", "text": "Self-organizing teams have been recognized and studied in various forms-as autonomous groups in socio-technical systems, enablers of organizational theories, agents of knowledge management, and as examples of complex-adaptive systems. Over the last decade, self-organizing teams have taken center stage in software engineering when they were incorporated as a hallmark of Agile methods. Despite the long and rich history of self-organizing teams and their recent popularity with Agile methods, there has been little research on the topic within software wngineering. Particularly, there is a dearth of research on how Agile teams organize themselves in practice. Through a Grounded Theory research involving 58 Agile practitioners from 23 software organizations in New Zealand and India over a period of four years, we identified informal, implicit, transient, and spontaneous roles that make Agile teams self-organizing. These roles-Mentor, Coordinator, Translator, Champion, Promoter, and Terminator-are focused toward providing initial guidance and encouraging continued adherence to Agile methods, effectively managing customer expectations and coordinating customer collaboration, securing and sustaining senior management support, and identifying and removing team members threatening the self-organizing ability of the team. 
Understanding these roles will help software development teams and their managers better comprehend and execute their roles and responsibilities as a self-organizing team.", "title": "" }, { "docid": "e5ab86a5e50a5aacabdf75dd5f90f365", "text": "In this paper, classification of traffic signs in Turkey with the help of their some features such as color and shape is explained. In the algorithm that is generated in MATLAB, firstly trafic signs are distinguished from the other objects of the image and then filtered by their colors. After filtering, edge detection is processed and then Hough Transform and SVM are used for shape classification.", "title": "" }, { "docid": "b5b26158a44457bb5e30eb26428d5cb7", "text": "In this paper we propose the utterance-level Permutation Invariant Training (uPIT) technique. uPIT is a practically applicable, end-to-end, deep learning based solution for speaker independent multi-talker speech separation. Specifically, uPIT extends the recently proposed Permutation Invariant Training (PIT) technique with an utterance-level cost function, hence eliminating the need for solving an additional permutation problem during inference, which is otherwise required by frame-level PIT. We achieve this using Recurrent Neural Networks (RNNs) that, during training, minimize the utterance-level separation error, hence forcing separated frames belonging to the same speaker to be aligned to the same output stream. In practice, this allows RNNs, trained with uPIT, to separate multi-talker mixed speech without any prior knowledge of signal duration, number of speakers, speaker identity or gender. We evaluated uPIT on the WSJ0 and Danish twoand three-talker mixed-speech separation tasks and found that uPIT outperforms techniques based on Non-negative Matrix Factorization (NMF) and Computational Auditory Scene Analysis (CASA), and compares favorably with Deep Clustering (DPCL) and the Deep Attractor Network (DANet). Furthermore, we found that models trained with uPIT generalize well to unseen speakers and languages. Finally, we found that a single model, trained with uPIT, can handle both two-speaker, and three-speaker speech mixtures.", "title": "" }, { "docid": "30decb72388cd024661c552670a28b11", "text": "The increasing volume and unstructured nature of data available on the World Wide Web (WWW) makes information retrieval a tedious and mechanical task. Lots of this information is not semantic driven, and hence not machine process able, but its only in human readable form. The WWW is designed to builds up a source of reference for web of meaning. Ontology information on different subjects spread globally is made available at one place. The Semantic Web (SW), moreover as an extension of WWW is designed to build as a foundation of vocabularies and effective communication of Semantics. The promising area of Semantic Web is logical and lexical semantics. Ontology plays a major role to represent information more meaningfully for humans and machines for its later effective retrieval. This paper constitutes the requisite with a unique approach for a representation and reasoning with ontology for semantic analysis of various type of document and also surveys multiple approaches for ontology learning that enables reasoning with uncertain, incomplete and contradictory information in a domain context.", "title": "" } ]
scidocsrr
e3d4a31f2814505c595e0ed7c8f5f23e
A new secure model for the use of cloud computing in big data analytics
[ { "docid": "fde3a2559dc66c18923f29350a005597", "text": "Motivated by privacy and usability requirements in various scenarios where existing cryptographic tools (like secure multi-party computation and functional encryption) are not adequate, we introduce a new cryptographic tool called Controlled Functional Encryption (C-FE). As in functional encryption, C-FE allows a user (client) to learn only certain functions of encrypted data, using keys obtained from an authority. However, we allow (and require) the client to send a fresh key request to the authority every time it wants to evaluate a function on a ciphertext. We obtain efficient solutions by carefully combining CCA2 secure public-key encryption (or rerandomizable RCCA secure public-key encryption, depending on the nature of security desired) with Yao's garbled circuit. Our main contributions in this work include developing and for- mally defining the notion of C-FE; designing theoretical and practical constructions of C-FE schemes achieving these definitions for specific and general classes of functions; and evaluating the performance of our constructions on various application scenarios.", "title": "" }, { "docid": "c0a05cad5021b1e779682b50a53f25fd", "text": "We initiate the formal study of functional encryption by giving precise definitions of the concept and its security. Roughly speaking, functional encryption supports restricted secret keys that enable a key holder to learn a specific function of encrypted data, but learn nothing else about the data. For example, given an encrypted program the secret key may enable the key holder to learn the output of the program on a specific input without learning anything else about the program. We show that defining security for functional encryption is non-trivial. First, we show that a natural game-based definition is inadequate for some functionalities. We then present a natural simulation-based definition and show that it (provably) cannot be satisfied in the standard model, but can be satisfied in the random oracle model. We show how to map many existing concepts to our formalization of functional encryption and conclude with several interesting open problems in this young area. ∗Supported by NSF, MURI, and the Packard foundation. †Supported by NSF CNS-0716199, CNS-0915361, and CNS-0952692, Air Force Office of Scientific Research (AFO SR) under the MURI award for “Collaborative policies and assured information sharing” (Project PRESIDIO), Department of Homeland Security Grant 2006-CS-001-000001-02 (subaward 641), and the Alfred P. Sloan Foundation.", "title": "" } ]
[ { "docid": "30fda7dabb70dffbf297096671802c93", "text": "Much attention has recently been given to a printing method because they are easily designable, have a low cost, and can be mass produced. Numerous electronic devices are fabricated using printing methods because of these advantages. In paper mechatronics, attempts have been made to fabricate robots by printing on paper substrates. The robots are given structures through self-folding and functions using printed actuators. We developed a new system and device to fabricate more sophisticated printed robots. First, we successfully fabricated complex self-folding structures by applying an automatic cutting. Second, a rapidly created and low-voltage electrothermal actuator was developed using an inkjet printed circuit. Finally, a printed robot was fabricated by combining two techniques and two types of paper; a structure design paper and a circuit design paper. Gripper and conveyor robots were fabricated, and their functions were verified. These works demonstrate the possibility of paper mechatronics for rapid and low-cost prototyping as well as of printed robots.", "title": "" }, { "docid": "d558db90f72342eae413ed7937e9120f", "text": "Latent Dirichlet Allocation (LDA) models trained without stopword removal often produce topics with high posterior probabilities on uninformative words, obscuring the underlying corpus content. Even when canonical stopwords are manually removed, uninformative words common in that corpus will still dominate the most probable words in a topic. In this work, we first show how the standard topic quality measures of coherence and pointwise mutual information act counter-intuitively in the presence of common but irrelevant words, making it difficult to even quantitatively identify situations in which topics may be dominated by stopwords. We propose an additional topic quality metric that targets the stopword problem, and show that it, unlike the standard measures, correctly correlates with human judgements of quality. We also propose a simple-to-implement strategy for generating topics that are evaluated to be of much higher quality by both human assessment and our new metric. This approach, a collection of informative priors easily introduced into most LDA-style inference methods, automatically promotes terms with domain relevance and demotes domain-specific stop words. We demonstrate this approach’s effectiveness in three very different domains: Department of Labor accident reports, online health forum posts, and NIPS abstracts. Overall we find that current practices thought to solve this problem do not do so adequately, and that our proposal offers a substantial improvement for those interested in interpreting their topics as objects in their own right.", "title": "" }, { "docid": "2ea2c86a3c23ff7238b13b0508a592a1", "text": "In earlier work we have introduced the “Recursive Sparse Blocks” (RSB) sparse matrix storage scheme oriented towards cache efficient matrix-vector multiplication (SpMV ) and triangular solution (SpSV ) on cache based shared memory parallel computers. Both the transposed (SpMV T ) and symmetric (SymSpMV ) matrix-vector multiply variants are supported. RSB stands for a meta-format: it recursively partitions a rectangular sparse matrix in quadrants; leaf submatrices are stored in an appropriate traditional format — either Compressed Sparse Rows (CSR) or Coordinate (COO). 
In this work, we compare the performance of our RSB implementation of SpMV, SpMV T, SymSpMV to that of the state-of-the-art Intel Math Kernel Library (MKL) CSR implementation on the recent Intel’s Sandy Bridge processor. Our results with a few dozens of real world large matrices suggest the efficiency of the approach: in all of the cases, RSB’s SymSpMV (and in most cases, SpMV T as well) took less than half of MKL CSR’s time; SpMV ’s advantage was smaller. Furthermore, RSB’s SpMV T is more scalable than MKL’s CSR, in that it performs almost as well as SpMV. Additionally, we include comparisons to the state-of-the art format Compressed Sparse Blocks (CSB) implementation. We observed RSB to be slightly superior to CSB in SpMV T, slightly inferior in SpMV, and better (in most cases by a factor of two or more) in SymSpMV. Although RSB is a non-traditional storage format and thus needs a special constructor, it can be assembled from CSR or any other similar rowordered representation arrays in the time of a few dozens of matrix-vector multiply executions. Thanks to its significant advantage over MKL’s CSR routines for symmetric or transposed matrix-vector multiplication, in most of the observed cases the assembly cost has been observed to amortize with fewer than fifty iterations.", "title": "" }, { "docid": "cc4548925973baa6220ad81082a93c86", "text": "Usually benefits for transportation investments are analysed within a framework of cost-benefit analysis or its related techniques such as financial analysis, cost-effectiveness analysis, life-cycle costing, economic impact analysis, and others. While these tools are valid techniques in general, their application to intermodal transportation would underestimate the overall economic impact by missing important aspects of productivity enhancement. Intermodal transportation is an example of the so-called general purpose technologies (GPTs) that are characterized by statistically significant spillover effects. Diffusion, secondary innovations, and increased demand for specific human capital are basic features of GPTs. Eventually these features affect major macroeconomic variables, especially productivity. Recent economic literature claims that in order to study GPTs, micro and macro evidence should be combined to establish a better understanding of the connecting mechanisms from the micro level to the overall performance of an economy or the macro level. This study analyses these issues with respect to intermodal transportation. The goal is to understand the basic micro and macro mechanisms behind intermodal transportation in order to further develop a rigorous framework for evaluation of benefits from intermodal transportation. In doing so, lessons from computer simulation of the basic features of intermodal transportation are discussed and conclusions are made regarding an agenda for work in the field. 1 Dr. Yuri V. Yevdokimov, Assistant Professor of Economics and Civil Engineering, University of New Brunswick, Canada, Tel. (506) 447-3221, Fax (506) 453-4514, E-mail: yuri@unb.ca Introduction Intermodal transportation can be thought of as a process for transporting freight and passengers by means of a system of interconnected networks, involving various combinations of modes of transportation, in which all of the components are seamlessly linked and efficiently combined. 
Intermodal transportation is rapidly gaining acceptance as an integral component of the systems approach of conducting business in an increasingly competitive and interdependent global economy. For example, the United States Code with respect to transportation states: AIt is the policy of the United States Government to develop a National Intermodal Transportation System that is economically efficient and environmentally sound, provides the foundation for the United States to compete in the global economy and will move individuals and property in an energy efficient way. The National Intermodal Transportation System shall consist of all forms of transportation in a unified, interconnected manner, including the transportation systems of the future, to reduce energy consumption and air pollution while promoting economic development and supporting the United States= pre-eminent position in international commerce.@ (49 USC, Ch. 55, Sec. 5501, 1998) David Collenette (1997), the Transport Minister of Canada, noted: AWith population growth came development, and the relative advantages and disadvantages of the different modes changed as the transportation system became more advanced.... Intermodalism today is about safe, efficient transportation by the most appropriate combination of modes.@ (The Summit on North American Intermodal Transportation, 1997) These statements define intermodal transportation as a macroeconomic concept, because an effective transportation system is a vital factor in assuring the efficiency of an economic system as a whole. Moreover, intermodal transportation is an important socio-economic phenomenon which implies that the benefits of intermodal transportation have to be evaluated at the macroeconomic level, or at least at the regional level, involving all elements of the economic system that gain from having a more efficient transportation network in place. Defining Economic Benefits of Intermodal Transportation Traditionally, the benefits of a transportation investment have been primarily evaluated through reduced travel time and reduced vehicle maintenance and operation costs. However, according to Weisbrod and Treyz (1998), such methods underestimate the total benefits of transportation investment by Amissing other important aspects of productivity enhancement.@ It is so because transportation does not have an intrinsic purpose in itself and is rather intended to enable other economic activities such as production, consumption, leisure, and dissemination of knowledge to take place. Hence, in order to measure total economic benefits of investing in intermodal transportation, it is necessary to understand their basic relationships with different economic activities. Eventually, improvements in transportation reduce transportation costs. The immediate benefit of the reduction is the fall in total cost of production in an economic system under study which results in growth of the system=s output. This conclusion has been known in economic development literature since Tinbergen=s paper in 1957 (Tinbergen, 1957). However, the literature does not explicitly identify why transportation costs will fall. This issue is addressed in this discussion with respect to intermodal transportation. Transportation is a multiple service to multiple users. It is produced in transportation networks that provide infrastructure for economic activities. It appears that transportation networks have economies of scale. 
As discussed below, intermodal transportation magnifies these scale effects resulting in increasing returns to scale (IRS) of a specific nature. It implies that there are positive externalities that arise because of the scale effects, externalities that can initiate cumulative economic growth at the regional level as well as at the national level (see, for example, Brathen and Hervick, 1997, and Hussain and Westin, 1997). The phenomenon is known as a spill-over effect. Previously the effect has been evaluated through the contribution of transportation infrastructure investment to economic growth. Since Auschauer=s (1989) paper many economists have found evidence of such a contribution (see, for example, Bonaglia and Ferrara, 2000 and Khanam, 1996). Intermodal transportation as it was defined at the very beginning is more than mere improvements in transportation infrastructure. From a theoretical standpoint, it posseses some characteristics of the general-purpose technologies (GPT), and it seems appropriate to regard it as an example of the GPT, which is discussed below. It appears reasonable to study intermodal transportation as a two-way improvement of an economic system=s productivity. On the one hand, it improves current operational functions of the system. On the other hand, it expands those functions. Both improvements are achieved by consolidating different transportation systems into a seamless transportation network that utilizes the comparative advantages of different transportation modes. Improvements due to intermodal transportation are associated with the increased productivity of transportation services and a reduction in logistic costs. The former results in an increased volume of transportation per unit cost, while the latter directly reduces costs of commodity production. Expansion of the intermodal transportation network is associated with economies of scale and better accessibility to input and output markets. The overall impact of intermodal transportation can be divided into four elements: (i) an increase in the volume of transportation in an existing transportation network; (ii) a reduction in logistic costs of current operations; (iii) the economies of scale associated with transportation network expansion; (iv) better accessibility to input and output markets. These four elements are discussed below in a sequence. Increase in volume of transportation in the existing network An increase in volume of transportation can lead to economies of density a specific scale effect. The economies of density exist if an increase in the volume of transportation in the network does not require a proportional increase in all inputs of the network. Usually the phenomenon is associated with an increase in the frequency of transportation (traffic) within the existing network (see Boyer, 1998 for a formal definition, Ciccone and Hall, 1996 for general discussion of economies of density, and Fujii, Im and Mak, 1992 for examples of economies of density in transportation). In the case of intermodal transportation, economies of density are achieved through cargo containerization, cargo consolidation and computer-guiding systems at intermodal facilities. Cargo containerization and consolidation result in an increased load factor of transportation vehicles and higher capacity utilization of the transportation fixed facilities, while utilization of computer-guiding systems results in higher labour productivity. 
For instance, in 1994 Burlington Northern Santa Fe Railway (BNSF) introduced the Alliance Intermodal Facility at Fort Worth, Texas, into its operations between Chicago and Los Angeles. According to OmniTRAX specialists, who operates the facility, BNSF has nearly doubled its volume of throughput at the intermodal facility since 1994. First, containerization of commodities being transported plus hubbing or cargo consolidation at the intermodal facility resulted in longer trains with higher frequency. Second, all day-to-day operations at the intermodal facility are governed by the Optimization Alternatives Strategic Intermodal Scheduler (OASIS) computer system, which allowed BNSF to handle more operations with less labour. Reduction in Logistic Costs Intermodal transportation is characterized by optimal frequency of service and modal choice and increased reliability. Combined, these two features define the just-in-time delivery -a major service produced by intermodal transportation. Furthermore, Blackburn (1991) argues that just-in-time d", "title": "" }, { "docid": "a41bb1fe5670cc865bf540b34848f45f", "text": "The general idea of discovering knowledge in large amounts of data is both appealing and intuitive. Typically we focus our attention on learning algorithms, which provide the core capability of generalizing from large numbers of small, very specific facts to useful high-level rules; these learning techniques seem to hold the most excitement and perhaps the most substantive scientific content in the knowledge discovery in databases (KDD) enterprise. However, when we engage in real-world discovery tasks, we find that they can be extremely complex, and that induction of rules is only one small part of the overall process. While others have written overviews of \"the concept of KDD, and even provided block diagrams for \"knowledge discovery systems,\" no one has begun to identify all of the building blocks in a realistic KDD process. This is what we attempt to do here. Besides bringing into the discussion several parts of the process that have received inadequate attention in the KDD community, a careful elucidation of the steps in a realistic knowledge discovery process can provide a framework for comparison of different technologies and tools that are almost impossible to compare without a clean model.", "title": "" }, { "docid": "c5ffd6108b05b27172d92ee578437859", "text": "Fairness in machine learning has predominantly been studied in static classification settings without concern for how decisions change the underlying population over time. Conventional wisdom suggests that fairness criteria promote the longterm well-being of those groups they aim to protect. We study how static fairness criteria interact with temporal indicators of well-being, such as long-term improvement, stagnation, and decline in a variable of interest. We demonstrate that even in a one-step feedback model, common fairness criteria in general do not promote improvement over time, and may in fact cause harm in cases where an unconstrained objective would not. We completely characterize the delayed impact of three standard criteria, contrasting the regimes in which these exhibit qualitatively different behavior. In addition, we find that a natural form of measurement error broadens the regime in which fairness criteria perform favorably. 
Our results highlight the importance of measurement and temporal modeling in the evaluation of fairness criteria, suggesting a range of new challenges and trade-offs.", "title": "" }, { "docid": "4703b02dc285a55002f15d06d98251e7", "text": "Nowadays, most Photovoltaic installations are grid connected system. From distribution system point of view, the main point and concern related to PV grid-connected are overvoltage or overcurrent in the distribution network. This paper describes the simulation study which focuses on ferroresonance phenomenon of PV system on lower side of distribution transformer. PSCAD program is selected to simulate the ferroresonance phenomenon in this study. The example of process that creates ferroresonance by the part of PV system and ferroresonance effect will be fully described in detail.", "title": "" }, { "docid": "03764875c88a1480264050b0b0a16437", "text": "Social media anomaly detection is of critical importance to prevent malicious activities such as bullying, terrorist attack planning, and fraud information dissemination. With the recent popularity of social media, new types of anomalous behaviors arise, causing concerns from various parties. While a large amount of work have been dedicated to traditional anomaly detection problems, we observe a surge of research interests in the new realm of social media anomaly detection. In this paper, we present a survey on existing approaches to address this problem. We focus on the new type of anomalous phenomena in the social media and review the recent developed techniques to detect those special types of anomalies. We provide a general overview of the problem domain, common formulations, existing methodologies and potential directions. With this work, we hope to call out the attention from the research community on this challenging problem and open up new directions that we can contribute in the future.", "title": "" }, { "docid": "bffd230e76ec32eefe70904a9290bf41", "text": "This paper introduces a new idea in describing people using their first names, i.e., the name assigned at birth. We show that describing people in terms of similarity to a vector of possible first names is a powerful description of facial appearance that can be used for face naming and building facial attribute classifiers. We build models for 100 common first names used in the United States and for each pair, construct a pair wise first-name classifier. These classifiers are built using training images downloaded from the Internet, with no additional user interaction. This gives our approach important advantages in building practical systems that do not require additional human intervention for labeling. We use the scores from each pair wise name classifier as a set of facial attributes. We show several surprising results. Our name attributes predict the correct first names of test faces at rates far greater than chance. The name attributes are applied to gender recognition and to age classification, outperforming state-of-the-art methods with all training images automatically gathered from the Internet.", "title": "" }, { "docid": "31e052aaf959a4c5d6f1f3af6587d6cd", "text": "We introduce a learning framework called learning using privileged information (LUPI) to the computer vision field. We focus on the prototypical computer vision problem of teaching computers to recognize objects in images. We want the computers to be able to learn faster at the expense of providing extra information during training time. 
As additional information about the image data, we look at several scenarios that have been studied in computer vision before: attributes, bounding boxes and image tags. The information is privileged as it is available at training time but not at test time. We explore two maximum-margin techniques that are able to make use of this additional source of information, for binary and multiclass object classification. We interpret these methods as learning easiness and hardness of the objects in the privileged space and then transferring this knowledge to train a better classifier in the original space. We provide a thorough analysis and comparison of information transfer from privileged to the original data spaces for both LUPI methods. Our experiments show that incorporating privileged information can improve the classification accuracy. Finally, we conduct user studies to understand which samples are easy and which are hard for human learning, and explore how this information is related to easy and hard samples when learning a classifier.", "title": "" }, { "docid": "fb1092ee4fe5f29394148ae0b134dd08", "text": "The landscape of online learning has evolved in a synchronous fashion with the development of the every-growing repertoire of technologies, especially with the recent addition of Massive Online Open Courses (MOOCs). Since MOOC platforms allow thousands of students to participate at the same time, MOOC participants can have fairly varied motivation. Meanwhile, a low course completion rate has been observed across different MOOC platforms. The first and initiated stage of the proposed research here is a preliminary attempt to study how different motivational aspects of MOOC learners correlate with course participation and completion, with motivation measured using a survey and participation measured using log analytics. The exploratory stage of the study has been conducted within the context of an educational data mining MOOC, within Coursera. In the long run, research results can be expected to inform future interventions, and the design of MOOCs, as well as increasing understanding of the emergent needs of MOOC learners as data collection extends beyond the current scope by incorporating wider disciplinary areas.", "title": "" }, { "docid": "7e4b20e6fe3030fecbd05d37dc079d63", "text": "Women's reproductive fertility peaks for a few days in the middle of their cycle around ovulation. Because conception is most likely to occur inside this brief fertile window, evolutionary theories suggest that men possess adaptations designed to maximize their reproductive success by mating with women during their peak period of fertility. In this article, we provide evidence from 3 studies that subtle cues of fertility prime mating motivation in men, thus facilitating psychological and behavioral processes associated with the pursuit of a sexual partner. In Study 1, men exposed to the scent of a woman near peak levels of fertility displayed increased accessibility to sexual concepts. Study 2 demonstrated that, among men who reported being sensitive to odors, scent cues of fertility triggered heightened perceptions of women's sexual arousal. Study 3 revealed that, in a face-to-face interaction, high levels of female fertility were associated with a greater tendency for men to make risky decisions and to behaviorally mimic a female partner. Hence, subtle cues of fertility led to a cascade of mating-related processes-from lower order cognition to overt behavior-that reflected heightened mating motivation. 
Implications for theories of goal pursuit, romantic attraction, and evolutionary psychology are discussed.", "title": "" }, { "docid": "6090d8c6e8ef8532c5566908baa9a687", "text": "Cardiovascular diseases (CVD) are known to be the most widespread causes to death. Therefore, detecting earlier signs of cardiac anomalies is of prominent importance to ease the treatment of any cardiac complication or take appropriate actions. Electrocardiogram (ECG) is used by doctors as an important diagnosis tool and in most cases, it's recorded and analyzed at hospital after the appearance of first symptoms or recorded by patients using a device named holter ECG and analyzed afterward by doctors. In fact, there is a lack of systems able to capture ECG and analyze it remotely before the onset of severe symptoms. With the development of wearable sensor devices having wireless transmission capabilities, there is a need to develop real time systems able to accurately analyze ECG and detect cardiac abnormalities. In this paper, we propose a new CVD detection system using Wireless Body Area Networks (WBAN) technology. This system processes the captured ECG using filtering and Undecimated Wavelet Transform (UWT) techniques to remove noises and extract nine main ECG diagnosis parameters, then the system uses a Bayesian Network Classifier model to classify ECG based on its parameters into four different classes: Normal, Premature Atrial Contraction (PAC), Premature Ventricular Contraction (PVC) and Myocardial Infarction (MI). The experimental results on ECGs from real patients databases show that the average detection rate (TPR) is 96.1% for an average false alarm rate (FPR) of 1.3%.", "title": "" }, { "docid": "04cc398c2a95119b4af7e0351d1d798a", "text": "A 16-year-old boy presented to the Emergency Department having noted the pictured skin markings on his left forearm several hours earlier. He stated that the markings had not been present earlier that afternoon, and had remained unchanged since first noted after track and field practice. There was no history of trauma, ingestions, or any systemic symptoms. The markings were neither tender nor pruritic. His parents denied any family history of malignancy. Physical examination revealed the raised black markings with minimal surrounding erythema, as seen in Figure 1. The rest of the dermatologic and remaining physical examinations were, and remained, unremarkable.", "title": "" }, { "docid": "dc4aba1d336c602b896fbff3e614be39", "text": "Requirements in computational power have grown dramatically in recent years. This is also the case in many language processing tasks, due to the overwhelming and ever increasing amount of textual information that must be processed in a reasonable time frame. This scenario has led to a paradigm shift in the computing architectures and large-scale data processing strategies used in the Natural Language Processing field. In this paper we present a new distributed architecture and technology for scaling up text analysis running a complete chain of linguistic processors on several virtual machines. Furthermore, we also describe a series of experiments carried out with the goal of analyzing the scaling capabilities of the language processing pipeline used in this setting. We explore the use of Storm in a new approach for scalable distributed language processing across multiple machines and evaluate its effectiveness and efficiency when processing documents on a medium and large scale. 
The experiments have shown that there is a big room for improvement regarding language processing performance when adopting parallel architectures, and that we might expect even better results with the use of large clusters with many processing", "title": "" }, { "docid": "92c72aa180d3dccd5fcc5504832780e9", "text": "The site of S1-S2 root activation following percutaneous high-voltage electrical (ES) and magnetic stimulation were located by analyzing the variations of the time interval from M to H soleus responses elicited by moving the stimulus point from lumbar to low thoracic levels. ES was effective in activating S1-S2 roots at their origin. However supramaximal motor root stimulation required a dorsoventral montage, the anode being a large, circular surface electrode placed ventrally, midline between the apex of the xiphoid process and the umbilicus. Responses to magnetic stimuli always resulted from the activation of a fraction of the fiber pool, sometimes limited to the low-thresholds afferent component, near its exit from the intervertebral foramina, or even more distally. Normal values for conduction velocity in motor and 1a afferent fibers in the proximal nerve tract are provided.", "title": "" }, { "docid": "69b909b2aaa2d79b71c1fb4c4ac15724", "text": "Chronic musculoskeletal pain (CMP) is one of the main reasons for referral to a pediatric rheumatologist and is the third most common cause of chronic pain in children and adolescents. Causes of CMP include amplified musculoskeletal pain, benign limb pain of childhood, hypermobility, overuse syndromes, and back pain. CMP can negatively affect physical, social, academic, and psychological function so it is essential that clinicians know how to diagnose and treat these conditions. This article provides an overview of the epidemiology and impact of CMP, the steps in a comprehensive pain assessment, and the management of the most common CMPs.", "title": "" }, { "docid": "0d82a64bdcc3ca4c0522ca7c945b1d55", "text": "Thin sheets have long been known to experience an increase in stiffness when they are bent, buckled, or assembled into smaller interlocking structures. We introduce a unique orientation for coupling rigidly foldable origami tubes in a \"zipper\" fashion that substantially increases the system stiffness and permits only one flexible deformation mode through which the structure can deploy. The flexible deployment of the tubular structures is permitted by localized bending of the origami along prescribed fold lines. All other deformation modes, such as global bending and twisting of the structural system, are substantially stiffer because the tubular assemblages are overconstrained and the thin sheets become engaged in tension and compression. The zipper-coupled tubes yield an unusually large eigenvalue bandgap that represents the unique difference in stiffness between deformation modes. Furthermore, we couple compatible origami tubes into a variety of cellular assemblages that can enhance mechanical characteristics and geometric versatility, leading to a potential design paradigm for structures and metamaterials that can be deployed, stiffened, and tuned. The enhanced mechanical properties, versatility, and adaptivity of these thin sheet systems can provide practical solutions of varying geometric scales in science and engineering.", "title": "" }, { "docid": "b9065d678b3a9aab8d9f98d7367ad7bb", "text": "Ms. 
Pac-Man is a challenging, classic arcade game that provides an interesting platform for Artificial Intelligence (AI) research. This paper reports the first Monte-Carlo approach to develop a ghost avoidance module of an intelligent agent that plays the game. Our experimental results show that the look-ahead ability of Monte-Carlo simulation often prevents Ms. Pac-Man being trapped by ghosts and reduces the chance of losing Ms. Pac-Man's life significantly. Our intelligent agent has achieved a high score of around 21,000. It is sometimes capable of clearing the first three stages and playing at the level of a novice human player.", "title": "" }, { "docid": "6b27ae277c5ec0fb74d89a13dbba473d", "text": "This article surveys recent work in active learning aimed at making it more practical for real-world use. In general, active learning systems aim to make machine learning more economical, since they can participate in the acquisition of their own training data. An active learner might iteratively select informative query instances to be labeled by an oracle, for example. Work over the last two decades has shown that such approaches are effective at maintaining accuracy while reducing training set size in many machine learning applications. However, as we begin to deploy active learning in real ongoing learning systems and data annotation projects, we are encountering unexpected problems—due in part to practical realities that violate the basic assumptions of earlier foundational work. I review some of these issues, and discuss recent work being done to address the challenges.", "title": "" } ]
scidocsrr
45e34aaac34f32c5fd26e4e609edfb13
Content-Adaptive Sketch Portrait Generation by Decompositional Representation Learning
[ { "docid": "2729af242339c8cbc51f49047ed9d049", "text": "We address the problem of interactive facial feature localization from a single image. Our goal is to obtain an accurate segmentation of facial features on high-resolution images under a variety of pose, expression, and lighting conditions. Although there has been significant work in facial feature localization, we are addressing a new application area, namely to facilitate intelligent high-quality editing of portraits, that brings requirements not met by existing methods. We propose an improvement to the Active Shape Model that allows for greater independence among the facial components and improves on the appearance fitting step by introducing a Viterbi optimization process that operates along the facial contours. Despite the improvements, we do not expect perfect results in all cases. We therefore introduce an interaction model whereby a user can efficiently guide the algorithm towards a precise solution. We introduce the Helen Facial Feature Dataset consisting of annotated portrait images gathered from Flickr that are more diverse and challenging than currently existing datasets. We present experiments that compare our automatic method to published results, and also a quantitative evaluation of the effectiveness of our interactive method.", "title": "" }, { "docid": "e5eb7c604a2793dbbaef6ca9473f8350", "text": "This paper presents a hierarchical-compositional model of human faces, as a three-layer AND-OR graph to account for the structural variabilities over multiple resolutions. In the AND-OR graph, an AND-node represents a decomposition of certain graphical structure, which expands to a set of OR-nodes with associated relations; an OR-node serves as a switch variable pointing to alternative AND-nodes. Faces are then represented hierarchically: The first layer treats each face as a whole, the second layer refines the local facial parts jointly as a set of individual templates, and the third layer further divides the face into 15 zones and models detail facial features such as eye corners, marks, or wrinkles. Transitions between the layers are realized by measuring the minimum description length (MDL) given the complexity of an input face image. Diverse face representations are formed by drawing from dictionaries of global faces, parts, and skin detail features. A sketch captures the most informative part of a face in a much more concise and potentially robust representation. However, generating good facial sketches is extremely challenging because of the rich facial details and large structural variations, especially in the high-resolution images. The representing power of our generative model is demonstrated by reconstructing high-resolution face images and generating the cartoon facial sketches. Our model is useful for a wide variety of applications, including recognition, nonphotorealisitc rendering, superresolution, and low-bit rate face coding.", "title": "" } ]
[ { "docid": "6c0f3240b86677a0850600bf68e21740", "text": "In this article, we revisit two popular convolutional neural networks in person re-identification (re-ID): verification and identification models. The two models have their respective advantages and limitations due to different loss functions. Here, we shed light on how to combine the two models to learn more discriminative pedestrian descriptors. Specifically, we propose a Siamese network that simultaneously computes the identification loss and verification loss. Given a pair of training images, the network predicts the identities of the two input images and whether they belong to the same identity. Our network learns a discriminative embedding and a similarity measurement at the same time, thus taking full usage of the re-ID annotations. Our method can be easily applied on different pretrained networks. Albeit simple, the learned embedding improves the state-of-the-art performance on two public person re-ID benchmarks. Further, we show that our architecture can also be applied to image retrieval. The code is available at https://github.com/layumi/2016_person_re-ID.", "title": "" }, { "docid": "ba7701a94880b59bbbd49fbfaca4b8c3", "text": "Many rural roads lack sharp, smoothly curving edges and a homogeneous surface appearance, hampering traditional vision-based road-following methods. However, they often have strong texture cues parallel to the road direction in the form of ruts and tracks left by other vehicles. This paper describes an unsupervised algorithm for following ill-structured roads in which dominant texture orientations computed with Gabor wavelet filters vote for a consensus road vanishing point location. The technique is first described for estimating the direction of straight-road segments, then extended to curved and undulating roads by tracking the vanishing point indicated by a differential “strip” of voters moving up toward the nominal vanishing line. Finally, the vanishing point is used to constrain a search for the road boundaries by maximizing textureand color-based region discriminant functions. Results are shown for a variety of road scenes including gravel roads, dirt trails, and highways.", "title": "" }, { "docid": "5e63c7f6d86b634d8a2b7e0746eaa0d2", "text": "A famous theorem of Szemerédi asserts that given any density 0 < δ ≤ 1 and any integer k ≥ 3, any set of integers with density δ will contain infinitely many proper arithmetic progressions of length k. For general k there are essentially four known proofs of this fact; Szemerédi’s original combinatorial proof using the Szemerédi regularity lemma and van der Waerden’s theorem, Furstenberg’s proof using ergodic theory, Gowers’ proof using Fourier analysis and the inverse theory of additive combinatorics, and the more recent proofs of Gowers and Rödl-Skokan using a hypergraph regularity lemma. Of these four, the ergodic theory proof is arguably the shortest, but also the least elementary, requiring passage (via the Furstenberg correspondence principle) to an infinitary measure preserving system, and then decomposing a general ergodic system relative to a tower of compact extensions. Here we present a quantitative, self-contained version of this ergodic theory proof, and which is “elementary” in the sense that it does not require the axiom of choice, the use of infinite sets or measures, or the use of the Fourier transform or inverse theorems from additive combinatorics. 
It also gives explicit (but extremely poor) quantitative bounds.", "title": "" }, { "docid": "b752e7513d4acbd0a0cd8991022f093e", "text": "One common strategy for dealing with large, complex models is to partition them into pieces that are easier to handle. While decomposition into convex components results in pieces that are easy to process, such decompositions can be costly to construct and often result in representations with an unmanageable number of components. In this paper, we propose an alternative partitioning strategy that decomposes a given polyhedron into “approximately convex” pieces. For many applications, the approximately convex components of this decomposition provide similar benefits as convex components, while the resulting decomposition is both significantly smaller and can be computed more efficiently. Indeed, for many models, an approximate convex decomposition can more accurately represent the important structural features of the model by providing a mechanism for ignoring insignificant features, such as wrinkles and other surface texture. We propose a simple algorithm to compute approximate convex decompositions of polyhedra of arbitrary genus to within a user specified tolerance. This algorithm measures the significance of the model’s features and resolves them in order of priority. As a by-product, it also produces an elegant hierarchical representation of the model. We illustrate its utility in constructing an approximate skeleton of the model that results in significant performance gains over skeletons based on an exact convex decomposition. Figure 1: Each component is approximately convex (concavity less than 10 by our measure). There are a total of 17 components.", "title": "" }, { "docid": "b623437391b298c2e618b0f42d3e19a9", "text": "In the era of the Social Web, crowdfunding has become an increasingly more important channel for entrepreneurs to raise funds from the crowd to support their startup projects. Previous studies examined various factors such as project goals, project durations, and categories of projects that might influence the outcomes of the fund raising campaigns. However, textual information of projects has rarely been studied for analyzing crowdfunding successes. The main contribution of our research work is the design of a novel text analytics-based framework that can extract latent semantics from the textual descriptions of projects to predict the fund raising outcomes of these projects. More specifically, we develop the Domain-Constraint Latent Dirichlet Allocation (DC-LDA) topic model for effective extraction of topical features from texts. Based on two real-world crowdfunding datasets, our experimental results reveal that the proposed framework outperforms a classical LDA-based method in predicting fund raising success by an average of 11% in terms of F1 score. The managerial implication of our research is that entrepreneurs can apply the proposed methodology to identify the most influential topical features embedded in project descriptions,
and hence to better promote their projects and improve the chance of raising sufficient funds for their projects.", "title": "" }, { "docid": "cffdc6b7698fc29893199fdde061f30b", "text": "Language-users reduce words in predictable contexts. Previous research indicates that reduction may be stored in lexical representation if a word is often reduced. Because representation influences production regardless of context, production should be biased by how often each word has been reduced in the speaker's prior experience. This study investigates whether speakers have a context-independent bias to reduce low-informativity words, which are usually predictable and therefore usually reduced. Content word durations were extracted from the Buckeye and Switchboard speech corpora, and analyzed for probabilistic reduction effects using a language model based on spontaneous speech in the Fisher corpus. The analysis supported the hypothesis: low-informativity words have shorter durations, even when the effects of local contextual predictability, frequency, speech rate, and several other variables are controlled for. Additional models that compared word types against only other words of the same segmental length further supported this conclusion. Words that usually appear in predictable contexts are reduced in all contexts, even those in which they are unpredictable. The result supports representational models in which reduction is stored, and where sufficiently frequent reduction biases later production. The finding provides new evidence that probabilistic reduction interacts with lexical representation.", "title": "" }, { "docid": "9f746a67a960b01c9e33f6cd0fcda450", "text": "Motivated by large-scale multimedia applications we propose to learn mappings from high-dimensional data to binary codes that preserve semantic similarity. Binary codes are well suited to large-scale applications as they are storage efficient and permit exact sub-linear kNN search. The framework is applicable to broad families of mappings, and uses a flexible form of triplet ranking loss. We overcome discontinuous optimization of the discrete mappings by minimizing a piecewise-smooth upper bound on empirical loss, inspired by latent structural SVMs. We develop a new loss-augmented inference algorithm that is quadratic in the code length. We show strong retrieval performance on CIFAR-10 and MNIST, with promising classification results using no more than kNN on the binary codes.", "title": "" }, { "docid": "e6bbe7de06295817435acafbbb7470cc", "text": "Cortical circuits work through the generation of coordinated, large-scale activity patterns. In sensory systems, the onset of a discrete stimulus usually evokes a temporally organized packet of population activity lasting ∼50–200 ms. The structure of these packets is partially stereotypical, and variation in the exact timing and number of spikes within a packet conveys information about the identity of the stimulus. Similar packets also occur during ongoing stimuli and spontaneously. We suggest that such packets constitute the basic building blocks of cortical coding.", "title": "" }, { "docid": "e7095fc3fe4bd91ac7da71fe48d40426", "text": "Anaphora Resolution in Danish", "title": "" }, { "docid": "354089f03ce4b80deb11f0d8c60efc44", "text": "Digitization of music has led to easier access to different forms of music across the globe.
Increasing work pressure denies the necessary time to listen and evaluate music for a creation of a personal music library. One solution might be developing a music search engine or recommendation system based on different moods. In fact mood label is considered as an emerging metadata in the digital music libraries and online music repositories. In this paper, we proposed mood taxonomy for Hindi songs and prepared a mood annotated lyrics corpus based on this taxonomy. We also annotated lyrics with positive and negative polarity. Instead of adopting a traditional approach to music mood classification based solely on audio features, the present study describes a mood classification system from lyrics as well by combining a wide range of semantic and stylistic features extracted from textual lyrics. We also developed a supervised system to identify the sentiment of the Hindi song lyrics based on the above features. We achieved the maximum average F-measure of 68.30% and 38.49% for classifying the polarities and moods of the Hindi lyrics, respectively.", "title": "" }, { "docid": "3a7f3e75a5d534f6475c40204ba2403f", "text": "In recent years, deep neural network approaches have been widely adopted for machine learning tasks, including classification. However, they were shown to be vulnerable to adversarial perturbations: carefully crafted small perturbations can cause misclassification of legitimate images. We propose Defense-GAN, a new framework leveraging the expressive capability of generative models to defend deep neural networks against such attacks. Defense-GAN is trained to model the distribution of unperturbed images and, at inference time, finds a close output to a given image. This output will not contain the adversarial changes and is fed to the classifier. Our proposed method can be used with any classification model and does not modify the classifier structure or training procedure. It can also be used as a defense against any attack as it does not assume knowledge of the process for generating the adversarial examples. We empirically show that Defense-GAN is consistently effective against different attack methods and improves on existing defense strategies.", "title": "" }, { "docid": "a96f27e15c3bbc60810b73a5de21a06c", "text": "Illumination always affects image quality seriously in practice. To weaken illumination effect on image quality, this paper proposes an adaptive gamma correction method. First, a mapping between pixel and gamma values is built. The gamma values are then revised using two non-linear functions to prevent image distortion. Experimental results demonstrate that the proposed method performs better in readjusting image illumination condition and improving image quality.", "title": "" }, { "docid": "76a6a5f7d66f8e527640828a3ebf450a", "text": "In early days, the data transfer was done by wired media like co-axial cable(s), fiber optic cable(s) etc. The era has gone. Nowadays wired media is replaced by wireless means, for which Wi-Fi, ZigBee, Bluetooth and Dash7 (Wireless Sensor Networks) are used. Out of these techniques, Bluetooth is mostly used nowadays. This study emphasizes that wireless communication system for secured data transfer can be done by Bluetooth connectivity. Bluetooth devices are short range and meant for low power utilization, allowing communication between various devices. Various algorithms have been developed for the purpose of providing security to the data to be transferred. 
Main techniques are DES (Data Encryption Standards), AES (Advanced Encryption Standards), and EES (Escrowed Encryption Standards). Out of them, the Advanced Encryption Standards is the most widely used. This study analyzes the development of fully secured wireless connection terminals on an FPGA where connection is established using Bluetooth technology and advanced encryption standards (AES) are used to initialize the secured algorithm for data exchange. RC-10 Prototyping board with Xilinx Spartan-III XC3S1500L-4-FG320 FPGA device is used for hardware evaluation of system design.", "title": "" }, { "docid": "3bafb678f33675aadb1ee18a4481c4a7", "text": "A two-level Adaptive Cruise Control (ACC) synthesis method is presented in this paper. At the upper level, desired vehicle acceleration is computed based on vehicle range and range rate measurement. At the lower (servo) level, an adaptive control algorithm is designed to ensure the vehicle follows the upper level acceleration command accurately. It is shown that the servo-level dynamics can be included in the overall design and string stability can be guaranteed. In other words, the proposed control design produces minimum negative impact on surrounding vehicles. The performance of the proposed ACC algorithm is examined by using a microscopic simulation program—ACCSIM created at the University of Michigan. The architecture and basic functions of ACCSIM are described in this paper. Simulation results under different ACC penetration rate and actuator/engine bandwidth are reported.", "title": "" }, { "docid": "95037e7dc3ae042d64a4b343ad4efd39", "text": "We classify human actions occurring in depth image sequences using features based on skeletal joint positions. The action classes are represented by a multi-level Hierarchical Dirichlet Process – Hidden Markov Model (HDP-HMM). The non-parametric HDP-HMM allows the inference of hidden states automatically from training data. The model parameters of each class are formulated as transformations from a shared base distribution, thus promoting the use of unlabelled examples during training and borrowing information across action classes. Further, the parameters are learnt in a discriminative way. We use a normalized gamma process representation of HDP and margin based likelihood functions for this purpose. We sample parameters from the complex posterior distribution induced by our discriminative likelihood function using elliptical slice sampling. Experiments with two different datasets show that action class models learnt using our technique produce good classification results.", "title": "" }, { "docid": "6f5afc38b09fa4fd1e47d323cfe850c9", "text": "In the past several years there has been extensive research into honeypot technologies, primarily for detection and information gathering against external threats. However, little research has been done for one of the most dangerous threats, the advanced insider, the trusted individual who knows your internal organization. These individuals are not after your systems, they are after your information.
This presentation discusses how honeypot technologies can be used to detect, identify, and gather information on these specific threats.", "title": "" }, { "docid": "085fc099f3800615eda9d7513d4b7c78", "text": "We propose pouch motors, a new family of printable soft actuators integrated with computational design. The pouch motor consists of one or more inflatable gas-tight bladders made of sheet materials. This printable actuator is designed and fabricated in a planar fashion. It allows both easy prototyping and mass fabrication of affordable robotic systems. We provide theoretical models of the actuators compared with the experimental data. The measured maximum stroke and tension of the linear pouch motor are up to 28% and 100 N, respectively. The measured maximum range of motion and torque of the angular pouch motor are up to 80 and 0.2 N, respectively. We also develop an algorithm that automatically generates the patterns of the pouches and their fluidic channels. A custom-built fabrication machine streamlines the automated process from design to fabrication. We demonstrate a computer-generated life-sized hand that can hold a foam ball and perform gestures with 12 pouch motors, which can be fabricated in 15 min.", "title": "" }, { "docid": "e327e992a6973a91d84573390920c48f", "text": "The research regarding Web information extraction focuses on learning rules to extract some selected information from Web documents. Many proposals are ad hoc and cannot benefit from the advances in machine learning; furthermore, they are likely to fade away as the Web evolves, and their intrinsic assumptions are not satisfied. Some authors have explored transforming Web documents into relational data and then using techniques that got inspiration from inductive logic programming. In theory, such proposals should be easier to adapt as the Web evolves because they build on catalogues of features that can be adapted without changing the proposals themselves. Unfortunately, they are difficult to scale as the number of documents or features increases. In the general field of machine learning, there are propositio-relational proposals that attempt to provide effective and efficient means to learn from relational data using propositional techniques, but they have seldom been explored regarding Web information extraction. In this article, we present a new proposal called Roller: it relies on a search procedure that uses a dynamic flattening technique to explore the context of the nodes that provide the information to be extracted; it is configured with an open catalogue of features, so that it can adapt to the evolution of the Web; it also requires a base learner and a rule scorer, which helps it benefit from the continuous advances in machine learning. Our experiments confirm that it outperforms other state-of-the-art proposals in terms of effectiveness and that it is very competitive in terms of efficiency; we have also confirmed that our conclusions are solid from a statistical point of view.", "title": "" }, { "docid": "5b92aa85d93c2fbb09df5a0b96fc9c1f", "text": "Social networking services have been prevalent at many online communities such as Twitter.com and Weibo.com, where millions of users keep interacting with each other every day. One interesting and important problem in the social networking services is to rank users based on their vitality in a timely fashion. An accurate ranking list of user vitality could benefit many parties in social network services such as the ads providers and site operators. 
Although it is very promising to obtain a vitality-based ranking list of users, there are many technical challenges due to the large scale and dynamics of social networking data. In this paper, we propose a unique perspective to achieve this goal, which is quantifying user vitality by analyzing the dynamic interactions among users on social networks. Examples of social network include but are not limited to social networks in microblog sites and academical collaboration networks. Intuitively, if a user has many interactions with his friends within a time period and most of his friends do not have many interactions with their friends simultaneously, it is very likely that this user has high vitality. Based on this idea, we develop quantitative measurements for user vitality and propose our first algorithm for ranking users based vitality. Also, we further consider the mutual influence between users while computing the vitality measurements and propose the second ranking algorithm, which computes user vitality in an iterative way. Other than user vitality ranking, we also introduce a vitality prediction problem, which is also of great importance for many applications in social networking services. Along this line, we develop a customized prediction model to solve the vitality prediction problem. To evaluate the performance of our algorithms, we collect two dynamic social network data sets. The experimental results with both data sets clearly demonstrate the advantage of our ranking and prediction methods.", "title": "" } ]
scidocsrr
e49fe7b4aa3e5e380870566bc84d5d51
A Survey of Cloudlet Based Mobile Computing
[ { "docid": "e3b91b1133a09d7c57947e2cd85a17c7", "text": "Although mobile devices are gaining more and more capabilities (i.e. CPU power, memory, connectivity, ...), they still fall short to execute complex rich media and data analysis applications. Offloading to the cloud is not always a solution, because of the high WAN latencies, especially for applications with real-time constraints such as augmented reality. Therefore the cloud has to be moved closer to the mobile user in the form of cloudlets. Instead of moving a complete virtual machine from the cloud to the cloudlet, we propose a more fine grained cloudlet concept that manages applications on a component level. Cloudlets do not have to be fixed infrastructure close to the wireless access point, but can be formed in a dynamic way with any device in the LAN network with available resources. We present a cloudlet architecture together with a prototype implementation, showing the advantages and capabilities for a mobile real-time augmented reality application.", "title": "" } ]
[ { "docid": "7ca863355d1fb9e4954c360c810ece53", "text": "The detection of community structure is a widely accepted means of investigating the principles governing biological systems. Recent efforts are exploring ways in which multiple data sources can be integrated to generate a more comprehensive model of cellular interactions, leading to the detection of more biologically relevant communities. In this work, we propose a mathematical programming model to cluster multiplex biological networks, i.e. multiple network slices, each with a different interaction type, to determine a single representative partition of composite communities. Our method, known as SimMod, is evaluated through its application to yeast networks of physical, genetic and co-expression interactions. A comparative analysis involving partitions of the individual networks, partitions of aggregated networks and partitions generated by similar methods from the literature highlights the ability of SimMod to identify functionally enriched modules. It is further shown that SimMod offers enhanced results when compared to existing approaches without the need to train on known cellular interactions.", "title": "" }, { "docid": "2f0eb4a361ff9f09bda4689a1f106ff2", "text": "The growth of Quranic digital publishing increases the need to develop a better framework to authenticate Quranic quotes with the original source automatically. This paper aims to demonstrate the significance of the quote authentication approach. We propose an approach to verify the e-citation of the Quranic quote as compared with original texts from the Quran. In this paper, we will concentrate mainly on discussing the Algorithm to verify the fundamental text for Quranic quotes.", "title": "" }, { "docid": "dd11d7291d8f0ee2313b74dc5498acfa", "text": "Going further At this point, the theorem is proved. While for every summarizer σ there exists at least one tuple (θ,O), in practice there exist multiple tuples, and the one proposed by the proof would not be useful to rank models of summary quality. We can formulate an algorithm which constructs θ from σ and which yields an ordering of candidate summaries. Let σD\\{s1,...,sn} be the summarizer σ which still uses D as initial document collection, but which is not allowed to output sentences from {s1, . . . , sn} in the final summary. For a given summary S to score, let Rσ,S be the smallest set of sentences {s1, . . . , sn} that one has to remove fromD such that σD\\R outputs S. Then the definition of θσ follows:", "title": "" }, { "docid": "8ae1ef032c0a949aa31b3ca8bc024cb5", "text": "Measuring intellectual capital is on the agenda of most 21st century organisations. This paper takes a knowledge-based view of the firm and discusses the importance of measuring organizational knowledge assets. Knowledge assets underpin capabilities and core competencies of any organisation. Therefore, they play a key strategic role and need to be measured. This paper reviews the existing approaches for measuring knowledge based assets and then introduces the knowledge asset map which integrates existing approaches in order to achieve comprehensiveness. The paper then introduces the knowledge asset dashboard to clarify the important actor/infrastructure relationship, which elucidates the dynamic nature of these assets. Finally, the paper suggests to visualise the value pathways of knowledge assets before designing strategic key performance indicators which can then be used to test the assumed causal relationships. 
This will enable organisations to manage and report these key value drivers in today’s economy. Introduction In the last decade management literature has paid significant attention to the role of knowledge for global competitiveness in the 21st century. It is recognised as a durable and more sustainable strategic resource to acquire and maintain competitive advantages (Barney, 1991a; Drucker, 1988; Grant, 1991a). Today’s business world is characterised by phenomena such as e-business, globalisation, higher degrees of competitiveness, fast evolution of new technology, rapidly changing client demands, as well as changing economic and political structures. In this new context companies need to develop clearly defined strategies that will give them a competitive advantage (Porter, 2001; Barney, 1991a). For this, organisations have to understand which capabilities they need in order to gain and maintain this competitive advantage (Barney, 1991a; Prahalad and Hamel, 1990). Organizational capabilities are based on knowledge. Thus, knowledge is a resource that forms the foundation of the company’s capabilities. Capabilities combine to", "title": "" }, { "docid": "aba1bbd9163e5f9d16ef2d98d16ce1c2", "text": "The basic reproduction number R0 is arguably the most important quantity in infectious disease epidemiology. The next-generation matrix (NGM) is the natural basis for the definition and calculation of R0 where finitely many different categories of individuals are recognized. We clear up confusion that has been around in the literature concerning the construction of this matrix, specifically for the most frequently used so-called compartmental models. We present a detailed easy recipe for the construction of the NGM from basic ingredients derived directly from the specifications of the model. We show that two related matrices exist which we define to be the NGM with large domain and the NGM with small domain. The three matrices together reflect the range of possibilities encountered in the literature for the characterization of R0. We show how they are connected and how their construction follows from the basic model ingredients, and establish that they have the same non-zero eigenvalues, the largest of which is the basic reproduction number R0. Although we present formal recipes based on linear algebra, we encourage the construction of the NGM by way of direct epidemiological reasoning, using the clear interpretation of the elements of the NGM and of the model ingredients. We present a selection of examples as a practical guide to our methods. In the appendix we present an elementary but complete proof that R0 defined as the dominant eigenvalue of the NGM for compartmental systems and the Malthusian parameter r, the real-time exponential growth rate in the early phase of an outbreak, are connected by the properties that R0 > 1 if and only if r > 0, and R0 = 1 if and only if r = 0.", "title": "" }, { "docid": "1e4daa242bfee88914b084a1feb43212", "text": "In this paper, we present a novel approach to human activity prediction.
Human activity prediction is a probabilistic process of inferring ongoing activities from videos only containing onsets (i.e. the beginning part) of the activities. The goal is to enable early recognition of unfinished activities as opposed to the after-the-fact classification of completed activities. Activity prediction methodologies are particularly necessary for surveillance systems which are required to prevent crimes and dangerous activities from occurring. We probabilistically formulate the activity prediction problem, and introduce new methodologies designed for the prediction. We represent an activity as an integral histogram of spatio-temporal features, efficiently modeling how feature distributions change over time. The new recognition methodology named dynamic bag-of-words is developed, which considers sequential nature of human activities while maintaining advantages of the bag-of-words to handle noisy observations. Our experiments confirm that our approach reliably recognizes ongoing activities from streaming videos with a high accuracy.", "title": "" }, { "docid": "a934474bb38e37e8246ff561efd74bd3", "text": "While it is possible to understand utopias and dystopias as particular kinds of sociopolitical systems, in this text we argue that utopias and dystopias can also be understood as particular kinds of information systems in which data is received, stored, generated, processed, and transmitted by the minds of human beings that constitute the system’s ‘nodes’ and which are connected according to specific network topologies. We begin by formulating a model of cybernetic information-processing properties that characterize utopias and dystopias. It is then shown that the growing use of neuroprosthetic technologies for human enhancement is expected to radically reshape the ways in which human minds access, manipulate, and share information with one another; for example, such technologies may give rise to posthuman ‘neuropolities’ in which human minds can interact with their environment using new sensorimotor capacities, dwell within shared virtual cyberworlds, and link with one another to form new kinds of social organizations, including hive minds that utilize communal memory and decision-making. Drawing on our model, we argue that the dynamics of such neuropolities will allow (or perhaps even impel) the creation of new kinds of utopias and dystopias that were previously impossible to realize. Finally, we suggest that it is important that humanity begin thoughtfully exploring the ethical, social, and political implications of realizing such technologically enabled societies by studying neuropolities in a place where they have already been ‘pre-engineered’ and provisionally exist: in works of audiovisual science fiction such as films, television series, and role-playing games", "title": "" }, { "docid": "bf784d447f523c89e4863edffb334c8b", "text": "We investigate the use of a nonlinear control allocation scheme for automotive vehicles. Such a scheme is useful in e.g. yaw or roll stabilization of the vehicle. The control allocation allows a modularization of the control task, such that a higher level control system specifies a desired moment to work on the vehicle, while the control allocation distributes this moment among the individual wheels by commanding appropriate wheel slips. The control allocation problem is defined as a nonlinear optimization problem, to which an explicit piecewise linear approximate solution function is computed off-line. 
Such a solution function can computationally efficiently be implemented in real time with at most a few hundred arithmetic operations per sample. Yaw stabilization of the vehicle yaw dynamics is used as an example of use of the control allocation. Simulations show that the controller stabilizes the vehicle in an extreme manoeuvre where the vehicle yaw dynamics otherwise becomes unstable.", "title": "" }, { "docid": "bb05c05cb57dbc22afeceaa13a651630", "text": "In this letter, a broadband and compact phase shifter using omega particles is designed. Bandwidth of the 90 <sup>°</sup> and 45 <sup>°</sup> versions of the designed phase shifter are around 55% with the accuracy of 3 <sup>°</sup> and 60% with the accuracy of 2.5 <sup>°</sup>, respectively. The proposed phase shifter has compact size compared with previously published SIW based phase shifter designs. A prototype of the proposed 90 <sup>°</sup> phase shifter is fabricated and comparison of the measured and simulated results is provided.", "title": "" }, { "docid": "e6bca434e626f770ecab60d022abc2ad", "text": "This paper presents and investigates Clustered Shading for deferred and forward rendering. In Clustered Shading, view samples with similar properties (e.g. 3D-position and/or normal) are grouped into clusters. This is comparable to tiled shading, where view samples are grouped into tiles based on 2D-position only. We show that Clustered Shading creates a better mapping of light sources to view samples than tiled shading, resulting in a significant reduction of lighting computations during shading. Additionally, Clustered Shading enables using normal information to perform per-cluster back-face culling of lights, again reducing the number of lighting computations. We also show that Clustered Shading not only outperforms tiled shading in many scenes, but also exhibits better worst case behaviour under tricky conditions (e.g. when looking at high-frequency geometry with large discontinuities in depth). Additionally, Clustered Shading enables real-time scenes with two to three orders of magnitudes more lights than previously feasible (up to around one million light sources).", "title": "" }, { "docid": "343ed18e56e6f562fa509710e4cf8dc6", "text": "The automated analysis of facial expressions has been widely used in different research areas, such as biometrics or emotional analysis. Special importance is attached to facial expressions in the area of sign language, since they help to form the grammatical structure of the language and allow for the creation of language disambiguation, and thus are called Grammatical Facial Expressions (GFEs). In this paper we outline the recognition of GFEs used in the Brazilian Sign Language. In order to reach this objective, we have captured nine types of GFEs using a KinectTMsensor, designed a spatial-temporal data representation, modeled the research question as a set of binary classification problems, and employed a Machine Learning technique.", "title": "" }, { "docid": "eedd8784dcda161ef993e67b0ac190f8", "text": "3 origami (˛ orig¯ a·mi) The Japanese art of making elegant designs using folds in all kinds of paper. One style of functional programming is based purely on recursive equations. Such equations are easy to explain, and adequate for any computational purpose , but hard to use well as programs get bigger and more complicated. In a sense, recursive equations are the 'assembly language' of functional programming , and direct recursion the goto. 
As computer scientists discovered in the 1960s with structured programming, it is better to identify common patterns of use of such too-powerful tools, and capture these patterns as new constructions and abstractions. In functional programming, in contrast to imperative programming, we can often express the new constructions as higher-order operations within the language, whereas the move from un-structured to structured programming entailed the development of new languages. There are advantages in expressing programs as instances of common patterns, rather than from first principles — the same advantages as for any kind of abstraction. Essentially, one can discover general properties of the abstraction once and for all, and infer those properties of the specific instances for free. These properties may be theorems, design idioms, implementations, optimisations, and so on. In this chapter we will look at folds and unfolds as abstractions. In a precise technical sense, folds and unfolds are the natural patterns of computation over recursive datatypes; unfolds generate data structures and folds consume them. Functional programmers are very familiar with the foldr function on lists, and its directional dual foldl; they are gradually coming to terms with the generalisation to folds on other datatypes (IFPH §3.3, §6.1.3, §6.4). The", "title": "" }, { "docid": "d07a75f66e8fc53cf91904aadd0585c7", "text": "Hashing techniques have been intensively investigated for large scale vision applications. Recent research has shown that leveraging supervised information can lead to high quality hashing. However, most existing supervised hashing methods only construct similarity-preserving hash codes. Observing that semantic structures carry complementary information, we propose the idea of cotraining for hashing, by jointly learning projections from image representations to hash codes and classification. Specifically, a novel deep semanticpreserving and ranking-based hashing (DSRH) architecture is presented, which consists of three components: a deep CNN for learning image representations, a hash stream of a binary mapping layer by evenly dividing the learnt representations into multiple bags and encoding each bag into one hash bit, and a classification stream. Meanwhile, our model is learnt under two constraints at the top loss layer of hash stream: a triplet ranking loss and orthogonality constraint. The former aims to preserve the relative similarity ordering in the triplets, while the latter makes different hash bit as independent as possible. We have conducted experiments on CIFAR-10 and NUS-WIDE image benchmarks, demonstrating that our approach can provide superior image search accuracy than other state-of-theart hashing techniques.", "title": "" }, { "docid": "5ed744299cb2921bcb42f57cf1809f69", "text": "Credit risk prediction models seek to predict quality factors such as whether an individual will default (bad applicant) on a loan or not (good applicant). This can be treated as a kind of machine learning (ML) problem. Recently, the use of ML algorithms has proven to be of great practical value in solving a variety of risk problems including credit risk prediction. One of the most active areas of recent research in ML has been the use of ensemble (combining) classifiers. Research indicates that ensemble individual classifiers lead to a significant improvement in classification performance by having them vote for the most popular class. 
This paper explores the predicted behaviour of five classifiers for different types of noise in terms of credit risk prediction accuracy, and how such accuracy could be improved by using classifier ensembles. Benchmarking results on four credit datasets and comparison with the performance of each individual classifier on predictive accuracy at various attribute noise levels are presented. The experimental evaluation shows that the ensemble of classifiers technique has the potential to improve prediction accuracy. 2009 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "f21b0f519f4bf46cb61b2dc2861014df", "text": "Player experience is difficult to evaluate and report, especially using quantitative methodologies in addition to observations and interviews. One step towards tying quantitative physiological measures of player arousal to player experience reports are Biometric Storyboards (BioSt). They can visualise meaningful relationships between a player's physiological changes and game events. This paper evaluates the usefulness of BioSt to the game industry. We presented the Biometric Storyboards technique to six game developers and interviewed them about the advantages and disadvantages of this technique.", "title": "" }, { "docid": "05b1a669fe426a4ec29b821800727432", "text": "In this paper we suggest a user-subjective approach to Personal Information Management (PIM) system design. This approach advocates that PIM systems relate to the subjective value-added attributes that the user gives to the data stored in the PIM system. These attributes should facilitate system use: help the user find the information item again, recall it when needed and use it effectively in the next interaction with the item. Driven from the user-subjective approach are three generic principles which are described and discussed: (a) The subjective classification principle stating that all information items related to the same subjective topic should be classified together regardless of their technological format; (b) The subjective importance principle proposing that the subjective importance of information should determine its degree of visual salience and accessibility; and (c) The subjective context principle suggesting that information should be retrieved and viewed by the user in the same context in which it was previously used. We claim that these principles are only sporadically implemented in operating systems currently available on personal computers, and demonstrate alternatives for interface design. USER-SUBJECTIVE APPROACH TO PIM SYSTEMS 3", "title": "" }, { "docid": "9d4b97f66055979079940b267257758f", "text": "A model that predicts the static friction for elastic-plastic contact of rough surface presented. The model incorporates the results of accurate finite element analyses elastic-plastic contact, adhesion and sliding inception of a single asperity in a statis representation of surface roughness. The model shows strong effect of the externa and nominal contact area on the static friction coefficient in contrast to the classical of friction. It also shows that the main dimensionless parameters affecting the s friction coefficient are the plasticity index and adhesion parameter. The effect of adh on the static friction is discussed and found to be negligible at plasticity index va larger than 2. It is shown that the classical laws of friction are a limiting case of present more general solution and are adequate only for high plasticity index and n gible adhesion. 
Some potential limitations of the present model are also discussed ing to possible improvements. A comparison of the present results with those obt from an approximate CEB friction model shows substantial differences, with the l severely underestimating the static friction coefficient. @DOI: 10.1115/1.1609488 #", "title": "" }, { "docid": "6a616f2aaa08ecf57236510cda926cad", "text": "While much work has focused on the design of actuators for inputting energy into robotic systems, less work has been dedicated to devices that remove energy in a controlled manner, especially in the field of soft robotics. Such devices have the potential to significantly modulate the dynamics of a system with very low required input power. In this letter, we leverage the concept of layer jamming, previously used for variable stiffness devices, to create a controllable, high force density, soft layer jamming brake (SLJB). We introduce the design, modeling, and performance analysis of the SLJB and demonstrate variable tensile resisting forces through the regulation of vacuum pressure. Further, we measure and model the tensile force with respect to different layer materials, vacuum pressures, and lengthening velocities, and show its ability to absorb energy during collisions. We hope to apply the SLJB in a number of applications in wearable technology.", "title": "" }, { "docid": "124f9f9764a05047fca3f8d956dc5d48", "text": "There is no doubt to say that researchers have made significant contributions by developing numerous tools and techniques of various Requirements Engineering (RE) processes but at the same time, the field still demands further research to come up with the novel solutions for many ongoing issues. Some of the key challenges in RE may be the issues in describing the system limit, issues in understanding among the different groups affected by the improvement of a given system, and challenges in dealing with the explosive nature of requirements. These challenges may lead to poor requirements and the termination of system progress, or else the disappointing or inefficient result of a system, which increases high maintenance costs or suffers from frequent changes. RE can be decomposed into various sub-phases: requirements elicitation, specification, documentation and validation. Through proper requirements elicitation, RE process can be upgraded, resulting in enriched system requirements and possibly a much better system. Keeping in view the importance of the area, major elicitation techniques have already been identified in one of our previous papers. This paper is an extension of our previous work and here, an attempt is made to identify and describe the recurring issues and challenges in various requirements elicitation techniques.", "title": "" } ]
scidocsrr
6a4b8e38be688899c7ed25ed70ac339a
In Defense of Locality-Sensitive Hashing
[ { "docid": "6f0ebd6314cd5c012f791d0e5c448045", "text": "This paper presents a framework of discriminative least squares regression (LSR) for multiclass classification and feature selection. The core idea is to enlarge the distance between different classes under the conceptual framework of LSR. First, a technique called ε-dragging is introduced to force the regression targets of different classes moving along opposite directions such that the distances between classes can be enlarged. Then, the ε-draggings are integrated into the LSR model for multiclass classification. Our learning framework, referred to as discriminative LSR, has a compact model form, where there is no need to train two-class machines that are independent of each other. With its compact form, this model can be naturally extended for feature selection. This goal is achieved in terms of L2,1 norm of matrix, generating a sparse learning model for feature selection. The model for multiclass classification and its extension for feature selection are finally solved elegantly and efficiently. Experimental evaluation over a range of benchmark datasets indicates the validity of our method.", "title": "" }, { "docid": "df163d94fbf0414af1dde4a9e7fe7624", "text": "This paper introduces a web image dataset created by NUS's Lab for Media Search. The dataset includes: (1) 269,648 images and the associated tags from Flickr, with a total of 5,018 unique tags; (2) six types of low-level features extracted from these images, including 64-D color histogram, 144-D color correlogram, 73-D edge direction histogram, 128-D wavelet texture, 225-D block-wise color moments extracted over 5x5 fixed grid partitions, and 500-D bag of words based on SIFT descriptions; and (3) ground-truth for 81 concepts that can be used for evaluation. Based on this dataset, we highlight characteristics of Web image collections and identify four research issues on web image annotation and retrieval. We also provide the baseline results for web image annotation by learning from the tags using the traditional k-NN algorithm. The benchmark results indicate that it is possible to learn effective models from sufficiently large image dataset to facilitate general image retrieval.", "title": "" }, { "docid": "0784d5907a8e5f1775ad98a25b1b0b31", "text": "The Internet contains billions of images, freely available online. Methods for efficiently searching this incredibly rich resource are vital for a large number of applications. These include object recognition, computer graphics, personal photo collections, online image search tools. In this paper, our goal is to develop efficient image search and scene matching techniques that are not only fast, but also require very little memory, enabling their use on standard hardware or even on handheld devices. Our approach uses recently developed machine learning techniques to convert the Gist descriptor (a real valued vector that describes orientation energies at different scales and orientations within an image) to a compact binary code, with a few hundred bits per image. Using our scheme, it is possible to perform real-time searches with millions from the Internet using a single large PC and obtain recognition results comparable to the full descriptor. 
Using our codes on high quality labeled images from the LabelMe database gives surprisingly powerful recognition results using simple nearest neighbor techniques.", "title": "" }, { "docid": "e08cfc5d9c67a5c806750dc7c747c52f", "text": "To build large-scale query-by-example image retrieval systems, embedding image features into a binary Hamming space provides great benefits. Supervised hashing aims to map the original features to compact binary codes that are able to preserve label based similarity in the binary Hamming space. Most existing approaches apply a single form of hash function, and an optimization process which is typically deeply coupled to this specific form. This tight coupling restricts the flexibility of those methods, and can result in complex optimization problems that are difficult to solve. In this work we proffer a flexible yet simple framework that is able to accommodate different types of loss functions and hash functions. The proposed framework allows a number of existing approaches to hashing to be placed in context, and simplifies the development of new problem-specific hashing methods. Our framework decomposes the hashing learning problem into two steps: binary code (hash bit) learning and hash function learning. The first step can typically be formulated as binary quadratic problems, and the second step can be accomplished by training a standard binary classifier. For solving large-scale binary code inference, we show how it is possible to ensure that the binary quadratic problems are submodular such that efficient graph cut methods may be used. To achieve efficiency as well as efficacy on large-scale high-dimensional data, we propose to use boosted decision trees as the hash functions, which are nonlinear, highly descriptive, and are very fast to train and evaluate. Experiments demonstrate that the proposed method significantly outperforms most state-of-the-art methods, especially on high-dimensional data.", "title": "" }, { "docid": "3b54c700cf096551d8064e2c84aeea2f", "text": "Fast retrieval methods are critical for many large-scale and data-driven vision applications. Recent work has explored ways to embed high-dimensional features or complex distance functions into a low-dimensional Hamming space where items can be efficiently searched. However, existing methods do not apply for high-dimensional kernelized data when the underlying feature embedding for the kernel is unknown. We show how to generalize locality-sensitive hashing to accommodate arbitrary kernel functions, making it possible to preserve the algorithm's sublinear time similarity search guarantees for a wide class of useful similarity functions. Since a number of successful image-based kernels have unknown or incomputable embeddings, this is especially valuable for image retrieval tasks. We validate our technique on several data sets, and show that it enables accurate and fast performance for several vision problems, including example-based object classification, local feature matching, and content-based retrieval.", "title": "" }, { "docid": "958fea977cf31ddabd291da68754367d", "text": "Recently, learning based hashing techniques have attracted broad research interests because they can support efficient storage and retrieval for high-dimensional data such as images, videos, documents, etc. However, a major difficulty of learning to hash lies in handling the discrete constraints imposed on the pursued hash codes, which typically makes hash optimizations very challenging (NP-hard in general). 
In this work, we propose a new supervised hashing framework, where the learning objective is to generate the optimal binary hash codes for linear classification. By introducing an auxiliary variable, we reformulate the objective such that it can be solved substantially efficiently by employing a regularization algorithm. One of the key steps in this algorithm is to solve a regularization sub-problem associated with the NP-hard binary optimization. We show that the sub-problem admits an analytical solution via cyclic coordinate descent. As such, a high-quality discrete solution can eventually be obtained in an efficient computing manner, therefore enabling to tackle massive datasets. We evaluate the proposed approach, dubbed Supervised Discrete Hashing (SDH), on four large image datasets and demonstrate its superiority to the state-of-the-art hashing methods in large-scale image retrieval.", "title": "" } ]
[ { "docid": "4284b2bffa631ad4a614267a59960336", "text": "To assess whether emotional intelligence (EI) is related to self-assessed relationship quality, an ability test of EI and measures of relationship quality were administered to 86 heterosexual couples in a university setting. Results indicated that female partners were significantly higher in EI than male partners and that EI scores were uncorrelated within couples. Two 2 2 multiple analyses of variance (performed separately for positive and negative outcomes) assessed how relationship quality measures differed across four different types of couples (high-EI female/high-EI male, low-EI female/low-EI male, etc.). As predicted, couples with both partners low on EI tended to have the lowest scores on depth, support, and positive relationship quality and the highest scores on conflict and negative relationship quality. Counter to our hypotheses, couples with both partners high on EI did not consistently have higher scores on positive outcomes and lower scores on negative outcomes than couples with one high-EI partner. What emotional abilities predict quality relationships among dating or married couples? Researchers have shown that positive emotions (Gottman, 1982; Gottman & Levenson, 1992), emotional stability (Kelly & Conley, 1987; Russell & Wells, 1994), self-esteem (Arrindell & Luteijn, 2000; Luteijn, 1994), and secure attachment style (Feeney, 1999) all correlate with partners’ reports of happiness. Several negative emotional traits such as impulsivity, fearfulness, and depression also reliably predict partner reports of maladjustment (O’leary & Smith, 1991). The purpose of this study is to assess whether emotional intelligence (EI)—as defined by Mayer and Salovey (1997)—is related to perceived positive and negative relationship qualities among couples. Although no study as yet has directly assessed whether EI abilities (i.e., the ability to perceive, use, understand, and regulate emotions) are related to these outcomes, recent research indicates that EI, measured with new ability tests, predicts both selfand informant reports of emotional support, conflict, and positive social relations (Brackett, Mayer, & Warner, 2004; Lopes et al., 2004; Lopes, Salovey, & Straus, 2003; Mayer, Caruso, & Salovey, 1999). Researchers also have speculated about potential links between EI and relationship quality among couples (Fitness, 2001; Mayer, Caruso, & Salovey, 1999). Noller, Beach, and Osgarby (1997), for example, reviewed research showing that accuracy in expressing and recognizing emotions correlates with couples’ reports of marital happiness. Carton, Kessler, and Pape (1999) also found that sensitivity and accuracy in nonverbal communication predicts happiness. One skill that is assessed by EI is the perception of emotion; thus, it is reasonable to predict that higher EI might predict greater relationship satisfaction in couples, whereas lower EI might result in relationship dissatisfaction and higher The authors would like to acknowledge the assistance of Erin Fisher who helped to gather many of the resources that helped the authors write this article. Correspondence should be addressed to Marc A. Brackett, Yale University, Department of Psychology, New Haven, CT 06511, e-mail: marc.brackett@yale.edu. Personal Relationships, 12 (2005), 197–212. Printed in the United States of America. Copyright 2005 IARR. 
1350-4126=05", "title": "" }, { "docid": "2d0f0a934be8c6900b053383aa209baa", "text": "This paper, discusses about navigation control of mobile robot using adaptive neuro-fuzzy inference system (ANFIS) in a real word dynamic environment. In the ANFIS controller after the input layer there is a fuzzy layer and rest of the layers are neural network layers. The adaptive neuro-fuzzy hybrid system combines the advantages of fuzzy logic system, which deal with explicit knowledge that can be explained and understood, and neural network, which deal with implicit knowledge, which can be acquired by learning. The inputs to fuzzy logic layer are front obstacle distance, left obstacle distance, right obstacle distance and target steering. A learning algorithm based on neural network technique has been developed to tune the parameters of fuzzy membership functions, which smooth the trajectory generated by the fuzzy logic system. Using the developed ANFIS controller, the mobile robots are able to avoid static and dynamic obstacles, and reach the target successfully in cluttered environments. The experimental results agree well with the simulation results, proves the authenticity of the theory developed.", "title": "" }, { "docid": "a5ef1435960b9371bd3803de603b0216", "text": "We present two optimization strategies to improve connected-component labeling algorithms. Taking together, they form an efficient two-pass labeling algorithm that is fast and theoretically optimal. The first optimization strategy reduces the number of neighboring pixels accessed through the use of a decision tree, and the second one streamlines the union-find algorithms used to track equivalent labels. We show that the first strategy reduces the average number of neighbors accessed by a factor of about 2. We prove our streamlined union-find algorithms have the same theoretical optimality as the more sophisticated ones in literature. This result generalizes an earlier one on using union-find in labeling algorithms by Fiorio and Gustedt (Theor Comput Sci 154(2):165–181, 1996). In tests, the new union-find algorithms improve a labeling algorithm by a factor of 4 or more. Through analyses and experiments, we demonstrate that our new two-pass labeling algorithm scales linearly with the number of pixels in the image, which is optimal in computational complexity theory. Furthermore, the new labeling algorithm outperforms the published labeling algorithms irrespective of test platforms. In comparing with the fastest known labeling algorithm for two-dimensional (2D) binary images called contour tracing algorithm, our new labeling algorithm is up to ten times faster than the contour tracing program distributed by the original authors.", "title": "" }, { "docid": "2b985f234933a34b150ef3819305b282", "text": "The constraint of difference is known to the constraint programming community since Lauriere introduced Alice in 1978. Since then, several strategies have been designed to solve the alldifferent constraint. This paper surveys the most important developments over the years regarding the alldifferent constraint. First we summarize the underlying concepts and results from graph theory and integer programming. Then we give an overview and an abstract comparison of different solution strategies. In addition, the symmetric alldifferent constraint is treated. Finally, we show how to apply cost-based filtering to the alldifferent constraint. 
A preliminary version of this paper appeared as [14].", "title": "" }, { "docid": "23daa694d8d22bc2f3135d3179a15514", "text": "Bio-inspired robots often “come” from the animal world while the plant world has not yet been deeply observed and considered. In this work we addressed a special class of climbing plants, that has evolved to gain height while minimizing the energy expenditure, as a new bio-robotic template: the tendril-bearer plants. These are able to grasp and coil around a support and, after that, push the stem towards the grasped element by recovering a spring-like shape from a wire condition. After the biological analysis, the idea of replicating the grasping by coiling and the pushing by shortening has been focused and replicated by Shape Memory Alloy-based proof of concept prototypes. The results show the feasibility of the approach.", "title": "" }, { "docid": "a1d58b3a9628dc99edf53c1112dc99b8", "text": "Multiple criteria decision-making (MCDM) research has developed rapidly and has become a main area of research for dealing with complex decision problems. The purpose of the paper is to explore the performance evaluation model. This paper develops an evaluation model based on the fuzzy analytic hierarchy process and the technique for order performance by similarity to ideal solution, fuzzy TOPSIS, to help the industrial practitioners for the performance evaluation in a fuzzy environment where the vagueness and subjectivity are handled with linguistic values parameterized by triangular fuzzy numbers. The proposed method enables decision analysts to better understand the complete evaluation process and provide a more accurate, effective, and systematic decision support tool. 2010 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "a0d6e020f230e872957ae00ed258b2b1", "text": "This paper presents an end-to-end framework for task-oriented dialog systems using a variant of Deep Recurrent QNetworks (DRQN). The model is able to interface with a relational database and jointly learn policies for both language understanding and dialog strategy. Moreover, we propose a hybrid algorithm that combines the strength of reinforcement learning and supervised learning to achieve faster learning speed. We evaluated the proposed model on a 20 Question Game conversational game simulator. Results show that the proposed method outperforms the modular-based baseline and learns a distributed representation of the latent dialog state.", "title": "" }, { "docid": "be7412a48578741d830e267bff0c1c6a", "text": "In recent years, greater attention has been given to vessels’ seakeeping characteristics. This is due to a number of factors: proliferation of high-speed semi-displacement passenger vessels; increasing demand for passenger comfort (passengers are often able to vote with their feet by taking alternative transport, e.g. English Channel Tunnel); deployment of increasingly sophisticated systems on ever smaller naval vessels (Hunt 1999); greater pressure from regulatory bodies and the broader public for safer vessels; staggering advancements in desktop computer power; and developments in prediction and analysis tools.", "title": "" }, { "docid": "3256b2050c603ca16659384a0e98a22c", "text": "In this paper, we propose a Hough transform-based method to identify low-contrast defects in unevenly illuminated images, and especially focus on the inspection of mura defects in liquid crystal display (LCD) panels. 
The proposed method works on 1-D gray-level profiles in the horizontal and vertical directions of the surface image. A point distinctly deviated from the ideal line of a profile can be identified as a defect one. A 1-D gray-level profile in the unevenly illuminated image results in a nonstationary line signal. The most commonly used technique for straight line detection in a noisy image is Hough transform (HT). The standard HT requires a sufficient number of points lie exactly on the same straight line at a given parameter resolution so that the accumulator will show a distinct peak in the parameter space. It fails to detect a line in a nonstationary signal. In the proposed HT scheme, the points that contribute to the vote do not have to lie on a line. Instead, a distance tolerance to the line sought is first given. Any point with the distance to the line falls within the tolerance will be accumulated by taking the distance as the voting weight. A fast search procedure to tighten the possible ranges of line parameters is also proposed for mura detection in LCD images.", "title": "" }, { "docid": "5398b76e55bce3c8e2c1cd89403b8bad", "text": "To cite: He A, Kwatra SG, Kazi N, et al. BMJ Case Rep Published online: [please include Day Month Year] doi:10.1136/bcr-2016215335 DESCRIPTION A woman aged 45 years presented for evaluation of skin lesions. She reported an 8–9-year history of occasionally tender, waxing-and-waning skin nodules refractory to dapsone, prednisone and methotrexate. Examination revealed multiple indurated subcutaneous nodules distributed on the upper extremities, with scattered patches of lipoatrophy in areas of nodule regression (figure 1). Her medical history was unremarkable; CBC and CMP were within normal limits, with no history of radiotherapy or evidence of internal organ involvement. She had a positive ANA titre (1:160, speckled), but negative anti-dsDNA, anti-Smith, anti-Ro and anti-La antibodies. Differential diagnosis included erythema nodosum (EN), erythema induratum of Bazin (EIB), lupus profundus (LP) and cutaneous lymphoma. Initial wedge biopsy in 2008 disclosed a predominantly lobular panniculitic process with some septal involvement (figure 2A). Broad zones of necrosis were present (figure 2B). The infiltrate consisted of a pleomorphic population of lymphocytes with occasional larger atypical lymphocytes (figure 2C). There were foci of adipocyte rimming by the atypical lymphocytes (figure 2C). Immunophenotyping revealed predominance of CD3+ T cells with some CD20+ B-cell aggregates. The atypical cells stained CD4 and CD8 in approximately equal ratios. TIA-1 was positive in many of the atypical cells but not prominently enough to render a diagnosis of cytotoxic T-cell lymphoma. T-cell receptor PCR studies showed polyclonality. Subsequent biopsies performed annually after treatment with prednisone in 2008 and 2010, dapsone in 2009 and methotrexate in 2012 showed very similar pathological and molecular features. Adipocyte rimming and TCR polyclonality persisted. EN is characterised by subcutaneous nodules on the lower extremities in association with elevated erythrocyte sedimentation rate (ESR) and C reactive protein (CRP), influenza-like prodrome preceding nodule formation and self-limiting course. Histologically, EN shows a mostly septal panniculitis with radial granulomas. EN was ruled out on the basis of normal ESR (6) and CRP (<0.1), chronic relapsing course and predominantly lobular panniculitis process histologically. 
EIB typically presents with violaceous nodules located on the posterior lower extremities, with arms rarely affected, of patients with a history of tuberculosis (TB). Histologically, EIB shows granulomatous inflammation with focal necrosis, vasculitis and septal fibrosis. Our patient had no evidence or history of TB infection and presented with nodules of a different clinical morphology. Ultimately, this constellation of histological and immunophenotypic findings showed an atypical panniculitic T-lymphocytic infiltrate. Although the lesion showed a lobular panniculitis with features that could be seen in subcutaneous panniculitis-like T-cell lymphoma (SPTCL), the presence of plasma cells, absence of CD8 and TIA restriction and T-cell polyclonality did not definitively support that", "title": "" }, { "docid": "42ebaee6fdbfc487ae2a21e8a55dd3e4", "text": "Human motion prediction, forecasting human motion in a few milliseconds conditioning on a historical 3D skeleton sequence, is a long-standing problem in computer vision and robotic vision. Existing forecasting algorithms rely on extensive annotated motion capture data and are brittle to novel actions. This paper addresses the problem of few-shot human motion prediction, in the spirit of the recent progress on few-shot learning and meta-learning. More precisely, our approach is based on the insight that having a good generalization from few examples relies on both a generic initial model and an effective strategy for adapting this model to novel tasks. To accomplish this, we propose proactive and adaptive meta-learning (PAML) that introduces a novel combination of model-agnostic meta-learning and model regression networks and unifies them into an integrated, end-to-end framework. By doing so, our meta-learner produces a generic initial model through aggregating contextual information from a variety of prediction tasks, while effectively adapting this model for use as a task-specific one by leveraging learningto-learn knowledge about how to transform few-shot model parameters to many-shot model parameters. The resulting PAML predictor model significantly improves the prediction performance on the heavily benchmarked H3.6M dataset in the small-sample size regime.", "title": "" }, { "docid": "72c4c247c1314ebcbbec4f43becd46f0", "text": "The evolutionary origin of the eukaryotic cell represents an enigmatic, yet largely incomplete, puzzle. Several mutually incompatible scenarios have been proposed to explain how the eukaryotic domain of life could have emerged. To date, convincing evidence for these scenarios in the form of intermediate stages of the proposed eukaryogenesis trajectories is lacking, presenting the emergence of the complex features of the eukaryotic cell as an evolutionary deus ex machina. However, recent advances in the field of phylogenomics have started to lend support for a model that places a cellular fusion event at the basis of the origin of eukaryotes (symbiogenesis), involving the merger of an as yet unknown archaeal lineage that most probably belongs to the recently proposed 'TACK superphylum' (comprising Thaumarchaeota, Aigarchaeota, Crenarchaeota and Korarchaeota) with an alphaproteobacterium (the protomitochondrion). 
Interestingly, an increasing number of so-called ESPs (eukaryotic signature proteins) is being discovered in recently sequenced archaeal genomes, indicating that the archaeal ancestor of the eukaryotic cell might have been more eukaryotic in nature than presumed previously, and might, for example, have comprised primitive phagocytotic capabilities. In the present paper, we review the evolutionary transition from archaeon to eukaryote, and propose a new model for the emergence of the eukaryotic cell, the 'PhAT (phagocytosing archaeon theory)', which explains the emergence of the cellular and genomic features of eukaryotes in the light of a transiently complex phagocytosing archaeon.", "title": "" }, { "docid": "a900a7b1b6eff406fa42906ec5a31597", "text": "From wearables to smart appliances, the Internet of Things (IoT) is developing at a rapid pace. The challenge is to find the best fitting solution within a range of different technologies that all may be appropriate at the first sight to realize a specific embedded device. A single tool for measuring power consumption of various wireless technologies and low power modes helps to optimize the development process of modern IoT systems. In this paper, we present an accurate but still cost-effective measurement solution for tracking the highly dynamic power consumption of wireless embedded systems. We extended the conventional measurement of a single shunt resistor's voltage drop by using a dual shunt resistor stage with an automatic switch-over between two stages, which leads to a large dynamic measurement range from μA up to several hundreds mA. To demonstrate the usability of our simple-to-use power measurement system different use cases are presented. Using two independent current measurement channels allows to evaluate the timing relation of proprietary RF communication. Furthermore a forecast is given on the expected battery lifetime of a Wifi-based data acquisition system using measurement results of the presented tool.", "title": "" }, { "docid": "d9a22d66618371128078056f34a363a7", "text": "Vector embedding is a foundational building block of many deep learning models, especially in natural language processing. In this paper, we present a theoretical framework for understanding the effect of dimensionality on vector embeddings. We observe that the distributional hypothesis, a governing principle of statistical semantics, requires a natural unitary-invariance for vector embeddings. Motivated by the unitary-invariance observation, we propose the Pairwise Inner Product (PIP) loss, a unitary-invariant metric on the similarity between two embeddings. We demonstrate that the PIP loss captures the difference in functionality between embeddings, and that the PIP loss is tightly connect with two basic properties of vector embeddings, namely similarity and compositionality. By formulating the embedding training process as matrix factorization with noise, we reveal a fundamental bias-variance trade-off between the signal spectrum and noise power in the dimensionality selection process. This bias-variance trade-off sheds light on many empirical observations which have not been thoroughly explained, for example the existence of an optimal dimensionality. Moreover, we discover two new results about vector embeddings, namely their robustness against over-parametrization and their forward stability. 
The bias-variance trade-off of the PIP loss explicitly answers the fundamental open problem of dimensionality selection for vector embeddings.", "title": "" }, { "docid": "ff9ac94a02a799e63583127ac300b0b4", "text": "Latent variable models have been widely applied for the analysis and visualization of large datasets. In the case of sequential data, closed-form inference is possible when the transition and observation functions are linear. However, approximate inference techniques are usually necessary when dealing with nonlinear dynamics and observation functions. Here, we propose a novel variational inference framework for the explicit modeling of time series, Variational Inference for Nonlinear Dynamics (VIND), that is able to uncover nonlinear observation and transition functions from sequential data. The framework includes a structured approximate posterior, and an algorithm that relies on the fixed-point iteration method to find the best estimate for latent trajectories. We apply the method to several datasets and show that it is able to accurately infer the underlying dynamics of these systems, in some cases substantially outperforming state-of-the-art methods.", "title": "" }, { "docid": "0d8cb05f7ba3840e558247b4ee70dff6", "text": "Even though information visualization (InfoVis) research has matured in recent years, it is generally acknowledged that the field still lacks supporting, encompassing theories. In this paper, we argue that the distributed cognition framework can be used to substantiate the theoretical foundation of InfoVis. We highlight fundamental assumptions and theoretical constructs of the distributed cognition approach, based on the cognitive science literature and a real life scenario. We then discuss how the distributed cognition framework can have an impact on the research directions and methodologies we take as InfoVis researchers. Our contributions are as follows. First, we highlight the view that cognition is more an emergent property of interaction than a property of the human mind. Second, we argue that a reductionist approach to study the abstract properties of isolated human minds may not be useful in informing InfoVis design. Finally we propose to make cognition an explicit research agenda, and discuss the implications on how we perform evaluation and theory building.", "title": "" }, { "docid": "80c3aa4530dfa8c7c909da7dea9bed3a", "text": "We present a state-of-the-art algorithm for measuring the semantic similarity of word pairs using novel combinations of word embeddings, WordNet, and the concept dictionary 4lang. We evaluate our system on the SimLex-999 benchmark data. Our top score of 0.76 is higher than any published system that we are aware of, well beyond the average inter-annotator agreement of 0.67, and close to the 0.78 average correlation between a human rater and the average of all other ratings, suggesting that our system has achieved nearhuman performance on this benchmark.", "title": "" }, { "docid": "48a4d6b30131097d721905ae148a03dd", "text": "68 AI MAGAZINE ■ I claim that achieving real human-level artificial intelligence would necessarily imply that most of the tasks that humans perform for pay could be automated. Rather than work toward this goal of automation by building special-purpose systems, I argue for the development of general-purpose, educable systems that can learn and be taught to perform any of the thousands of jobs that humans can perform. 
Joining others who have made similar proposals, I advocate beginning with a system that has minimal, although extensive, built-in capabilities. These would have to include the ability to improve through learning along with many other abilities.", "title": "" }, { "docid": "12db7d3dfc43cef474acea4eaf5ba4c3", "text": "A growing list of medically important developmental defects and disease mechanisms can be traced to disruption of the planar cell polarity (PCP) pathway. The PCP system polarizes cells in epithelial sheets along an axis orthogonal to their apical-basal axis. Studies in the fruitfly, Drosophila, have suggested that components of the PCP signaling system function in distinct modules, and that these modules and the effector systems with which they interact function together to produce emergent patterns. Experimental methods allow the manipulation of individual PCP signaling molecules in specified groups of cells; these interventions not only perturb the polarization of the targeted cells at a subcellular level, but also perturb patterns of polarity at the multicellular level, often affecting nearby cells in characteristic ways. These kinds of experiments should, in principle, allow one to infer the architecture of the PCP signaling system, but the relationships between molecular interactions and tissue-level pattern are sufficiently complex that they defy intuitive understanding. Mathematical modeling has been an important tool to address these problems. This article explores the emergence of a local signaling hypothesis, and describes how a local intercellular signal, coupled with a directional cue, can give rise to global pattern. We will discuss the critical role mathematical modeling has played in guiding and interpreting experimental results, and speculate about future roles for mathematical modeling of PCP. Mathematical models at varying levels of inhibition have and are expected to continue contributing in distinct ways to understanding the regulation of PCP signaling.", "title": "" }, { "docid": "70cc8c058105b905eebdf941ca2d3f2e", "text": "Cloud computing is an emerging computing paradigm in which resources of the computing infrastructure are provided as services over the Internet. As promising as it is, this paradigm also brings forth many new challenges for data security and access control when users outsource sensitive data for sharing on cloud servers, which are not within the same trusted domain as data owners. To keep sensitive user data confidential against untrusted servers, existing solutions usually apply cryptographic methods by disclosing data decryption keys only to authorized users. However, in doing so, these solutions inevitably introduce a heavy computation overhead on the data owner for key distribution and data management when fine-grained data access control is desired, and thus do not scale well. The problem of simultaneously achieving fine-grainedness, scalability, and data confidentiality of access control actually still remains unresolved. This paper addresses this challenging open issue by, on one hand, defining and enforcing access policies based on data attributes, and, on the other hand, allowing the data owner to delegate most of the computation tasks involved in fine-grained data access control to untrusted cloud servers without disclosing the underlying data contents. We achieve this goal by exploiting and uniquely combining techniques of attribute-based encryption (ABE), proxy re-encryption, and lazy re-encryption. 
Our proposed scheme also has salient properties of user access privilege confidentiality and user secret key accountability. Extensive analysis shows that our proposed scheme is highly efficient and provably secure under existing security models.", "title": "" } ]
scidocsrr
0d72df77a7761f3cd6a6797be0a952eb
Differences in Critical Success Factors in ERP Systems Implementation in Australia and China: A Cultural Analysis
[ { "docid": "f61a7e280cffe673a9068cf33fd6f803", "text": "Enterprise Resource Planning (ERP) systems are highly integrated enterprise-wide information systems that automate core business processes. The ERP packages of vendors such as SAP, Baan, J.D. Edwards, Peoplesoft and Intentia represent more than a standard business platform, they prescribe information blueprints of how an organisation’s business processes should operate. In this paper the scale and strategic importance of ERP systems are identified and the problem of ERP implementation is defined. A Critical Success Factors (CSFs) framework is proposed to aid managers develop an ERP implementation strategy. The framework is illustrated using two case examples from a research sample of eight companies. The case analysis highlights the critical impact of legacy systems upon the implementation process, the importance of selecting an appropriate ERP strategy and identifies the importance of Business Process Change (BPC) and software configuration in addition to factors already cited in the literature. The implications of the results for managerial practice are described and future research opportunities are outlined.", "title": "" }, { "docid": "d170d7cf20b0a848bb0d81c5d163b505", "text": "The organizational and social issues associated with the development, implementation and use of computer-based information systems have increasingly attracted the attention of information systems researchers. Interest in qualitative research methods such as action research, case study research and ethnography, which focus on understanding social phenomena in their natural setting, has consequently grown. Case study research is the most widely used qualitative research method in information systems research, and is well suited to understanding the interactions between information technology-related innovations and organizational contexts. Although case study research is useful as ameans of studying information systems development and use in the field, there can be practical difficulties associated with attempting to undertake case studies as a rigorous and effective method of research. This paper addresses a number of these difficulties and offers some practical guidelines for successfully completing case study research. The paper focuses on the pragmatics of conducting case study research, and draws from the discussion at a panel session conducted by the authors at the 8th Australasian Conference on Information Systems, September 1997 (ACIS 97), from the authors' practical experiences, and from the case study research literature.", "title": "" } ]
[ { "docid": "d763198d3bfb1d30b153e13245c90c08", "text": "Inspired by the aerial maneuvering ability of lizards, we present the design and control of MSU (Michigan State University) tailbot - a miniature-tailed jumping robot. The robot can not only wheel on the ground, but also jump up to overcome obstacles. Moreover, once leaping into the air, it can control its body angle using an active tail to dynamically maneuver in midair for safe landings. We derive the midair dynamics equation and design controllers, such as a sliding mode controller, to stabilize the body at desired angles. To the best of our knowledge, this is the first miniature (maximum size 7.5 cm) and lightweight (26.5 g) robot that can wheel on the ground, jump to overcome obstacles, and maneuver in midair. Furthermore, tailbot is equipped with on-board energy, sensing, control, and wireless communication capabilities, enabling tetherless or autonomous operations. The robot in this paper exemplifies the integration of mechanical design, embedded system, and advanced control methods that will inspire the next-generation agile robots mimicking their biological counterparts. Moreover, it can serve as mobile sensor platforms for wireless sensor networks with many field applications.", "title": "" }, { "docid": "250fe1b4b9cb3ea8efc8e7b039dcba45", "text": "In this paper we present a WebVRGIS based Interactive On line 3D Virtual Community which is achieved based on WebGIS technology and web VR technology. It is Multi-Dimensional(MD) web geographic information system (WebGIS) based 3D interactive on line virtual community which is a virtual real-time 3D communication systems and web systems development platform. It is capable of running on a variety of browsers. In this work, four key issues are studied: (1) Multi-source MD geographical data fusion of the WebGIS, (2) scene combination with 3D avatar, (3) massive data network dispatch, and (4) multi-user avatar real-time interactive. Our system is divided into three modules: data preprocessing, background management and front end user interaction. The core of the front interaction module is packaged in the MD map expression engine 3GWebMapper and the free plug-in network 3D rendering engine WebFlashVR. We have evaluated the robustness of our system on three campus of Ocean University of China(OUC) as a testing base. The results shows high efficiency, easy to use and robustness of our system.", "title": "" }, { "docid": "bbf5561f88f31794ca95dd991c074b98", "text": "O CTO B E R 2014 | Volume 18, Issue 4 GetMobile Every time you use a voice command on your smartphone, you are benefitting from a technique called cloud offload. Your speech is captured by a microphone, pre-processed, then sent over a wireless network to a cloud service that converts speech to text. The result is then forwarded to another cloud service or sent back to your mobile device, depending on the application. Speech recognition and many other resource-intensive mobile services require cloud offload. Otherwise, the service would be too slow and drain too much of your battery. Research projects on cloud offload are hot today, with MAUI [4] in 2010, Odessa [13] and CloneCloud [2] in 2011, and COMET [8] in 2012. These build on a rich heritage of work dating back to the mid-1990s on a theme that is broadly characterized as cyber foraging. They are also relevant to the concept of cloudlets [18] that has emerged as an important theme in mobile-cloud convergence. 
Reflecting my participation in this evolution from its origins, this article is a personal account of the key developments in this research area. It focuses on mobile computing, ignoring many other uses of remote execution since the 1980s such as distributed processing, query processing, distributed object systems, and distributed partitioning.", "title": "" }, { "docid": "d1fa477646e636a3062312d6f6444081", "text": "This paper proposes a novel attention model for semantic segmentation, which aggregates multi-scale and context features to refine prediction. Specifically, the skeleton convolutional neural network framework takes in multiple different scales inputs, by which means the CNN can get representations in different scales. The proposed attention model will handle the features from different scale streams respectively and integrate them. Then location attention branch of the model learns to softly weight the multi-scale features at each pixel location. Moreover, we add an recalibrating branch, parallel to where location attention comes out, to recalibrate the score map per class. We achieve quite competitive results on PASCAL VOC 2012 and ADE20K datasets, which surpass baseline and related works.", "title": "" }, { "docid": "12f2fe4f71f399dd3d40f67bc94b5607", "text": "This paper presents a novel 3D shape retrieval method, which uses Bag-of-Features and an efficient multi-view shape matching scheme. In our approach, a properly normalized object is first described by a set of depth-buffer views captured on the surrounding vertices of a given unit geodesic sphere. We then represent each view as a word histogram generated by the vector quantization of the view’s salient local features. The dissimilarity between two 3D models is measured by the minimum distance of their all (24) possible matching pairs. This paper also investigates several critical issues including the influence of the number of views, codebook, training data, and distance function. Experiments on four commonly-used benchmarks demonstrate that: 1) Our approach obtains superior performance in searching for rigid models. 2) The local feature and global feature based methods are somehow complementary. Moreover, a linear combination of them significantly outperforms the state-of-the-art in terms of retrieval accuracy.", "title": "" }, { "docid": "02775a335447c9d1b01d7012e25032f6", "text": "Automatic number plate recognition (ANPR) is an important image processing technology used to recognise number plates of vehicles. Plate localisation and character recognition are two stages of ANPR. In this paper, a methodology has been proposed to develop robust ANPR system. A new algorithm has been proposed for number plate localisation which is based on character positioning method. Character recognition is done with support vector machine in which feature vector is calculated by recursive sub-divisions of character image. The problem of similar shape characters has been solved by syntactic analysis of number plate format for a particular geographical region. The system has been tested on 419 sample images from various countries with various variations in viewing angles, illuminations and distances. Experimental results show that the proposed system detects number plates and recognise characters successfully. 
The overall success rate of plate localisation is 97.21% and recognition of number is 95.06%.", "title": "" }, { "docid": "9365a612900a8bf0ddef8be6ec17d932", "text": "Stabilization exercise program has become the most popular treatment method in spinal rehabilitation since it has shown its effectiveness in some aspects related to pain and disability. However, some studies have reported that specific exercise program reduces pain and disability in chronic but not in acute low back pain, although it can be helpful in the treatment of acute low back pain by reducing recurrence rate (Ferreira et al., 2006).", "title": "" }, { "docid": "670e3f4fdb4a66de74ae740ae19aa260", "text": "The adsorption and desorption of D2O on hydrophobic activated carbon fiber (ACF) occurs at a smaller pressure than the adsorption and desorption of H2O. The behavior of the critical desorption pressure difference between D2O and H2O in the pressure range of 1.25-1.80kPa is applied to separate low concentrated D2O from water using the hydrophobic ACF, because the desorption branches of D2O and H2O drop almost vertically. The deuterium concentration of all desorbed water in the above pressure range is lower than that of water without adsorption-treatment on ACF. The single adsorption-desorption procedure on ACF at 1.66kPa corresponding to the maximum difference of adsorption amount between D2O and H2O reduced the deuterium concentration of desorbed water to 130.6ppm from 143.0ppm. Thus, the adsorption-desorption procedure of water on ACF is a promising separation and concentration method of low concentrated D2O from water.", "title": "" }, { "docid": "ecaf322e67c43b7d54a05de495a443eb", "text": "Recently, considerable effort has been devoted to deep domain adaptation in computer vision and machine learning communities. However, most of existing work only concentrates on learning shared feature representation by minimizing the distribution discrepancy across different domains. Due to the fact that all the domain alignment approaches can only reduce, but not remove the domain shift, target domain samples distributed near the edge of the clusters, or far from their corresponding class centers are easily to be misclassified by the hyperplane learned from the source domain. To alleviate this issue, we propose to joint domain alignment and discriminative feature learning, which could benefit both domain alignment and final classification. Specifically, an instance-based discriminative feature learning method and a center-based discriminative feature learning method are proposed, both of which guarantee the domain invariant features with better intra-class compactness and inter-class separability. Extensive experiments show that learning the discriminative features in the shared feature space can significantly boost the performance of deep domain adaptation methods.", "title": "" }, { "docid": "bbd378407abb1c2a9a5016afee40c385", "text": "One approach to the generation of natural-sounding synthesized speech waveforms is to select and concatenate units from a large speech database. Units (in the current work, phonemes) are selected to produce a natural realisation of a target phoneme sequence predicted from text which is annotated with prosodic and phonetic context information. 
We propose that the units in a synthesis database can be considered as a state transition network in which the state occupancy cost is the distance between a database unit and a target, and the transition cost is an estimate of the quality of concatenation of two consecutive units. This framework has many similarities to HMM-based speech recognition. A pruned Viterbi search is used to select the best units for synthesis from the database. This approach to waveform synthesis permits training from natural speech: two methods for training from speech are presented which provide weights which produce more natural speech than can be obtained by hand-tuning.", "title": "" }, { "docid": "472e9807c2f4ed6d1e763dd304f22c64", "text": "Commercial analytical database systems suffer from a high \"time-to-first-analysis\": before data can be processed, it must be modeled and schematized (a human effort), transferred into the database's storage layer, and optionally clustered and indexed (a computational effort). For many types of structured data, this upfront effort is unjustifiable, so the data are processed directly over the file system using the Hadoop framework, despite the cumulative performance benefits of processing this data in an analytical database system. In this paper we describe a system that achieves the immediate gratification of running MapReduce jobs directly over a file system, while still making progress towards the long-term performance benefits of database systems. The basic idea is to piggyback on MapReduce jobs, leverage their parsing and tuple extraction operations to incrementally load and organize tuples into a database system, while simultaneously processing the file system data. We call this scheme Invisible Loading, as we load fractions of data at a time at almost no marginal cost in query latency, but still allow future queries to run much faster.", "title": "" }, { "docid": "ee73fa4e07cea9aeae79c5144923a018", "text": "Omega-6 (n-6) polyunsaturated fatty acids (PUFA) (e.g., arachidonic acid (AA)) and omega-3 (n-3) PUFA (e.g., eicosapentaenoic acid (EPA)) are precursors to potent lipid mediator signalling molecules, termed \"eicosanoids,\" which have important roles in the regulation of inflammation. In general, eicosanoids derived from n-6 PUFA are proinflammatory while eicosanoids derived from n-3 PUFA are anti-inflammatory. Dietary changes over the past few decades in the intake of n-6 and n-3 PUFA show striking increases in the (n-6) to (n-3) ratio (~15 : 1), which are associated with greater metabolism of the n-6 PUFA compared with n-3 PUFA. Coinciding with this increase in the ratio of (n-6) : (n-3) PUFA are increases in chronic inflammatory diseases such as nonalcoholic fatty liver disease (NAFLD), cardiovascular disease, obesity, inflammatory bowel disease (IBD), rheumatoid arthritis, and Alzheimer's disease (AD). By increasing the ratio of (n-3) : (n-6) PUFA in the Western diet, reductions may be achieved in the incidence of these chronic inflammatory diseases.", "title": "" }, { "docid": "c96e8afc0c3e0428a257ba044cd2a35a", "text": "The tumor necrosis factor ligand superfamily member receptor activator of nuclear factor-kB (NF-kB) ligand (RANKL), its cellular receptor, receptor activator of NF-kB (RANK), and the decoy receptor, osteoprotegerin (OPG) represent a novel cytokine triad with pleiotropic effects on bone metabolism, the immune system, and endocrine functions (1). 
RANKL is produced by osteoblastic lineage cells and activated T lymphocytes (2– 4) and stimulates its receptor, RANK, which is located on osteoclasts and dendritic cells (DC) (4, 5). The effects of RANKL within the skeleton include osteoblast –osteoclast cross-talks, resulting in enhanced differentiation, fusion, activation, and survival of osteoclasts (3, 6), while in the immune system, RANKL promotes the survival and immunostimulatory capacity of DC (1, 7). OPG acts as a soluble decoy receptor that neutralizes RANKL, thus preventing activation of RANK (8). The RANKL/RANK/OPG system has been implicated in various skeletal and immune-mediated diseases characterized by increased bone resorption and bone loss, including several forms of osteoporosis (postmenopausal, glucocorticoid-induced, and senile osteoporosis) (9), bone metastases (10), periodontal disease (11), and rheumatoid arthritis (2). While a relative deficiency of OPG has been found to be associated with osteoporosis in various animal models (9), the parenteral administration of OPG to postmenopausal women (3 mg/kg) was beneficial in rapidly reducing enhanced biochemical markers of bone turnover by 30–80% (12). These studies have clearly established the RANKL/ OPG system as a key cytokine network involved in the regulation of bone cell biology, osteoblast–osteoclast and bone-immune cross-talks, and maintenance of bone mass. In addition to providing substantial and detailed insights into the pathogenesis of various metabolic bone diseases, the administration of OPG may become a promising therapeutic option in the prevention and treatment of benign and malignant bone disease. Several studies have attempted to evaluate the clinical relevance and potential applications of serum OPG measurements in humans. Yano et al. were the first to assess systematically OPG serum levels (by an ELISA system) in women with osteoporosis (13). Intriguingly, OPG serum levels were negatively correlated with bone mineral density (BMD) at various sites (lumbar spine, femoral neck, and total body) and positively correlated with biochemical markers of bone turnover. In view of the established protective effects of OPG on bone, these findings came as a surprise, and were interpreted as an insufficient counter-regulatory mechanism to prevent bone loss. Another group which employed a similar design (but a different OPG ELISA system) could not detect a correlation between OPG serum levels and biochemical markers of bone turnover (14), but confirmed the negative correlation of OPG serum concentrations with BMD in postmenopausal women (15). In a recent study, Szulc and colleagues (16) evaluated OPG serum levels in an age-stratified male cohort, and observed positive correlations of OPG serum levels with bioavailable testosterone and estrogen levels, negative correlations with parathyroid hormone (PTH) serum levels and urinary excretion of total deoxypyridinoline, but no correlation with BMD at any site (16). The finding that PTH serum levels and gene expression of OPG by bone cells are inversely correlated was also reported in postmenopausal women (17), and systemic administration of human PTH(1-34) to postmenopausal women with osteoporosis inhibited circulating OPG serum levels (18). Finally, a study of patients with renal diseases showed a decline of serum OPG levels following initiation of systemic glucocorticoid therapy (19). 
The regulation pattern of OPG by systemic hormones has been described in vitro, and has led to the hypothesis that most hormones and cytokines regulate bone resorption by modulating either RANKL, OPG, or both (9). Interestingly, several studies showed that serum OPG levels increased with ageing and were higher in postmenopausal women (who have an increased rate of bone loss) as compared with men, thus supporting the hypothesis of a counter-regulatory function of OPG in order to prevent further bone loss (13 –16). In this issue of the Journal, Ueland and associates (20) add another important piece to the picture of OPG regulation in humans in vivo. By studying well-characterized patient cohorts with endocrine and immune diseases such as Cushing’s syndrome, acromegaly, growth hormone deficiency, HIV infection, and common variable immunodeficiency (CVI), the investigators reported European Journal of Endocrinology (2001) 145 681–683 ISSN 0804-4643", "title": "" }, { "docid": "8308358ee1d9040b3f62b646edcc8578", "text": "The application of GaN on SiC technology to wideband power amplifier MMICs is explored. The unique characteristics of GaN on SiC applied to reactively matched and distributed wideband circuit topologies are discussed, including comparison to GaAs technology. A 2 – 18 GHz 11W power amplifier MMIC is presented as an example.", "title": "" }, { "docid": "b19c8dab4c214b8afbc232b91ab35b25", "text": "BACKGROUND\nMobile health (mHealth) apps for weight loss (weight loss apps) can be useful diet and exercise tools for individuals in need of losing weight. Most studies view weight loss app users as these types of individuals, but not all users have the same needs. In fact, users with disordered eating behaviors who desire to be underweight are also utilizing weight loss apps; however, few studies give a sense of the prevalence of these users in weight loss app communities and their perceptions of weight loss apps in relation to disordered eating behaviors.\n\n\nOBJECTIVE\nThe aim of this study was to provide an analysis of users' body mass indices (BMIs) in a weight loss app community and examples of how users with underweight BMI goals perceive the impact of the app on disordered eating behaviors.\n\n\nMETHODS\nWe focused on two aspects of a weight loss app (DropPounds): profile data and forum posts, and we moved from a broader picture of the community to a narrower focus on users' perceptions. We analyzed profile data to better understand the goal BMIs of all users, highlighting the prevalence of users with underweight BMI goals. Then we explored how users with a desire to be underweight discussed the weight loss app's impact on disordered eating behaviors.\n\n\nRESULTS\nWe found three main results: (1) no user (regardless of start BMI) starts with a weight gain goal, and most users want to lose weight; (2) 6.78% (1261/18,601) of the community want to be underweight, and most identify as female; (3) users with underweight BMI goals tend to view the app as positive, especially for reducing bingeing; however, some acknowledge its role in exacerbating disordered eating behaviors.\n\n\nCONCLUSIONS\nThese findings are important for our understanding of the different types of users who utilize weight loss apps, the perceptions of weight loss apps related to disordered eating, and how weight loss apps may impact users with a desire to be underweight. 
Whereas these users had underweight goals, they often view the app as helpful in reducing disordered eating behaviors, which led to additional questions. Therefore, future research is needed.", "title": "" }, { "docid": "236df0b29650785a11562d7285b064db", "text": "Despite the large number of both commercial and academic methods for Automatic License Plate Recognition (ALPR), most existing approaches are focused on a specific license plate (LP) region (e.g. European, US, Brazilian, Taiwanese, etc.), and frequently explore datasets containing approximately frontal images. This work proposes a complete ALPR system focusing on unconstrained capture scenarios, where the LP might be considerably distorted due to oblique views. Our main contribution is the introduction of a novel Convolutional Neural Network (CNN) capable of detecting and rectifying multiple distorted license plates in a single image, which are fed to an Optical Character Recognition (OCR) method to obtain the final result. As an additional contribution, we also present manual annotations for a challenging set of LP images from different regions and acquisition conditions. Our experimental results indicate that the proposed method, without any parameter adaptation or fine tuning for a specific scenario, performs similarly to state-of-the-art commercial systems in traditional scenarios, and outperforms both academic and commercial approaches in challenging ones.", "title": "" }, { "docid": "f08107cd8af2bfe78b2004740e27677c", "text": "(i) You are allowed to freely download, share, print, or photocopy this document. (ii) You are not allowed to modify, sell, or claim authorship of any part of this document. (iii) We thank you for any feedback information, including errors, suggestions, evaluations, and teaching or research uses.", "title": "" }, { "docid": "faa077308647a951cc31b4f3efdbca2b", "text": "This letter presents the design, manufacturing, and operational performance of a graphene-flakes-based screen-printed wideband elliptical dipole antenna operating from 2 up to 5 GHz for low-cost wireless communications applications. To investigate radio frequency (RF) conductivity of the printed graphene, a coplanar waveguide (CPW) test structure was designed, fabricated, and tested in the frequency range from 1 to 20 GHz. Antenna and CPW were screen-printed on Kapton substrates using a graphene paste formulated with a graphene-to-binder ratio of 1:2. A combination of thermal treatment and subsequent compression rolling is utilized to further decrease the sheet resistance for printed graphene structures, ultimately reaching 4 Ω/□ at 10-μ m thicknesses. For the graphene-flakes printed antenna, an antenna efficiency of 60% is obtained. The measured maximum antenna gain is 2.3 dBi at 4.8 GHz. Thus, the graphene-flakes printed antenna adds a total loss of only 3.1 dB to an RF link when compared to the same structure screen-printed for reference with a commercial silver ink. This shows that the electrical performance of screen-printed graphene flakes, which also does not degrade after repeated bending, is suitable for realizing low-cost wearable RF wireless communication devices.", "title": "" }, { "docid": "c0dd3979344c5f327fe447f46c13cffc", "text": "Clinicians and researchers often ask patients to remember their past pain. They also use patient's reports of relief from pain as evidence of treatment efficacy, assuming that relief represents the difference between pretreatment pain and present pain. 
We have estimated the accuracy of remembering pain and described the relationship between remembered pain, changes in pain levels and reports of relief during treatment. During a 10-week randomized controlled clinical trial on the effectiveness of oral appliances for the management of chronic myalgia of the jaw muscles, subjects recalled their pretreatment pain and rated their present pain and perceived relief. Multiple regression analysis and repeated measures analyses of variance (ANOVA) were used for data analysis. Memory of the pretreatment pain was inaccurate and the errors in recall got significantly worse with the passage of time (P < 0.001). Accuracy of recall for pretreatment pain depended on the level of pain before treatment (P < 0.001): subjects with low pretreatment pain exaggerated its intensity afterwards, while it was underestimated by those with the highest pretreatment pain. Memory of pretreatment pain was also dependent on the level of pain at the moment of recall (P < 0.001). Ratings of relief increased over time (P < 0.001), and were dependent on both present and remembered pain (Ps < 0.001). However, true changes in pain were not significantly related to relief scores (P = 0.41). Finally, almost all patients reported relief, even those whose pain had increased. These results suggest that reports of perceived relief do not necessarily reflect true changes in pain.", "title": "" }, { "docid": "ef2738cfced7ef069b13e5b5dca1558b", "text": "Organic agriculture (OA) is practiced on 1% of the global agricultural land area and its importance continues to grow. Specifically, OA is perceived by many as having less Advances inAgronomy, ISSN 0065-2113 © 2016 Elsevier Inc. http://dx.doi.org/10.1016/bs.agron.2016.05.003 All rights reserved. 1 ARTICLE IN PRESS", "title": "" } ]
scidocsrr
35d63c4d5d92cdf3ca01de6c792a56cf
Consumption of fermented milk product with probiotic modulates brain activity.
[ { "docid": "79c5513abeb58c8735f823258f0bd3e7", "text": "Putting feelings into words (affect labeling) has long been thought to help manage negative emotional experiences; however, the mechanisms by which affect labeling produces this benefit remain largely unknown. Recent neuroimaging studies suggest a possible neurocognitive pathway for this process, but methodological limitations of previous studies have prevented strong inferences from being drawn. A functional magnetic resonance imaging study of affect labeling was conducted to remedy these limitations. The results indicated that affect labeling, relative to other forms of encoding, diminished the response of the amygdala and other limbic regions to negative emotional images. Additionally, affect labeling produced increased activity in a single brain region, right ventrolateral prefrontal cortex (RVLPFC). Finally, RVLPFC and amygdala activity during affect labeling were inversely correlated, a relationship that was mediated by activity in medial prefrontal cortex (MPFC). These results suggest that affect labeling may diminish emotional reactivity along a pathway from RVLPFC to MPFC to the amygdala.", "title": "" } ]
[ { "docid": "a09866f7077022fa5b00b3380dd70b24", "text": "Light can elicit acute physiological and alerting responses in humans, the magnitude of which depends on the timing, intensity, and duration of light exposure. Here, we report that the alerting response of light as well as its effects on thermoregulation and heart rate are also wavelength dependent. Exposure to 2 h of monochromatic light at 460 nm in the late evening induced a significantly greater melatonin suppression than occurred with 550-nm monochromatic light, concomitant with a significantly greater alerting response and increased core body temperature and heart rate ( approximately 2.8 x 10(13) photons/cm(2)/sec for each light treatment). Light diminished the distal-proximal skin temperature gradient, a measure of the degree of vasoconstriction, independent of wavelength. Nonclassical ocular photoreceptors with peak sensitivity around 460 nm have been found to regulate circadian rhythm function as measured by melatonin suppression and phase shifting. Our findings-that the sensitivity of the human alerting response to light and its thermoregulatory sequelae are blue-shifted relative to the three-cone visual photopic system-indicate an additional role for these novel photoreceptors in modifying human alertness, thermophysiology, and heart rate.", "title": "" }, { "docid": "2bbfa2f3d6db8ec0c3dd03ff1f25c52d", "text": "Learning sophisticated feature interactions behind user behaviors is critical in maximizing CTR for recommender systems. Despite great progress, existing methods have a strong bias towards lowor high-order interactions, or rely on expertise feature engineering. In this paper, we show that it is possible to derive an end-to-end learning model that emphasizes both lowand high-order feature interactions. The proposed framework, DeepFM, combines the power of factorization machines for recommendation and deep learning for feature learning in a new neural network architecture. Compared to the latest Wide & Deep model from Google, DeepFM has a shared raw feature input to both its “wide” and “deep” components, with no need of feature engineering besides raw features. DeepFM, as a general learning framework, can incorporate various network architectures in its deep component. In this paper, we study two instances of DeepFM where its “deep” component is DNN and PNN respectively, for which we denote as DeepFM-D and DeepFM-P. Comprehensive experiments are conducted to demonstrate the effectiveness of DeepFM-D and DeepFM-P over the existing models for CTR prediction, on both benchmark data and commercial data. We conduct online A/B test in Huawei App Market, which reveals that DeepFM-D leads to more than 10% improvement of click-through rate in the production environment, compared to a well-engineered LR model. We also covered related practice in deploying our framework in Huawei App Market.", "title": "" }, { "docid": "1d11b3ddedc72cdcb3002c149ea41316", "text": "The \\emph{wavelet tree} data structure is a space-efficient technique for rank and select queries that generalizes from binary characters to an arbitrary multicharacter alphabet. It has become a key tool in modern full-text indexing and data compression because of its capabilities in compressing, indexing, and searching. We present a comparative study of its practical performance regarding a wide range of options on the dimensions of different coding schemes and tree shapes. 
Our results are both theoretical and experimental: (1)~We show that the run-length $\\delta$ coding size of wavelet trees achieves the 0-order empirical entropy size of the original string with leading constant 1, when the string's 0-order empirical entropy is asymptotically less than the logarithm of the alphabet size. This result complements the previous works that are dedicated to analyzing run-length $\\gamma$-encoded wavelet trees. It also reveals the scenarios when run-length $\\delta$ encoding becomes practical. (2)~We introduce a full generic package of wavelet trees for a wide range of options on the dimensions of coding schemes and tree shapes. Our experimental study reveals the practical performance of the various modifications.", "title": "" }, { "docid": "67826169bd43d22679f93108aab267a2", "text": "Nonnegative matrix factorization (NMF) has become a widely used tool for the analysis of high-dimensional data as it automatically extracts sparse and meaningful features from a set of nonnegative data vectors. We first illustrate this property of NMF on three applications, in image processing, text mining and hyperspectral imaging –this is the why. Then we address the problem of solving NMF, which is NP-hard in general. We review some standard NMF algorithms, and also present a recent subclass of NMF problems, referred to as near-separable NMF, that can be solved efficiently (that is, in polynomial time), even in the presence of noise –this is the how. Finally, we briefly describe some problems in mathematics and computer science closely related to NMF via the nonnegative rank.", "title": "" }, { "docid": "1cf4e42c496c97b5b153e680606cd07a", "text": "The remarkable success of machine learning, especially deep learning, has produced a variety of cloud-based services for mobile users. Such services require an end user to send data to the service provider, which presents a serious challenge to end-user privacy. To address this concern, prior works either add noise to the data or send features extracted from the raw data. They struggle to balance between the utility and privacy because added noise reduces utility and raw data can be reconstructed from extracted features. This work represents a methodical departure from prior works: we balance between a measure of privacy and another of utility by leveraging adversarial learning to find a sweeter tradeoff. We design an encoder that optimizes against the reconstruction error (a measure of privacy), adversarially by a Decoder, and the inference accuracy (a measure of utility) by a Classifier. The result is RAN, a novel deep model with a new training algorithm that automatically extracts features for classification that are both private and useful. It turns out that adversarially forcing the extracted features to only conveys the intended information required by classification leads to an implicit regularization leading to better classification accuracy than the original model which completely ignores privacy. Thus, we achieve better privacy with better utility, a surprising possibility in machine learning! We conducted extensive experiments on five popular datasets over four training schemes, and demonstrate the superiority of RAN compared with existing alternatives.", "title": "" }, { "docid": "0c2a2cb741d1d22c5ef3eabd0b525d8d", "text": "Part-of-speech (POS) tagging is a process of assigning the words in a text corresponding to a particular part of speech. 
A fundamental version of POS tagging is the identification of words as nouns, verbs, adjectives etc. For processing natural languages, Part of Speech tagging is a prominent tool. It is one of the simplest as well as most constant and statistical model for many NLP applications. POS Tagging is an initial stage of linguistics, text analysis like information retrieval, machine translator, text to speech synthesis, information extraction etc. In POS Tagging we assign a Part of Speech tag to each word in a sentence and literature. Various approaches have been proposed to implement POS taggers. In this paper we present a Marathi part of speech tagger. It is morphologically rich language. Marathi is spoken by the native people of Maharashtra. The general approach used for development of tagger is statistical using Unigram, Bigram, Trigram and HMM Methods. It presents a clear idea about all the algorithms with suitable examples. It also introduces a tag set for Marathi which can be used for tagging Marathi text. In this paper we have shown the development of the tagger as well as compared to check the accuracy of taggers output. The three Marathi POS taggers viz. Unigram, Bigram, Trigram and HMM gives the accuracy of 77.38%, 90.30%, 91.46% and 93.82% respectively.", "title": "" }, { "docid": "149d66b68ea0fa665533bf986f89666f", "text": "This paper introduces a novel software package for the simulation of various types of range scanners. The goal is to provide researchers in the elds of obstacle detection, range data segmentation, obstacle tracking or surface reconstruction with a versatile and powerful software package that is easy to use and allows to focus on algorithmic improvements rather than on building the software framework around it. The simulation environment and the actual simulations can be e ciently distributed with a single compact le. Our proposed approach facilitates easy regeneration of published results, hereby highlighting the value of reproducible research.", "title": "" }, { "docid": "b3a85b88e4a557fcb7f0efb6ba628418", "text": "We present the bilateral solver, a novel algorithm for edgeaware smoothing that combines the flexibility and speed of simple filtering approaches with the accuracy of domain-specific optimization algorithms. Our technique is capable of matching or improving upon state-of-the-art results on several different computer vision tasks (stereo, depth superresolution, colorization, and semantic segmentation) while being 10-1000× faster than baseline techniques with comparable accuracy, and producing lower-error output than techniques with comparable runtimes. The bilateral solver is fast, robust, straightforward to generalize to new domains, and simple to integrate into deep learning pipelines.", "title": "" }, { "docid": "5b41a7c287b54b16e9d791cb62d7aa5a", "text": "Recent evidence demonstrates that children are selective in their social learning, preferring to learn from a previously accurate speaker than from a previously inaccurate one. We examined whether children assessing speakers' reliability take into account how speakers achieved their prior accuracy. In Study 1, when faced with two accurate informants, 4- and 5-year-olds (but not 3-year-olds) were more likely to seek novel information from an informant who had previously given the answers unaided than from an informant who had always relied on help from a third party. 
Similarly, in Study 2, 4-year-olds were more likely to trust the testimony of an unaided informant over the testimony provided by an assisted informant. Our results indicate that when children reach around 4 years of age, their selective trust extends beyond simple generalizations based on informants' past accuracy to a more sophisticated selectivity that distinguishes between truly knowledgeable informants and merely accurate informants who may not be reliable in the long term.", "title": "" }, { "docid": "84037cd25cb12f6f823da8170a843f75", "text": "This paper presents a topology-based representation dedicated to complex indoor scenes. It accounts for memory management and performances during modelling, visualization and lighting simulation. We propose to enlarge a topological model (called generalized maps) with multipartition and hierarchy. Multipartition allows the user to group objects together according to semantics. Hierarchy provides a coarse-to-fine description of the environment. The topological model we propose has been used for devising a modeller prototype and generating efficient data structure in the context of visualization, global illumination and 1 GHz wave propagation simulation. We presently handle buildings composed of up to one billion triangles.", "title": "" }, { "docid": "61c68d03ed5769bf4c061ba78624cc7f", "text": "Extant xenarthrans (armadillos, anteaters and sloths) are among the most derived placental mammals ever evolved. South America was the cradle of their evolutionary history. During the Tertiary, xenarthrans experienced an extraordinary radiation, whereas South America remained isolated from other continents. The 13 living genera are relics of this earlier diversification and represent one of the four major clades of placental mammals. Sequences of the three independent protein-coding nuclear markers alpha2B adrenergic receptor (ADRA2B), breast cancer susceptibility (BRCA1), and von Willebrand Factor (VWF) were determined for 12 of the 13 living xenarthran genera. Comparative evolutionary dynamics of these nuclear exons using a likelihood framework revealed contrasting patterns of molecular evolution. All codon positions of BRCA1 were shown to evolve in a strikingly similar manner, and third codon positions appeared less saturated within placentals than those of ADRA2B and VWF. Maximum likelihood and Bayesian phylogenetic analyses of a 47 placental taxa data set rooted by three marsupial outgroups resolved the phylogeny of Xenarthra with some evidence for two radiation events in armadillos and provided a strongly supported picture of placental interordinal relationships. This topology was fully compatible with recent studies, dividing placentals into the Southern Hemisphere clades Afrotheria and Xenarthra and a monophyletic Northern Hemisphere clade (Boreoeutheria) composed of Laurasiatheria and Euarchontoglires. Partitioned likelihood statistical tests of the position of the root, under different character partition schemes, identified three almost equally likely hypotheses for early placental divergences: a basal Afrotheria, an Afrotheria + Xenarthra clade, or a basal Xenarthra (Epitheria hypothesis). We took advantage of the extensive sampling realized within Xenarthra to assess its impact on the location of the root on the placental tree. 
By resampling taxa within Xenarthra, the conservative Shimodaira-Hasegawa likelihood-based test of alternative topologies was shown to be sensitive to both character and taxon sampling.", "title": "" }, { "docid": "44b7ed6c8297b6f269c8b872b0fd6266", "text": "vii", "title": "" }, { "docid": "f6417f30a8f0358f73ac25e15c9016cd", "text": "Due to large quantity of data needed for image synthesis in SAR applications, methods of raw signal compression were developed alongside actual imaging systems. Although performance of modern processing units allows on-platform, online image synthesis, data compressor still can be a valuable addition. Since it is no longer necessary part of SAR system, it should be delivered in a flexible, easy to use and low cost form - like low-resources demanding Intellectual Property core. In this paper chosen properties of raw SAR signal and some of compression methods are presented followed by compressor IP core implementation results.", "title": "" }, { "docid": "5858927c35f9e050e65b101961945727", "text": "Percutaneous endoscopic gastrostomy (PEG) tube placement is a well-established procedure in adults as well as in pediatric patients who cannot be orally fed. However, potential serious complications may occur. The buried bumper syndrome is a well-recognized long-term complication of PEG. Overgrowth of gastric mucosa over the inner bumper of the tube will cause mechanical failure of formula delivery, rendering the tube useless. However, published experience in children with buried bumper syndrome is very scarce. In the authors' clinic, 76 PEG tubes were placed from 2001 to 2008, and buried bumper syndrome occurred in 1 patient. The authors report on their experience with buried bumper syndrome, an adapted safe endoscopic removal technique, as well as recommendations for prevention of buried bumper syndrome.", "title": "" }, { "docid": "a3ea6fad86fe124aa68e0865b432ab32", "text": "This paper mainly addressed the kinematics and dynamics simulation of the Slider-Crank mechanism. After proposing a mathematical model for the forward displacement of the slider-crank mechanism, the mathematical models for the forward velocity and acceleration of the slider-crank mechanism are constructed, respectively. According to the theory of statical equilibrium, the mathematical model for the forward dynamics of the slider-crank mechanism is constituted as well based on the acceleration analysis of each component part of this mechanism under consideration. Taking into account of mathematical models for the forward kinematics and dynamics of the slider-crank mechanism, simulation models for the forward kinematics and dynamics of the slider-crank mechanism are constituted in the Matlab/Simulink simulation platform and the forward kinematics and dynamics simulation of the slider-crank mechanism was successfully accomplished based on Matlab/Simulink by which an arduous and complicated mathematical manipulation can be avoided and a lot of computation time can be saved. Examples of the simulation for the forward kinematics and dynamics of a slider-crank mechanism are given to demonstrate the above-mentioned theoretical results.", "title": "" }, { "docid": "3156539889e42e1796ae2f280d0bbaf5", "text": "ETL process (Extracting-Transforming-Loading) is responsible for (E)xtracting data from heterogeneous sources, (T)ransforming and finally (L)oading them into a data warehouse (DW). 
Nowadays, Internet and Web 2.0 are generating data at an increasing rate, and therefore put the information systems (IS) face to the challenge of big data. Data integration systems and ETL, in particular, should be revisited and adapted and the well-known solution is based on the data distribution and the parallel/distributed processing. Among all the dimensions defining the complexity of the big data, we focus in this paper on its excessive \"volume\" in order to ensure good performance for ETL processes. In this context, we propose an original approach called Big-ETL (ETL Approach for Big Data) in which we define ETL functionalities that can be run easily on a cluster of computers with MapReduce (MR) paradigm. Big-ETL allows, thereby, parallelizing/distributing ETL at two levels: (i) the ETL process level (coarse granularity level), and (ii) the functionality level (fine level); this allows improving further the ETL performance.", "title": "" }, { "docid": "373756333a3079dccd553fd3fe5a1974", "text": "Research in statistical machine translation (SMT) is largely driven by formal translation tasks, while translating informal text is much more challenging. In this paper we focus on SMT for the informal genre of dialogues, which has rarely been addressed to date. Concretely, we investigate the effect of dialogue acts, speakers, gender, and text register on SMT quality when translating fictional dialogues. We first create and release a corpus of multilingual movie dialogues annotated with these four dialogue-specific aspects. When measuring translation performance for each of these variables, we find that BLEU fluctuations between their categories are often significantly larger than randomly expected. Following this finding, we hypothesize and show that SMT of fictional dialogues benefits from adaptation towards dialogue acts and registers. Finally, we find that male speakers are harder to translate and use more vulgar language than female speakers, and that vulgarity is often not preserved during translation.", "title": "" }, { "docid": "c572efe3e0d84691a31917afa0478929", "text": "Sparse representation has attracted much attention from researchers in fields of signal processing, image processing, computer vision, and pattern recognition. Sparse representation also has a good reputation in both theoretical research and practical applications. Many different algorithms have been proposed for sparse representation. The main purpose of this paper is to provide a comprehensive study and an updated review on sparse representation and to supply guidance for researchers. The taxonomy of sparse representation methods can be studied from various viewpoints. For example, in terms of different norm minimizations used in sparsity constraints, the methods can be roughly categorized into five groups: 1) sparse representation with l0-norm minimization; 2) sparse representation with lp-norm (0 <; p <; 1) minimization; 3) sparse representation with l1-norm minimization; 4) sparse representation with l2,1-norm minimization; and 5) sparse representation with l2-norm minimization. In this paper, a comprehensive overview of sparse representation is provided. The available sparse representation algorithms can also be empirically categorized into four groups: 1) greedy strategy approximation; 2) constrained optimization; 3) proximity algorithm-based optimization; and 4) homotopy algorithm-based sparse representation. 
The rationales of different algorithms in each category are analyzed and a wide range of sparse representation applications are summarized, which could sufficiently reveal the potential nature of the sparse representation theory. In particular, an experimentally comparative study of these sparse representation algorithms was presented.", "title": "" }, { "docid": "d952d54231f1093129fe23f051fc858d", "text": "As part of the Face Recognition Technology (FERET) program, the U.S. Army Research Laboratory (ARL) conducted supervised government tests and evaluations of automatic face recognition algorithms. The goal of the tests was to provide an independent method of evaluating algorithms and assessing the state of the art in automatic face recognition. This report describes the design and presents the results of the August 1994 and March 1995 FERET tests. Results for FERET tests administered by ARL between August 1994 and August 1996 are reported.", "title": "" }, { "docid": "5d934dd45e812336ad12cee90d1e8cdf", "text": "As research on the connection between narcissism and social networking site (SNS) use grows, definitions of SNS and measurements of their use continue to vary, leading to conflicting results. To improve understanding of the relationship between narcissism and SNS use, as well as the implications of differences in definition and measurement, we examine two ways of measuring Facebook and Twitter use by testing the hypothesis that SNS use is positively associated with narcissism. We also explore the relation between these types of SNS use and different components of narcissism within college students and general adult samples. Our findings suggest that for college students, posting on Twitter is associated with the Superiority component of narcissistic personality while Facebook posting is associated with the Exhibitionism component. Conversely, adults high in Superiority post on Facebook more rather than Twitter. For adults, Facebook and Twitter are both used more by those focused on their own appearances but not as a means of showing off, as is the case with college students. Given these differences, it is essential for future studies of SNS use and personality traits to distinguish between different types of SNS, different populations, and different types of use. 2013 Elsevier Ltd. All rights reserved.", "title": "" } ]
scidocsrr
b07acabb66359be32e5b5cac447c22a1
Millimeter wave wireless communications: new results for rural connectivity
[ { "docid": "47ee1b71ed10b64110b84e5eecf2857c", "text": "Measurements for future outdoor cellular systems at 28 GHz and 38 GHz were conducted in urban microcellular environments in New York City and Austin, Texas, respectively. Measurements in both line-of-sight and non-line-of-sight scenarios used multiple combinations of steerable transmit and receive antennas (e.g. 24.5 dBi horn antennas with 10.9° half power beamwidths at 28 GHz, 25 dBi horn antennas with 7.8° half power beamwidths at 38 GHz, and 13.3 dBi horn antennas with 24.7° half power beamwidths at 38 GHz) at different transmit antenna heights. Based on the measured data, we present path loss models suitable for the development of fifth generation (5G) standards that show the distance dependency of received power. In this paper, path loss is expressed in easy-to-use formulas as the sum of a distant dependent path loss factor, a floating intercept, and a shadowing factor that minimizes the mean square error fit to the empirical data. The new models are compared with previous models that were limited to using a close-in free space reference distance. Here, we illustrate the differences of the two modeling approaches, and show that a floating intercept model reduces the shadow factors by several dB and offers smaller path loss exponents while simultaneously providing a better fit to the empirical data. The upshot of these new path loss models is that coverage is actually better than first suggested by work in [1], [7] and [8].", "title": "" }, { "docid": "29786d164d0d5e76ea9c098944e27266", "text": "Future mobile communications systems are likely to be very different to those of today with new service innovations driven by increasing data traffic demand, increasing processing power of smart devices and new innovative applications. To meet these service demands the telecommunications industry is converging on a common set of 5G requirements which includes network speeds as high as 10 Gbps, cell edge rate greater than 100 Mbps, and latency of less than 1 msec. To reach these 5G requirements the industry is looking at new spectrum bands in the range up to 100 GHz where there is spectrum availability for wide bandwidth channels. For the development of new 5G systems to operate in bands up to 100 GHz there is a need for accurate radio propagation models which are not addressed by existing channel models developed for bands below 6 GHz. This paper presents a preliminary overview of the 5G channel models for bands up to 100 GHz in indoor offices and shopping malls, derived from extensive measurements across a multitude of bands. These studies have found some extensibility of the existing 3GPP models (e.g. 3GPP TR36.873) to the higher frequency bands up to 100 GHz. The measurements indicate that the smaller wavelengths introduce an increased sensitivity of the propagation models to the scale of the environment and show some frequency dependence of the path loss as well as increased occurrence of blockage. Further, the penetration loss is highly dependent on the material and tends to increase with frequency. The small-scale characteristics of the channel such as delay spread and angular spread and the multipath richness is somewhat similar over the frequency range, which is encouraging for extending the existing 3GPP models to the wider frequency range. Further work will be carried out to complete these models, but this paper presents the first steps for an initial basis for the model development.", "title": "" } ]
[ { "docid": "0b117f379a32b0ba4383c71a692405c8", "text": "Today’s educational policies are largely devoted to fostering the development and implementation of computer applications in education. This paper analyses the skills and competences needed for the knowledgebased society and reveals the role and impact of using computer applications to the teaching and learning processes. Also, the aim of this paper is to reveal the outcomes of a study conducted in order to determine the impact of using computer applications in teaching and learning Management and to propose new opportunities for the process improvement. The findings of this study related to the teachers’ and students’ perceptions about using computer applications for teaching and learning could open further researches on computer applications in education and their educational and economic implications.", "title": "" }, { "docid": "9fc89c9e89877ef867b01c883e75339d", "text": "In recent years, dimensionality-reduction techniques have been developed and are widely used for hypothesis generation in Exploratory Data Analysis. However, these techniques are confronted with overcoming the trade-off between computation time and the quality of the provided dimensionality reduction. In this work, we address this limitation, by introducing Hierarchical Stochastic Neighbor Embedding (Hierarchical-SNE). Using a hierarchical representation of the data, we incorporate the well-known mantra of Overview-First, Details-On-Demand in nonlinear dimensionality reduction. First, the analysis shows an embedding, that reveals only the dominant structures in the data (Overview). Then, by selecting structures that are visible in the overview, the user can filter the data and drill down in the hierarchy. While the user descends into the hierarchy, detailed visualizations of the highdimensional structures will lead to new insights. In this paper, we explain how Hierarchical-SNE scales to the analysis of big datasets. In addition, we show its application potential in the visualization of Deep-Learning architectures and the analysis of hyperspectral images.", "title": "" }, { "docid": "1e42000ed8a108c8745403102613373b", "text": "Knowledge graph embedding aims to represent entities and relations in a large-scale knowledge graph as elements in a continuous vector space. Existing methods, e.g., TransE and TransH, learn embedding representation by defining a global margin-based loss function over the data. However, the optimal loss function is determined during experiments whose parameters are examined among a closed set of candidates. Moreover, embeddings over two knowledge graphs with different entities and relations share the same set of candidate loss functions, ignoring the locality of both graphs. This leads to the limited performance of embedding related applications. In this paper, we propose a locally adaptive translation method for knowledge graph embedding, called TransA, to find the optimal loss function by adaptively determining its margin over different knowledge graphs. Experiments on two benchmark data sets demonstrate the superiority of the proposed method, as compared to the-state-of-the-art ones.", "title": "" }, { "docid": "93076547ea755c11690aa66c1bf3b69f", "text": "The ability of the Myers-Briggs Type Indicator (MBTI; Myers & McCaulley, 1985) to predict performance on social cognitive tasks tapping information processing effort was assessed. 
Judgment and intuition interacted to predict amount of attributional adjustment on a dispositional attribution task. The MBTI scales predicted processing above and beyond measures of the five factors, rational-experiential preferences, and causal uncertainty. The relevance of these results for interpretation of the MBTI indexes is discussed.", "title": "" }, { "docid": "7b2c17ae926b542ea6c5df442ee4554b", "text": "Magnetic resonance imaging (MRI) has been proposed as a complimentary method to measure bone quality and assess fracture risk. However, manual segmentation of MR images of bone is time-consuming, limiting the use of MRI measurements in the clinical practice. The purpose of this paper is to present an automatic proximal femur segmentation method that is based on deep convolutional neural networks (CNNs). This study had institutional review board approval and written informed consent was obtained from all subjects. A dataset of volumetric structural MR images of the proximal femur from 86 subjects were manually-segmented by an expert. We performed experiments by training two different CNN architectures with multiple number of initial feature maps, layers and dilation rates, and tested their segmentation performance against the gold standard of manual segmentations using four-fold cross-validation. Automatic segmentation of the proximal femur using CNNs achieved a high dice similarity score of 0.95 ± 0.02 with precision = 0.95 ± 0.02, and recall = 0.95 ± 0.03. The high segmentation accuracy provided by CNNs has the potential to help bring the use of structural MRI measurements of bone quality into clinical practice for management of osteoporosis.", "title": "" }, { "docid": "0e8dbf7567f183c314b55890cad98050", "text": "Differential evolution (DE) is an efficient and powerful population-based stochastic search technique for solving optimization problems over continuous space, which has been widely applied in many scientific and engineering fields. However, the success of DE in solving a specific problem crucially depends on appropriately choosing trial vector generation strategies and their associated control parameter values. Employing a trial-and-error scheme to search for the most suitable strategy and its associated parameter settings requires high computational costs. Moreover, at different stages of evolution, different strategies coupled with different parameter settings may be required in order to achieve the best performance. In this paper, we propose a self-adaptive DE (SaDE) algorithm, in which both trial vector generation strategies and their associated control parameter values are gradually self-adapted by learning from their previous experiences in generating promising solutions. Consequently, a more suitable generation strategy along with its parameter settings can be determined adaptively to match different phases of the search process/evolution. The performance of the SaDE algorithm is extensively evaluated (using codes available from P. N. Suganthan) on a suite of 26 bound-constrained numerical optimization problems and compares favorably with the conventional DE and several state-of-the-art parameter adaptive DE variants.", "title": "" }, { "docid": "ab45fd5e4aae81b5b6324651b035365b", "text": "The most popular way to use probabilistic models in vision is first to extract some descriptors of small image patches or object parts using well-engineered features, and then to use statistical learning tools to model the dependencies among these features and eventual labels. 
Learning probabilistic models directly on the raw pixel values has proved to be much more difficult and is typically only used for regularizing discriminative methods. In this work, we use one of the best, pixel-level, generative models of natural images–a gated MRF–as the lowest level of a deep belief network (DBN) that has several hidden layers. We show that the resulting DBN is very good at coping with occlusion when predicting expression categories from face images, and it can produce features that perform comparably to SIFT descriptors for discriminating different types of scene. The generative ability of the model also makes it easy to see what information is captured and what is lost at each level of representation.", "title": "" }, { "docid": "ee473a0bb8b96249e61ad5e3925c11c2", "text": "Simple, short, and compact hashtags cover a wide range of information on social networks. Although many works in the field of natural language processing (NLP) have demonstrated the importance of hashtag recommendation, hashtag recommendation for images has barely been studied. In this paper, we introduce the HARRISON dataset, a benchmark on hashtag recommendation for real world images in social networks. The HARRISON dataset is a realistic dataset, composed of 57,383 photos from Instagram and an average of 4.5 associated hashtags for each photo. To evaluate our dataset, we design a baseline framework consisting of visual feature extractor based on convolutional neural network (CNN) and multi-label classifier based on neural network. Based on this framework, two single feature-based models, object-based and scene-based model, and an integrated model of them are evaluated on the HARRISON dataset. Our dataset shows that hashtag recommendation task requires a wide and contextual understanding of the situation conveyed in the image. As far as we know, this work is the first vision-only attempt at hashtag recommendation for real world images in social networks. We expect this benchmark to accelerate the advancement of hashtag recommendation.", "title": "" }, { "docid": "af5e15777e3d7331ed8020de4af73f96", "text": "We present a virtual try-on system EON Interactive Mirror that employs one Kinect sensor and one High-Definition (HD) Camera. We first overview the major technical components for the complete virtual try-on system. We then elaborate on several key challenges such as calibration between the Kinect and HD cameras, and shoulder height estimation for individual subjects. Quality of these steps is the key to achieving seamless try-on experience for users. We also present performance comparison of our system implemented on top of two skeletal tracking SDKs: OpenNI and Kinect for Windows SDK (KWSDK). Lastly, we discuss our experience in deploying the system in retail stores and some potential future improvements.", "title": "" }, { "docid": "8eb99f7441bd77556d6ccf7d6fa22f26", "text": "Recent developments in both heating and power sectors contribute to the creation of an integrated power system. Taking also into account the increased amount of distributed generation, current trends in power generation, transportation and consumption will be significantly affected. Linking components of this integrated system, such as heat pumps, can be controlled in different ways, to provide certain benefits to different parties. 
The scope of this paper is to provide a control algorithm for a residential heat pump, in order to minimize its cost, paid by the customer/owner, while maintaining a certain temperature and comfort level. A commercially available heat pump installed in a typical house with standard thermal insulation is considered. Simulation results conclude that the proposed controlling method succeeds in reducing the amount of money spent by the customer for residential heating purposes.", "title": "" }, { "docid": "8f1a5420deb75a2b664ceeaae8fc03f9", "text": "A stretchable and multiple-force-sensitive electronic fabric based on stretchable coaxial sensor electrodes is fabricated for artificial-skin application. This electronic fabric, with only one kind of sensor unit, can simultaneously map and quantify the mechanical stresses induced by normal pressure, lateral strain, and flexion.", "title": "" }, { "docid": "27ebb65cfcac664e5b0bd600b09915a0", "text": "Amazon Aurora is a high-throughput cloud-native relational database offered as part of Amazon Web Services (AWS). One of the more novel differences between Aurora and other relational databases is how it pushes redo processing to a multi-tenant scale-out storage service, purpose-built for Aurora. Doing so reduces networking traffic, avoids checkpoints and crash recovery, enables failovers to replicas without loss of data, and enables fault-tolerant storage that heals without database involvement. Traditional implementations that leverage distributed storage would use distributed consensus algorithms for commits, reads, replication, and membership changes and amplify cost of underlying storage. In this paper, we describe how Aurora avoids distributed consensus under most circumstances by establishing invariants and leveraging local transient state. Doing so improves performance, reduces variability, and lowers costs.", "title": "" }, { "docid": "18ffa160ffce386993b5c2da5070b364", "text": "This paper presents a new approach for facial attribute classification using a multi-task learning approach. Unlike other approaches that uses hand engineered features, our model learns a shared feature representation that is wellsuited for multiple attribute classification. Learning a joint feature representation enables interaction between different tasks. For learning this shared feature representation we use a Restricted Boltzmann Machine (RBM) based model, enhanced with a factored multi-task component to become Multi-Task Restricted Boltzmann Machine (MT-RBM). Our approach operates directly on faces and facial landmark points to learn a joint feature representation over all the available attributes. We use an iterative learning approach consisting of a bottom-up/top-down pass to learn the shared representation of our multi-task model and at inference we use a bottom-up pass to predict the different tasks. Our approach is not restricted to any type of attributes, however, for this paper we focus only on facial attributes. We evaluate our approach on three publicly available datasets, the Celebrity Faces (CelebA), the Multi-task Facial Landmarks (MTFL), and the ChaLearn challenge dataset. We show superior classification performance improvement over the state-of-the-art.", "title": "" }, { "docid": "fdb0009b962254761541eb08f556fa0e", "text": "Nonionic surfactants are widely used in the development of protein pharmaceuticals. However, the low level of residual peroxides in surfactants can potentially affect the stability of oxidation-sensitive proteins. 
In this report, we examined the peroxide formation in polysorbate 80 under a variety of storage conditions and tested the potential of peroxides in polysorbate 80 to oxidize a model protein, IL-2 mutein. For the first time, we demonstrated that peroxides can be easily generated in neat polysorbate 80 in the presence of air during incubation at elevated temperatures. Polysorbate 80 in aqueous solution exhibited a faster rate of peroxide formation and a greater amount of peroxides during incubation, which is further promoted/catalyzed by light. Peroxide formation can be greatly inhibited by preventing any contact with air/oxygen during storage. IL-2 mutein can be easily oxidized both in liquid and solid states. A lower level of peroxides in polysorbate 80 did not change the rate of IL-2 mutein oxidation in liquid state but significantly accelerated its oxidation in solid state under air. A higher level of peroxides in polysorbate 80 caused a significant increase in IL-2 mutein oxidation both in liquid and solid states, and glutathione can significantly inhibit the peroxide-induced oxidation of IL-2 mutein in a lyophilized formulation. In addition, a higher level of peroxides in polysorbate 80 caused immediate IL-2 mutein oxidation during annealing in lyophilization, suggesting that implementation of an annealing step needs to be carefully evaluated in the development of a lyophilization process for oxidation-sensitive proteins in the presence of polysorbate.", "title": "" }, { "docid": "077479a268be00930533f4ce8fce2845", "text": "Our research goals are to understand and model the factors that affect trust in intelligent systems across a variety of application domains. In this chapter, we present two methods that can be used to build models of trust for such systems. The first method is the use of surveys, in which large numbers of people are asked to identify and rank factors that would influence their trust of a particular intelligent system. Results from multiple surveys exploring multiple application domains can be used to build a core model of trust and to identify domain specific factors that are needed to modify the core model to improve its accuracy and usefulness. The second method involves conducting experiments where human subjects use the intelligent system, where a variety of factors can be controlled in the studies to explore different factors. Based upon the results of these human subjects experiments, a trust model can be built. These trust models can be used to create design guidelines, to predict initial trust levels before the start of a system’s use, and to measure the evolution of trust over the use of a system. With increased understanding of how to model trust, we can build systems that will be more accepted and used appropriately by target populations.", "title": "" }, { "docid": "226b8592fac85db4a91b723ca39fa419", "text": "We address the task of predicting causally related events in stories according to a standard evaluation framework, the Choice of Plausible Alternatives (COPA). We present a neural encoder-decoder model that learns to predict relations between adjacent sequences in stories as a means of modeling causality. We explore this approach using different methods for extracting and representing sequence pairs as well as different model architectures. We also compare the impact of different training datasets on our model. In particular, we demonstrate the usefulness of a corpus not previously applied to COPA, the ROCStories corpus. 
While not state-of-the-art, our results establish a new reference point for systems evaluated on COPA, and one that is particularly informative for future neural-based approaches.", "title": "" }, { "docid": "3c8b9a015157a7dd7ce4a6b0b35847d9", "text": "While more and more people are relying on social media for news feeds, serious news consumers still resort to well-established news outlets for more accurate and in-depth reporting and analyses. They may also look for reports on related events that have happened before and other background information in order to better understand the event being reported. Many news outlets already create sidebars and embed hyperlinks to help news readers, often with manual efforts. Technologies in IR and NLP already exist to support those features, but standard test collections do not address the tasks of modern news consumption. To help advance such technologies and transfer them to news reporting, NIST, in partnership with the Washington Post, is starting a new TREC track in 2018 known as the News Track.", "title": "" }, { "docid": "3d973bfe3b1b29a4ab0ea803fc05e0e0", "text": "This paper provides a background to fingerprint recognition, describes the biometric use of fingerprints, biometric standards and related security issues .and also discusses several Biometric scan technologies: finger-scan, facials can and retinal-scan. As accurate automatic personal identification is critical in a wide range of application domain such as ID cards, electronic commerce and automated banking and several other information repositories . Biometrics , which refers to automatic identification of persons based on his/her physiological or behavioral characteristics ,in inherently more reliable and more capable in differentiating between an authorized person and fraudulent imposter than traditional methods such as passwords and pin numbers. Automatic fingerprinting and other biometric aspects like face, voice, iris , etc are more reliable and secure and ways for identification and verification of person to claim the access to the information system. We have explored the fingerprint technique and also mentioned some other techniques which when accompanied with the fingerprint provides more powerful security to the system by granting identification to correct and verified access to the person by extracting some physical features from these physiological parts of the body to attain respective security traits (minutiae).", "title": "" }, { "docid": "5b6d68984b4f9a6e0f94e0a68768dc8c", "text": "In this paper, we focus on a major internet problem which is a huge amount of uncategorized text. We review existing techniques used for feature selection and categorization. After reviewing the existing literature, it was found that there exist some gaps in existing algorithms, one of which is a requirement of the labeled dataset for the training of the classifier. Keywords— Bayesian; KNN; PCA; SVM; TF-IDF", "title": "" } ]
scidocsrr
6bf66f0d4b1560ebcdb0f09b0a5f5efa
Big Data Analytics in the Cloud: Spark on Hadoop vs MPI/OpenMP on Beowulf
[ { "docid": "181eafc11f3af016ca0926672bdb5a9d", "text": "The conventional wisdom is that backprop nets with excess hi dden units generalize poorly. We show that nets with excess capacity ge neralize well when trained with backprop and early stopping. Experim nts suggest two reasons for this: 1) Overfitting can vary significant ly i different regions of the model. Excess capacity allows better fit to reg ions of high non-linearity, and backprop often avoids overfitting the re gions of low non-linearity. 2) Regardless of size, nets learn task subco mponents in similar sequence. Big nets pass through stages similar to th ose learned by smaller nets. Early stopping can stop training the large n et when it generalizes comparably to a smaller net. We also show that co njugate gradient can yield worse generalization because it overfits regions of low non-linearity when learning to fit regions of high non-linea rity.", "title": "" }, { "docid": "4a684a0a590f326894416d5afc31b63c", "text": "Collisions at high-energy particle colliders are a traditionally fruitful source of exotic particle discoveries. Finding these rare particles requires solving difficult signal-versus-background classification problems, hence machine-learning approaches are often used. Standard approaches have relied on 'shallow' machine-learning models that have a limited capacity to learn complex nonlinear functions of the inputs, and rely on a painstaking search through manually constructed nonlinear features. Progress on this problem has slowed, as a variety of techniques have shown equivalent performance. Recent advances in the field of deep learning make it possible to learn more complex functions and better discriminate between signal and background classes. Here, using benchmark data sets, we show that deep-learning methods need no manually constructed inputs and yet improve the classification metric by as much as 8% over the best current approaches. This demonstrates that deep-learning approaches can improve the power of collider searches for exotic particles.", "title": "" } ]
[ { "docid": "6f75ccb94bb4b420ea8a209e8031e451", "text": "Much of the recent research on solving iterative inference problems focuses on moving away from hand-chosen inference algorithms and towards learned inference. In the latter, the inference process is unrolled in time and interpreted as a recurrent neural network (RNN) which allows for joint learning of model and inference parameters with back-propagation through time. In this framework, the RNN architecture is directly derived from a hand-chosen inference algorithm, effectively limiting its capabilities. We propose a learning framework, called Recurrent Inference Machines (RIM), in which we turn algorithm construction the other way round: Given data and a task, train an RNN to learn an inference algorithm. Because RNNs are Turing complete [1, 2] they are capable to implement any inference algorithm. The framework allows for an abstraction which removes the need for domain knowledge. We demonstrate in several image restoration experiments that this abstraction is effective, allowing us to achieve state-of-the-art performance on image denoising and super-resolution tasks and superior across-task generalization.", "title": "" }, { "docid": "0a51a9bd6021a8a0a7c6783dffedff06", "text": "Classification of music genre has been an inspiring job in the area of music information retrieval (MIR). Classification of genre can be valuable to explain some actual interesting problems such as creating song references, finding related songs, finding societies who will like that specific song. The purpose of our research is to find best machine learning algorithm that predict the genre of songs using k-nearest neighbor (k-NN) and Support Vector Machine (SVM). This paper also presents comparative analysis between k-nearest neighbor (k-NN) and Support Vector Machine (SVM) with dimensionality return and then without dimensionality reduction via principal component analysis (PCA). The Mel Frequency Cepstral Coefficients (MFCC) is used to extract information for the data set. In addition, the MFCC features are used for individual tracks. From results we found that without the dimensionality reduction both k-nearest neighbor and Support Vector Machine (SVM) gave more accurate results compare to the results with dimensionality reduction. Overall the Support Vector Machine (SVM) is much more effective classifier for classification of music genre. It gave an overall accuracy of 77%. Keywords—K-nearest neighbor (k-NN); Support Vector Machine (SVM); music; genre; classification; features; Mel Frequency Cepstral Coefficients (MFCC); principal component analysis (PCA)", "title": "" }, { "docid": "e02310e36b8306e3f033830447af2f1e", "text": "This paper suggests the need for a software engineering research community conversation about the future that the community would like to have. The paper observes that the research directions the community has taken in the past, dating at least back to the formative NATO Conferences in the late 1960's, have been driven largely by desire to meet the needs of practice. The paper suggests that the community should discuss whether it is now appropriate to balance this problem-solving-oriented research with a stronger complement of curiosity-driven research. This paper does not advocate what that balance should be. Neither does it advocate what curiosity driven research topics should be pursued (although illustrative examples are offered). 
It does advocate the need for a community conversation about these questions.", "title": "" }, { "docid": "349417ffb2170620c30bb6c5c4ca158e", "text": "The task of zero resource query-by-example keyword search has received much attention in recent years as the speech technology needs of the developing world grow. These systems traditionally rely upon dynamic time warping (DTW) based retrieval algorithms with runtimes that are linear in the size of the search collection. As a result, their scalability substantially lags that of their supervised counterparts, which take advantage of efficient word-based indices. In this paper, we present a novel audio indexing approach called Segmental Randomized Acoustic Indexing and Logarithmic-time Search (S-RAILS). S-RAILS generalizes the original frame-based RAILS methodology to word-scale segments by exploiting a recently proposed acoustic segment embedding technique. By indexing word-scale segments directly, we avoid higher cost frame-based processing of RAILS while taking advantage of the improved lexical discrimination of the embeddings. Using the same conversational telephone speech benchmark, we demonstrate major improvements in both speed and accuracy over the original RAILS system.", "title": "" }, { "docid": "c56c392e1a7d58912eeeb1718379fa37", "text": "The changing face of technology has played an integral role in the development of the hotel and restaurant industry. The manuscript investigated the impact that technology has had on the hotel and restaurant industry. A detailed review of the literature regarding the growth of technology in the industry was linked to the development of strategic direction. The manuscript also looked at the strategic analysis methodology for evaluating and taking advantage of current and future technological innovations for the hospitality industry. Identification and implementation of these technologies can help in building a sustainable competitive advantage for hotels and restaurants.", "title": "" }, { "docid": "ceb02e24964c29ef1bf03f2fe1ef8e3e", "text": "In this paper we present initial research to develop a conceptual model for describing data quality effects in the context of Big Data. Despite the importance of data quality for modern businesses, current research on Big Data Quality is limited. It is particularly unknown how to apply previous data quality models to Big Data. Therefore in this paper we review data quality research from several perspectives and apply the data quality model developed by Helfert & Heinrich with its elements of quality of conformance and quality of design to the context of Big Data. We extend this model by analyzing the effect of three Big Data characteristics (Volume, Velocity and Variety) and discuss its application to the context of Smart Cities, as one interesting example in which Big Data is increasingly important. Although this paper provides only propositions and a first conceptual discussion, we believe that the paper can build a foundation for further empirical research to understand Big Data Quality and its implications in practice.", "title": "" }, { "docid": "6deaeb7d3fdb3a9ffce007af333061ac", "text": "This paper proposes a simple CMOS exponential current circuit that is capable of controlling a Variable Gain Amplifier in a linear-in-dB manner. The proposed implementation is based on a Taylor's series approximation of the exponential function. A simple VGA architecture has been designed in a CMOS 90nm technology, in order to validate the theoretical analysis.
The approximation achieves a 17dB linear range with less than 0.5dB approximation error, while the overall power consumption is less than 300μW.", "title": "" }, { "docid": "785ce19a91fbca6f8b3a3ccbe45669cd", "text": "Automatic brain tumor segmentation plays an important role for diagnosis, surgical planning and treatment assessment of brain tumors. Deep convolutional neural networks (CNNs) have been widely used for this task. Due to the relatively small data set for training, data augmentation at training time has been commonly used for better performance of CNNs. Recent works also demonstrated the usefulness of data augmentation at test time, in addition to training time, for achieving more robust predictions. We investigate how test-time augmentation can improve CNNs’ performance for brain tumor segmentation. We used different underpinning network structures and augmented the image by 3D rotation, flipping, scaling and adding random noise at both training and test time. Experiments with BraTS 2018 training and validation set show that test-time augmentation can achieve higher segmentation accuracy and obtain uncertainty estimation of the segmentation results.", "title": "" }, { "docid": "2fb78e13c42cf6ed2f4394be5d7d84a6", "text": "This paper deals with the control design of an asymmetrical cascaded multilevel inverter. This structure therefore provides the capability to produce higher voltages at higher speeds with low switching frequency which has inherent low switching losses and high converter efficiency. Selective harmonic elimination(SHE) for asymmetrical multilevel CHB inverter control is proposed. The technique utilized in the estimation of switching angles involves Firefly Algorithm (FFA). Compared to Newton Raphson algorithm (NR), FFA is more robust and entails less computation time. Simulation results prove the effectiveness of FFA technique compared to NR algorithm. The proposed method does effectively eliminate a number of specific low order harmonics, and the output voltage is resulted in low total harmonic distortion. FFA and NR algorithm for asymmetrical cascaded H-bridge nine level inverter control are experimentally tested on a prototype using FPGA.", "title": "" }, { "docid": "bbb8b5304ac9b7b1221b0b34387cd7f7", "text": "Paramaligne Erscheinungen stellen eigenartige Symptome der Krebskrankheit dar. Sie sind meist bei den Lungeneareinomen und hier haupts/tchlieh bei den kleinzelligen Carcinomen zu verzeichnen. Es kommen vor (BA~I~TY, CouRY u. RULLI~RE, 1964) neurologische, osteoartikul~re, h/~matologische, vascul/ire, metabolische, Muskelund Hauterscheinungen. Bei den endokrinologisehen Symptomen kSnnen die bioehemisehen Untersuehungen reeht interessante Ergebnisse zeigen (CrrA~OT, 1964; AZZOPARDI u. BELLAU, 1965). Von diesen Symptomen sind die osteoartikul/~ren die h~ufigsten. In unserer Zusammensteliung von 225 kleinzelligen Lungencarcinomen haben wir 44 Kranke mit verschieden stark ent~,iekelten Ver~nderungen im Sinne der Trommelschlegel finger gefunden (Tab.).", "title": "" }, { "docid": "9ac7dbae53fe06937780a53dd3432f80", "text": "Artefact evaluation is regarded as being crucial for Design Science Research (DSR) in order to rigorously proof an artefact’s relevance for practice. The availability of guidelines for structuring DSR processes notwithstanding, the current body of knowledge provides only rudimentary means for a design researcher to select and justify appropriate artefact evaluation strategies in a given situation. 
This paper proposes patterns that could be used to articulate and justify artefact evaluation strategies within DSR projects. These patterns have been synthesised from priorDSR literature concerned with evaluation strategies. They distinguish both ex ante as well as ex post evaluations and reflect current DSR approaches and evaluation criteria.", "title": "" }, { "docid": "1c16d6b5072283cfc9301f6ae509ede1", "text": "T paper introduces a model of collective creativity that explains how the locus of creative problem solving shifts, at times, from the individual to the interactions of a collective. The model is grounded in observations, interviews, informal conversations, and archival data gathered in intensive field studies of work in professional service firms. The evidence suggests that although some creative solutions can be seen as the products of individual insight, others should be regarded as the products of a momentary collective process. Such collective creativity reflects a qualitative shift in the nature of the creative process, as the comprehension of a problematic situation and the generation of creative solutions draw from—and reframe—the past experiences of participants in ways that lead to new and valuable insights. This research investigates the origins of such moments, and builds a model of collective creativity that identifies the precipitating roles played by four types of social interaction: help seeking, help giving, reflective reframing, and reinforcing. Implications of this research include shifting the emphasis in research and management of creativity from identifying and managing creative individuals to understanding the social context and developing interactive approaches to creativity, and from a focus on relatively constant contextual variables to the alignment of fluctuating variables and their precipitation of momentary phenomena.", "title": "" }, { "docid": "171b5d7c884cd934af602bf000451cb9", "text": "Can playing action video games improve visuomotor control? If so, can these games be used in training people to perform daily visuomotor-control tasks, such as driving? We found that action gamers have better lane-keeping and visuomotor-control skills than do non-action gamers. We then trained non-action gamers with action or nonaction video games. After they played a driving or first-person-shooter video game for 5 or 10 hr, their visuomotor control improved significantly. In contrast, non-action gamers showed no such improvement after they played a nonaction video game. Our model-driven analysis revealed that although different action video games have different effects on the sensorimotor system underlying visuomotor control, action gaming in general improves the responsiveness of the sensorimotor system to input error signals. The findings support a causal link between action gaming (for as little as 5 hr) and enhancement in visuomotor control, and suggest that action video games can be beneficial training tools for driving.", "title": "" }, { "docid": "d4e4759c183c61acbf09bff91cc75ee5", "text": "A wide range of defenses have been proposed to harden neural networks against adversarial attacks. However, a pattern has emerged in which the majority of adversarial defenses are quickly broken by new attacks. Given the lack of success at generating robust defenses, we are led to ask a fundamental question: Are adversarial attacks inevitable? 
This paper analyzes adversarial examples from a theoretical perspective, and identifies fundamental bounds on the susceptibility of a classifier to adversarial attacks. We show that, for certain classes of problems, adversarial examples are inescapable. Using experiments, we explore the implications of theoretical guarantees for real-world problems and discuss how factors such as dimensionality and image complexity limit a classifier’s robustness against adversarial examples.", "title": "" }, { "docid": "f18c9cecdd3b7697af7c160906d6d501", "text": "A new data structure for efficient similarity search in very large dataseis of high-dimensional vectors is introduced. This structure called the inverted multi-index generalizes the inverted index idea by replacing the standard quantization within inverted indices with product quantization. For very similar retrieval complexity and preprocessing time, inverted multi-indices achieve a much denser subdivision of the search space compared to inverted indices, while retaining their memory efficiency. Our experiments with large dataseis of SIFT and GIST vectors demonstrate that because of the denser subdivision, inverted multi-indices are able to return much shorter candidate lists with higher recall. Augmented with a suitable reranking procedure, multi-indices were able to improve the speed of approximate nearest neighbor search on the dataset of 1 billion SIFT vectors by an order of magnitude compared to the best previously published systems, while achieving better recall and incurring only few percent of memory overhead.", "title": "" }, { "docid": "b2c05f820195154dbbb76ee68740b5d9", "text": "DeConvNet, Guided BackProp, LRP, were invented to better understand deep neural networks. We show that these methods do not produce the theoretically correct explanation for a linear model. Yet they are used on multi-layer networks with millions of parameters. This is a cause for concern since linear models are simple neural networks. We argue that explanation methods for neural nets should work reliably in the limit of simplicity, the linear models. Based on our analysis of linear models we propose a generalization that yields two explanation techniques (PatternNet and PatternAttribution) that are theoretically sound for linear models and produce improved explanations for deep networks.", "title": "" }, { "docid": "32bf3e0ce6f9bc8864bd905ffebcfcce", "text": "BACKGROUND AND PURPOSE\nTo improve the accuracy of early postonset prediction of motor recovery in the flaccid hemiplegic arm, the effects of change in motor function over time on the accuracy of prediction were evaluated, and a prediction model for the probability of regaining dexterity at 6 months was developed.\n\n\nMETHODS\nIn 102 stroke patients, dexterity and paresis were measured with the Action Research Arm Test, Motricity Index, and Fugl-Meyer motor evaluation. For model development, 23 candidate determinants were selected. Logistic regression analysis was used for prognostic factors and model development.\n\n\nRESULTS\nAt 6 months, some dexterity in the paretic arm was found in 38%, and complete functional recovery was seen in 11.6% of the patients. Total anterior circulation infarcts, right hemisphere strokes, homonymous hemianopia, visual gaze deficit, visual inattention, and paresis were statistically significant related to a poor arm function. 
Motricity Index leg scores of at least 25 points in the first week and Fugl-Meyer arm scores of 11 points in the second week increasing to 19 points in the fourth week raised the probability of developing some dexterity (Action Research Arm Test ≥10 points) from 74% (positive predictive value [PPV], 0.74; 95% confidence interval [CI], 0.63 to 0.86) to 94% (PPV, 0.83; 95% CI, 0.76 to 0.91) at 6 months. No change in the probabilities of predicting dexterity was found after 4 weeks.\n\n\nCONCLUSIONS\nBased on the Fugl-Meyer scores of the flaccid arm, optimal prediction of arm function outcome at 6 months can be made within 4 weeks after onset. Lack of voluntary motor control of the leg in the first week with no emergence of arm synergies at 4 weeks is associated with poor outcome at 6 months.", "title": "" }, { "docid": "d168bdb3f1117aac53da1fbac0906887", "text": "Enforcing open source licenses such as the GNU General Public License (GPL), analyzing a binary for possible vulnerabilities, and code maintenance are all situations where it is useful to be able to determine the source code provenance of a binary. While previous work has either focused on computing binary-to-binary similarity or source-to-source similarity, BinPro is the first work we are aware of to tackle the problem of source-to-binary similarity. BinPro can match binaries with their source code even without knowing which compiler was used to produce the binary, or what optimization level was used with the compiler. To do this, BinPro utilizes machine learning to compute optimal code features for determining binary-to-source similarity and a static analysis pipeline to extract and compute similarity based on those features. Our experiments show that on average BinPro computes a similarity of 81% for matching binaries and source code of the same applications, and an average similarity of 25% for binaries and source code of similar but different applications. This shows that BinPro’s similarity score is useful for determining if a binary was derived from a particular source code.", "title": "" }, { "docid": "0b9e7adde5f9b577930cab27cd4bc7a0", "text": "Statistical speech reconstruction for larynx-related dysphonia has achieved good performance using Gaussian mixture models and, more recently, restricted Boltzmann machine arrays; however, deep neural network (DNN)-based systems have been hampered by the limited amount of training data available from individual voice-loss patients. The authors propose a novel DNN structure that allows a partially supervised training approach on spectral features from smaller data sets, yielding very good results compared with the current state-of-the-art.", "title": "" } ]
scidocsrr
3aa7b5f5e919ead2fe56dcac3a9ee08c
Vulnerability Assessment of Cybersecurity for SCADA Systems
[ { "docid": "4105ebe68ca25c863f77dde3ff94dcdc", "text": "This paper deals with the increasingly important issue of proper handling of information security for electric power utilities. It is based on the efforts of CIGRE Joint Working Group (JWG) D2/B3/C2-01 on \"Security for Information Systems and Intranets in Electric Power System\" carried out between 2003 and 2006. The JWG has produced a technical brochure (TB), whose purpose is to raise awareness of information and cybersecurity in electric power systems and to give some guidance on how to solve the security problem by focusing on security domain modeling, risk assessment methodology, and security framework building. Here in this paper, the focus is on the issue of awareness and on highlighting some steps to achieve a framework for cybersecurity management. Also, technical considerations of some communication systems for substation automation are studied. Finally, some directions for further work in this vast area of information and cybersecurity are given.", "title": "" } ]
[ { "docid": "72871db63ff645a1691044bac42c56d3", "text": "Malware has become one of the most serious threats to computer information system and the current malware detection technology still has very significant limitations. In this paper, we proposed a malware detection approach by mining format information of PE (portable executable) files. Based on in-depth analysis of the static format information of the PE files, we extracted 197 features from format information of PE files and applied feature selection methods to reduce the dimensionality of the features and achieve acceptable high performance. When the selected features were trained using classification algorithms, the results of our experiments indicate that the accuracy of the top classification algorithm is 99.1% and the value of the AUC is 0.998. We designed three experiments to evaluate the performance of our detection scheme and the ability of detecting unknown and new malware. Although the experimental results of identifying new malware are not perfect, our method is still able to identify 97.6% of new malware with 1.3% false positive rates.", "title": "" }, { "docid": "8d4891ac73cdd4cd76e25438634118b2", "text": "Although software measurement plays an increasingly important role in Software Engineering, there is no consensus yet on many of the concepts and terminology used in this field. Even worse, vocabulary conflicts and inconsistencies can be frequently found amongst the many sources and references commonly used by software measurement researchers and practitioners. This article presents an analysis of the current situation, and provides a comparison framework that can be used to identify and address the discrepancies, gaps, and terminology conflicts that current software measurement proposals present. A basic software measurement ontology is introduced, that aims at contributing to the harmonization of the different software measurement proposals and standards, by providing a coherent set of common concepts used in software measurement. The ontology is also aligned with the metrology vocabulary used in other more mature measurement engineering disciplines. q 2005 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "f4c49c00e70845322499814ab4a99de6", "text": "OBJECTIVES Three domains comprise the field of human assessment: ability, motive and personality. Differences in personality and cognitive abilities between generations have been documented, but differences in motive between generations have not been explored. This study explored generational differences in medical students regarding motives using the Thematic Apperception Test (TAT). METHODS Four hundred and twenty six students (97% response rate) at one medical school (Generation X = 229, Millennials = 197) who matriculated in 1995 & 1996 (Generation X) or in 2003 & 2004 (Millennials) wrote a story after being shown two TAT picture cards. Student stories for each TAT card were scored for different aspects of motives: Achievement, Affiliation, and Power. RESULTS A multiple analysis of variance (p < 0.05) showed significant differences between Millennials' and Generation X-ers' needs for Power on both TAT cards and needs for Achievement and Affiliation on one TAT card. The main effect for gender was significant for both TAT cards regarding Achievement. No main effect for ethnicity was noted. CONCLUSIONS Differences in needs for Achievement, Affiliation and Power exist between Millennial and Generation X medical students. 
Generation X-ers scored higher on the motive of Power, whereas Millennials scored higher on the motives of Achievement and Affiliation.", "title": "" }, { "docid": "0923e899e5d7091a6da240db21eefad2", "text": "A new method was developed to acquire images automatically at a series of specimen tilts, as required for tomographic reconstruction. The method uses changes in specimen position at previous tilt angles to predict the position at the current tilt angle. Actual measurement of the position or focus is skipped if the statistical error of the prediction is low enough. This method allows a tilt series to be acquired rapidly when conditions are good but falls back toward the traditional approach of taking focusing and tracking images when necessary. The method has been implemented in a program, SerialEM, that provides an efficient environment for data acquisition. This program includes control of an energy filter as well as a low-dose imaging mode, in which tracking and focusing occur away from the area of interest. The program can automatically acquire a montage of overlapping frames, allowing tomography of areas larger than the field of the CCD camera. It also includes tools for navigating between specimen positions and finding regions of interest.", "title": "" }, { "docid": "7bc0250aa9a766ececa4cf9a45db2b05", "text": "This paper presents a new average d- q model and a control approach with a carrier-based pulsewidth modulation (PWM) implementation for nonregenerative three-phase three-level boost (VIENNA-type) rectifiers. State-space analysis and an averaging technique are used to derive the relationship between the controlled duty cycle and the dc-link neutral-point voltage, based on which an optimal zero-sequence component is found for dc-link voltage balance. By utilizing this zero-sequence component, the behavior of the dc-link voltage unbalance can be modeled in d-q coordinates using averaging over a switching cycle. Therefore, the proposed model is valid for up to half of the switching frequency. With the proposed model, a new control algorithm is developed with carrier-based PWM implementation, which features great simplicity and good dc-link neutral-point regulation. Space vector representation is also utilized to analyze the voltage balancing mechanism and the region of feasible operation. Simulation and experimental results validated the proposed model and control approach.", "title": "" }, { "docid": "d19503f965e637089d9fa200329f1349", "text": "Almost a half century ago, regular endurance exercise was shown to improve the capacity of skeletal muscle to oxidize substrates to produce ATP for muscle work. Since then, adaptations in skeletal muscle mRNA level were shown to happen with a single bout of exercise. Protein changes occur within days if daily endurance exercise continues. Some of the mRNA and protein changes cause increases in mitochondrial concentrations. One mitochondrial adaptation that occurs is an increase in fatty acid oxidation at a given absolute, submaximal workload. Mechanisms have been described as to how endurance training increases mitochondria. Importantly, Pgc-1α is a master regulator of mitochondrial biogenesis by increasing many mitochondrial proteins. However, not all adaptations to endurance training are associated with increased mitochondrial concentrations. Recent evidence suggests that the energetic demands of muscle contraction are by themselves stronger controllers of body weight and glucose control than is muscle mitochondrial content. 
Endurance exercise has also been shown to regulate the processes of mitochondrial fusion and fission. Mitophagy removes damaged mitochondria, a process that maintains mitochondrial quality. Skeletal muscle fibers are composed of different phenotypes, which are based on concentrations of mitochondria and various myosin heavy chain protein isoforms. Endurance training at physiological levels increases type IIa fiber type with increased mitochondria and type IIa myosin heavy chain. Endurance training also improves capacity of skeletal muscle blood flow. Endurance athletes possess enlarged arteries, which may also exhibit decreased wall thickness. VEGF is required for endurance training-induced increases in capillary-muscle fiber ratio and capillary density.", "title": "" }, { "docid": "e01c6de08d59af1f51edf3e9143af9dc", "text": "Deep learning refers to the shining branch of machine learning that is based on learning levels of representations. Convolutional Neural Networks (CNN) is one kind of deep neural network. It can study concurrently. In this article, we gave a detailed analysis of the process of CNN algorithm both the forward process and back propagation. Then we applied the particular convolutional neural network to implement the typical face recognition problem by java. Then, a parallel strategy was proposed in section4. In addition, by measuring the actual time of forward and backward computing, we analysed the maximal speed up and parallel efficiency theoretically.", "title": "" }, { "docid": "85e867bd998e9c68540d4a22305d8bab", "text": "Warped Gaussian processes (WGP) [1] model output observations in regression tasks as a parametric nonlinear transformation of a Gaussian process (GP). The use of this nonlinear transformation, which is included as part of the probabilistic model, was shown to enhance performance by providing a better prior model on several data sets. In order to learn its parameters, maximum likelihood was used. In this work we show that it is possible to use a non-parametric nonlinear transformation in WGP and variationally integrate it out. The resulting Bayesian WGP is then able to work in scenarios in which the maximum likelihood WGP failed: Low data regime, data with censored values, classification, etc. We demonstrate the superior performance of Bayesian warped GPs on several real data sets.", "title": "" }, { "docid": "a114801b4a00d024d555378ffa7cc583", "text": "UNLABELLED\nRectal prolapse is the partial or complete protrusion of the rectal wall into the anal canal. The most common etiology consists in the insufficiency of the diaphragm of the lesser pelvis and anal sphincter apparatus. Methods of surgical treatment involve perineal or abdominal approach surgical procedures. The aim of the study was to present the method of surgical rectal prolapse treatment, according to Mikulicz's procedure by means of the perineal approach, based on our own experience and literature review.\n\n\nMATERIAL AND METHODS\nThe study group comprised 16 patients, including 14 women and 2 men, aged between 38 and 82 years admitted to the department, due to rectal prolapse, during the period between 2000 and 2012. Nine female patients, aged between 68 and 82 years (mean age-76.3 years) with fullthickness rectal prolapse underwent surgery by means of Mikulicz's method with levator muscle and external anal sphincter plasty. 
The most common comorbidities amongst patients operated by means of Mikulicz's method included cardiovascular and metabolic diseases.\n\n\nRESULTS\nMean hospitalization was 14.4 days (ranging between 12 and 17 days). Despite advanced age and poor general condition of the patients, complications during the perioperative period were not observed. Good early and late functional results were achieved. The degree of anal sphincter continence was determined 6-8 weeks after surgery showing significant improvement, as compared to results obtained prior to surgery. One case of recurrence consisting in mucosal prolapse was noted, being treated surgically by means of Whitehead's method. Good treatment results were observed.\n\n\nCONCLUSION\nTransperineal rectosigmoidectomy using Mikulicz's method with levator muscle and external anal sphincter plasty seems to be an effective, minimally invasive and relatively safe procedure that does not require general anesthesia. It is recommended in case of patients with significant comorbidities and high surgical risk.", "title": "" }, { "docid": "dd6dec1da537cfe21a44b11c56d07b27", "text": "AIREAL is a novel haptic technology that delivers effective and expressive tactile sensations in free air, without requiring the user to wear a physical device. Combined with interactive computers graphics, AIREAL enables users to feel virtual 3D objects, experience free air textures and receive haptic feedback on gestures performed in free space. AIREAL relies on air vortex generation directed by an actuated flexible nozzle to provide effective tactile feedback with a 75 degrees field of view, and within an 8.5cm resolution at 1 meter. AIREAL is a scalable, inexpensive and practical free air haptic technology that can be used in a broad range of applications, including gaming, mobile applications, and gesture interaction among many others. This paper reports the details of the AIREAL design and control, experimental evaluations of the device's performance, as well as an exploration of the application space of free air haptic displays. Although we used vortices, we believe that the results reported are generalizable and will inform the design of haptic displays based on alternative principles of free air tactile actuation.", "title": "" }, { "docid": "44c9526319039305edf89ce58deb6398", "text": "Networks of constraints fundamental properties and applications to picture processing Sketchpad: a man-machine graphical communication system Using auxiliary variables and implied constraints to model non-binary problems Solving constraint satisfaction problems using neural-networks C. Search Backtracking algorithms for constraint satisfaction problems; a survey", "title": "" }, { "docid": "5b382b27257cdb333b7e709c8138580f", "text": "Proton++ is a declarative multitouch framework that allows developers to describe multitouch gestures as regular expressions of touch event symbols. It builds on the Proton framework by allowing developers to incorporate custom touch attributes directly into the gesture description. These custom attributes increase the expressivity of the gestures, while preserving the benefits of Proton: automatic gesture matching, static analysis of conflict detection, and graphical gesture creation. 
We demonstrate Proton++'s flexibility with several examples: a direction attribute for describing trajectory, a pinch attribute for detecting when touches move towards one another, a touch area attribute for simulating pressure, an orientation attribute for selecting menu items, and a screen location attribute for simulating hand ID. We also use screen location to simulate user ID and enable simultaneous recognition of gestures by multiple users. In addition, we show how to incorporate timing into Proton++ gestures by reporting touch events at a regular time interval. Finally, we present a user study that suggests that users are roughly four times faster at interpreting gestures written using Proton++ than those written in procedural event-handling code commonly used today.", "title": "" }, { "docid": "8261ce69652ba278f9154c364a1f558a", "text": "Recently, the skill involved in playing and mastering video games has led to the professionalization of the activity in the form of ‘esports’ (electronic sports). The aim of the present paper was to review the main topics of psychological interest about esports and then to examine the similarities of esports to professional and problem gambling. As a result of a systematic literature search, eight studies were identified that had investigated three topics: (1) the process of becoming an esport player, (2) the characteristics of esport players such as mental skills and motivations, and (3) the motivations of esport spectators. These findings draw attention to the new research field of professional video game playing and provides some preliminary insight into the psychology of esports players. The paper also examines the similarities between esport players and professional gamblers (and more specifically poker players). It is suggested that future research should focus on esport players’ psychological vulnerability because some studies have begun to investigate the difference between problematic and professional gambling and this might provide insights into whether the playing of esports could also be potentially problematic for some players.", "title": "" }, { "docid": "27fff4fe7d8c40eb0518639eb176dba9", "text": "This paper presents a hybrid AC/DC micro grid concept to directly integrate DC/AC renewable sources and loads to DC/AC links respectively. The hybrid grid eliminates multiple DC-AC-DC&AC-DC-AC conversions in an individual AC&DC grid. The hybrid grid increases system efficiency, eliminates the embedded AC/DC and DC/DC converters in various home, office and industry facilities which can reduce size and cost of those facilities. The basic architecture of the hybrid grid is introduced in this paper. Different operation modes of the hybrid grid are discussed. The various control algorithms are investigated and proposed to harness the maximum power from various renewable sources, to store energy surplus during low peak loads, to eliminate unbalance problem in AC link, to maintain voltage stability and smooth power transfer between AC and DC links under various generation and load conditions. A prototype of the hybrid grid under construction is presented. Some simulation and test results are presented.", "title": "" }, { "docid": "fff9e38c618a6a644e3795bdefd74801", "text": "Several code smell detection tools have been developed providing different results, because smells can be subjectively interpreted, and hence detected, in different ways. 
In this paper, we perform the largest experiment of applying machine learning algorithms to code smells to the best of our knowledge. We experiment 16 different machine-learning algorithms on four code smells (Data Class, Large Class, Feature Envy, Long Method) and 74 software systems, with 1986 manually validated code smell samples. We found that all algorithms achieved high performances in the cross-validation data set, yet the highest performances were obtained by J48 and Random Forest, while the worst performance were achieved by support vector machines. However, the lower prevalence of code smells, i.e., imbalanced data, in the entire data set caused varying performances that need to be addressed in the future studies. We conclude that the application of machine learning to the detection of these code smells can provide high accuracy (>96 %), and only a hundred training examples are needed to reach at least 95 % accuracy.", "title": "" }, { "docid": "df9a3910e449bbe3609e6d11e1425bd7", "text": "The sentiment of a sentence or a comment can be detected more accurately by applying Word Embeddings. This article presents the idea of word co-occurrence matrix and Skip-Gram to determine the actual contexts of the words, Hellinger PCA to determine the most similar words and generate a sliding window of most probable context words around each word. It is shown that, by applying Word Embeddings to classify the sentiment of a comment achieves higher accuracy with larger corpus. For our corpus of 2500 comments, the accuracy achieved is 70%, which is rapidly increasing with the size of the corpus.", "title": "" }, { "docid": "da6771ebd128ce1dc58f2ab1d56b065f", "text": "We present a method for the automatic classification of text documents into a dynamically defined set of topics of interest. The proposed approach requires only a domain ontology and a set of user-defined classification topics, specified as contexts in the ontology. Our method is based on measuring the semantic similarity of the thematic graph created from a text document and the ontology sub-graphs resulting from the projection of the defined contexts. The domain ontology effectively becomes the classifier, where classification topics are expressed using the defined ontological contexts. In contrast to the traditional supervised categorization methods, the proposed method does not require a training set of documents. More importantly, our approach allows dynamically changing the classification topics without retraining of the classifier. In our experiments, we used the English language Wikipedia converted to an RDF ontology to categorize a corpus of current Web news documents into selection of topics of interest. The high accuracy achieved in our tests demonstrates the effectiveness of the proposed method, as well as the applicability of Wikipedia for semantic text categorization purposes.", "title": "" }, { "docid": "cd274d98201f27fe6159e6db2f7db8aa", "text": "Due to the appearance of antibiotic resistance and the toxicity associated with currently used antibiotics, peptide antibiotics are the need of the hour. Thus, demand for new antimicrobial agents has brought great interest in new technologies to enhance safety. One such antimicrobial molecule is bacteriocin, synthesised by various micro-organisms. Bacteriocins are widely used in agriculture, veterinary medicine as a therapeutic, and as a food preservative agent to control various infectious and food-borne pathogens. 
In this review, we highlight the potential therapeutic and food preservative applications of bacteriocin.", "title": "" }, { "docid": "ec5abeb42b63ed1976cd47d3078c35c9", "text": "In semistructured data, the information that is normally associated with a schema is contained within the data, which is sometimes called “self-describing”. In some forms of semistructured data there is no separate schema, in others it exists but only places loose constraints on the data. Semistructured data has recently emerged as an important topic of study for a variety of reasons. First, there are data sources such as the Web, which we would like to treat as databases but which cannot be constrained by a schema. Second, it may be desirable to have an extremely flexible format for data exchange between disparate databases. Third, even when dealing with structured data, it may be helpful to view it. as semistructured for the purposes of browsing. This tutorial will cover a number of issues surrounding such data: finding a concise formulation, building a sufficiently expressive language for querying and transformation, and optimizat,ion problems.", "title": "" }, { "docid": "fa888e57652804e86c900c8e1041d399", "text": "BACKGROUND\nJehovah's Witness patients (Witnesses) who undergo cardiac surgery provide a unique natural experiment in severe blood conservation because anemia, transfusion, erythropoietin, and antifibrinolytics have attendant risks. Our objective was to compare morbidity and long-term survival of Witnesses undergoing cardiac surgery with a similarly matched group of patients who received transfusions.\n\n\nMETHODS\nA total of 322 Witnesses and 87 453 non-Witnesses underwent cardiac surgery at our center from January 1, 1983, to January 1, 2011. All Witnesses prospectively refused blood transfusions. Among non-Witnesses, 38 467 did not receive blood transfusions and 48 986 did. We used propensity methods to match patient groups and parametric multiphase hazard methods to assess long-term survival. Our main outcome measures were postoperative morbidity complications, in-hospital mortality, and long-term survival.\n\n\nRESULTS\nWitnesses had fewer acute complications and shorter length of stay than matched patients who received transfusions: myocardial infarction, 0.31% vs 2.8% (P = . 01); additional operation for bleeding, 3.7% vs 7.1% (P = . 03); prolonged ventilation, 6% vs 16% (P < . 001); intensive care unit length of stay (15th, 50th, and 85th percentiles), 24, 25, and 72 vs 24, 48, and 162 hours (P < . 001); and hospital length of stay (15th, 50th, and 85th percentiles), 5, 7, and 11 vs 6, 8, and 16 days (P < . 001). Witnesses had better 1-year survival (95%; 95% CI, 93%-96%; vs 89%; 95% CI, 87%-90%; P = . 007) but similar 20-year survival (34%; 95% CI, 31%-38%; vs 32% 95% CI, 28%-35%; P = . 90).\n\n\nCONCLUSIONS\nWitnesses do not appear to be at increased risk for surgical complications or long-term mortality when comparisons are properly made by transfusion status. Thus, current extreme blood management strategies do not appear to place patients at heightened risk for reduced long-term survival.", "title": "" } ]
scidocsrr
c8c44ec46585285a00a3b9a15a2771fb
Faceted Wikipedia Search
[ { "docid": "ee95ad7e7243607b56e92b6cb4228288", "text": "We have developed an innovative search interface that allows non-expert users to move through large information spaces in a flexible manner without feeling lost. The design goal was to offer users a “browsing the shelves” experience seamlessly integrated with focused search. Key to achieving our goal is the explicit exposure of hierarchical faceted metadata in a manner that is intuitive and inviting to users. After several iterations of design and testing, the usability results are strikingly positive. We believe our approach marks a major step forward in search user interfaces and can serve as a model for web-based collections of up to 100,000 items. Topics: Search User Interfaces, Faceted Metadata INTRODUCTION Although general Web search is steadily improving [30], studies show that search is still the primary usability problem in web site design. A recent report by Vividence Research analyzing 69 web sites found that the most common usability problem was poorly organized search results, affecting 53% of sites studied. The second most common problem was poor information architecture, affecting 32% of sites [27]. Studies of search behavior reveal that good search involves both broadening and narrowing of the query, appropriate selection of terminology, and the ability to modify the query [31]. Still others show that users often express a concern about online search systems since they do not allow a “browsing the shelves” experience afforded by physical libraries [6] and that users like wellstructured hyperlinks but often feel lost when navigating through complex sites [23]. Our goals are to support search usability guidelines [28], while avoiding negative consequences like empty result sets or feelings of being lost. We are especially interested in large collections of similar-style items (such as product catalog sites, sites consisting of collections of images, or text documents on a topic such as medicine or law). Our approach is to follow iterative design practices from the field of human-computer interaction [29], meaning that we first assess the behavior of the target users, then prototype a system, then assess that system with target users, learn from and adjust to the problems found, and repeat until a successful interface is produced. We have applied this method to the problem of creating an information architecture that seamlessly integrates navigation and free-text search into one interface. This system builds on earlier work that shows the importance of query previews [25] for indicating next choices (thus allowing the user to use recognition over recall) and avoiding empty result sets. The approach makes use of faceted hierarchical metadata (described below) as the basis for a navigation structure showing next choices, providing alternative views, and permitting refinement and expansion in new directions, while at the same time maintaining a consistent representation of the collection’s structure [14]. This use of metadata is integrated with free-text search, allowing the user to follow links, then add search terms, then follow more links, without interrupting the interaction flow. Our most recent usability studies show strong, positive results along most measured variables. An added advantage of this framework is that it can be built using off-the-shelf database technology, and it allows the contents of the collection to be changed without requiring the web site maintainer to change the system or the interface. 
For these reasons, we believe these results should influence the design of information architecture of information-centric web sites. In the following sections we define the metadata-based terminology, describe the interface framework as applied to a collection of architectural images, report the results of usability studies, discuss related work, and discuss the implications of these results. METADATA Content-oriented category metadata has become more prevalent in the last few years, and many people are interested in standards for describing content in various fields (e.g., Dublin Core and the Semantic Web). Web directories such as Yahoo and the Open Directory Project are familiar examples of the use of metadata for navigation structures. Web search engines have begun to interleave search hits on category labels with other search results. Many individual collections already have rich metadata assigned to their contents; for example, biomedical journal articles have on average a dozen or more content attributes attached to them. Metadata for organizing content collections can be classified along several dimensions: • The metadata may be faceted, that is, composed of orthogonal sets of categories. For example, in the domain of architectural images, some possible facets might be Materials (concrete, brick, wood, etc.), Styles (Baroque, Gothic, Ming, etc.), View Types, People (architects, artists, developers, etc.), Locations, Periods, and so on. • The metadata (or an individual facet) may be hierarchical (“located in Berkeley, California, United States”) or flat (“by Ansel Adams”). • The metadata (or an individual facet) may be single-valued or multi-valued. That is, the data may be constrained so that at most one value can be assigned to an item (“measures 36 cm tall”) or it may allow multiple values to be assigned to an item (“uses oil paint, ink, and watercolor”). We note that there are a number of issues associated with creation of metadata itself which we are not addressing here. The most pressing problem is how to decide which descriptors are correct or at least most appropriate for a collection of information. Another problem relates to how to assign metadata descriptors to items that currently do not have metadata assigned. We will not be addressing these issues, in part because many other researchers already are, and because the fact remains that there are many existing, important collections whose contents have hierarchical metadata already assigned. RECIPE USABILITY STUDY We are particularly concerned with supporting non-professional searchers in rich information seeking tasks. Specifically we aim to answer the following questions: do users like and understand flexible organizations of metadata from different hierarchies? Are faceted hierarchies preferable to single hierarchies? Do people prefer to follow category-based hyperlinks or do they prefer to issue a keyword-based query and sort through results listings? Figure 1: The opening page for both interfaces shows a text search box and the first level of metadata terms. Hovering over a facet name yields a tooltip (here shown below Locations) explaining the meaning of the facet. Before developing our system, we tested the idea of using hierarchical faceted metadata on an existing interface that exemplified some of our design goals.
This preliminary study was conducted using a commercial recipe web site called Epicurious containing five flat facets, 93 metadata terms, and approximately 13,000 recipes. We compared the three available search interfaces: (1) Simple keyword search, with unsorted results list; (2) Enhanced search form that exposes metadata using checkboxes and drop-down lists, with unsorted results list; (3) Browse interface that allows the user to navigate through the collection, implicitly building up a query consisting of an AND across facets; selecting a category within a facet (e.g., Pasta within Main Ingredient) narrows the results set, and users are shown query previews at every step. In the interests of space, we can only provide a brief summary of this small (9 participant) study: All the participants who liked the site (7 out of 9) said they were likely to use the browse interface again. Only 4 said this about enhanced search and 0 said this about simple search. Participants especially liked the browse interface for open-ended tasks such as "plan a dinner party." We took this as encouraging support for the faceted metadata approach. However, the recipe browse facility is lacking in several ways. Free-text search is not integrated with metadata browse, the collection and metadata are of only moderate size, and the metadata is organized into flat (non-hierarchical) facets. Finally, users are only allowed to refine queries; they cannot broaden them.", "title": "" } ]
[ { "docid": "bd2adf12f6d6bd0c50b7fa6fceb7f568", "text": "The lack of a common benchmark for the evaluation of the gaze estimation task from RGB and RGB-D data is a serious limitation for distinguishing the advantages and disadvantages of the many proposed algorithms found in the literature. This paper intends to overcome this limitation by introducing a novel database along with a common framework for the training and evaluation of gaze estimation approaches. In particular, we have designed this database to enable the evaluation of the robustness of algorithms with respect to the main challenges associated to this task: i) Head pose variations; ii) Person variation; iii) Changes in ambient and sensing conditions and iv) Types of target: screen or 3D object.", "title": "" }, { "docid": "dd37e97635b0ded2751d64cafcaa1aa4", "text": "DEVICES, AND STRUCTURES By S.E. Lyshevshi, CRC Press, 2002. This book is the first of the CRC Press “Nanoand Microscience, Engineering, Technology, and Medicine Series,” of which the author of this book is also the editor. This book could be a textbook of a semester course on microelectro mechanical systems (MEMS) and nanoelectromechanical systems (NEMS). The objective is to cover the topic from basic theory to the design and development of structures of practical devices and systems. The idea of MEMS and NEMS is to utilize and further extend the technology of integrated circuits (VLSI) to nanometer structures of mechanical and biological devices for potential applications in molecular biology and medicine. MEMS and NEMS (nanotechnology) are hot topics in the future development of electronics. The interest is not limited to electrical engineers. In fact, many scientists and researchers are interested in developing MEMS and NEMS for biological and medical applications. Thus, this field has attracted researchers from many different fields. Many new books are coming out. This book seems to be the first one aimed to be a textbook for this field, but it is very hard to write a book for readers with such different backgrounds. The author of this book has emphasized computer modeling, mostly due to his research interest in this field. It would be good to provide coverage on biological and medical MEMS, for example, by reporting a few gen or DNA-related cases. Furthermore, the mathematical modeling in term of a large number of nonlinear coupled differential equations, as used in many places in the book, does not appear to have any practical value to the actual physical structures.", "title": "" }, { "docid": "70fa03bcd9c5eec86050052ea77d30fd", "text": "The importance of SMEs SMEs (small and medium-sized enterprises) account for 60 to 70 per cent of jobs in most OECD countries, with a particularly large share in Italy and Japan, and a relatively smaller share in the United States. Throughout they also account for a disproportionately large share of new jobs, especially in those countries which have displayed a strong employment record, including the United States and the Netherlands. Some evidence points also to the importance of age, rather than size, in job creation: young firms generate more than their share of employment. However, less than one-half of start-ups survive for more than five years and only a fraction develop into the high-growth firms which make important contributions to job creation. High job turnover poses problems for employment security; and small establishments are often exempt from giving notice to their employees. 
Small firms also tend to invest less in training and rely relatively more on external recruitment for raising competence. The demand for reliable, relevant and internationally comparable data on SMEs is on the rise, and statistical offices have started to expand their collection and publication of data. International comparability is still weak, however, due to divergent size-class definitions and sector classifications. To enable useful policy analysis, OECD governments need to improve their build-up of data, without creating additional obstacles for firms through the burden of excessive paper work. The greater variance in profitability, survival and growth of SMEs compared to larger firms accounts for special problems in financing. SMEs generally tend to be confronted with higher interest rates, as well as credit rationing due to shortage of collateral. The issues that arise in financing differ considerably between existing and new firms, as well as between those which grow slowly and those that grow rapidly. The expansion of private equity markets, including informal markets, has greatly improved the access to venture capital for start-ups and SMEs, but considerable differences remain among countries. Regulatory burdens remain a major obstacle for SMEs as these firms tend to be poorly equipped to deal with the problems arising from regulations. Access to information about regulations should be made available to SMEs at minimum cost. Policy makers must ensure that the compliance procedures associated with, e.g. R&D and new technologies, are not unnecessarily costly, complex or lengthy. Transparency is of particular importance to SMEs, and information technology has great potential to narrow the information …", "title": "" }, { "docid": "9263fd7d4846157332322697a482a68d", "text": "Mental fatigue is a psychobiological state caused by prolonged periods of demanding cognitive activity. Although the impact of mental fatigue on cognitive and skilled performance is well known, its effect on physical performance has not been thoroughly investigated. In this randomized crossover study, 16 subjects cycled to exhaustion at 80% of their peak power output after 90 min of a demanding cognitive task (mental fatigue) or 90 min of watching emotionally neutral documentaries (control). After experimental treatment, a mood questionnaire revealed a state of mental fatigue (P = 0.005) that significantly reduced time to exhaustion (640 +/- 316 s) compared with the control condition (754 +/- 339 s) (P = 0.003). This negative effect was not mediated by cardiorespiratory and musculoenergetic factors as physiological responses to intense exercise remained largely unaffected. Self-reported success and intrinsic motivation related to the physical task were also unaffected by prior cognitive activity. However, mentally fatigued subjects rated perception of effort during exercise to be significantly higher compared with the control condition (P = 0.007). As ratings of perceived exertion increased similarly over time in both conditions (P < 0.001), mentally fatigued subjects reached their maximal level of perceived exertion and disengaged from the physical task earlier than in the control condition. In conclusion, our study provides experimental evidence that mental fatigue limits exercise tolerance in humans through higher perception of effort rather than cardiorespiratory and musculoenergetic mechanisms. 
Future research in this area should investigate the common neurocognitive resources shared by physical and mental activity.", "title": "" }, { "docid": "dbe5561dc992bab2b3fbebca5412fd39", "text": "Detox diets are popular dieting strategies that claim to facilitate toxin elimination and weight loss, thereby promoting health and well-being. The present review examines whether detox diets are necessary, what they involve, whether they are effective and whether they present any dangers. Although the detox industry is booming, there is very little clinical evidence to support the use of these diets. A handful of clinical studies have shown that commercial detox diets enhance liver detoxification and eliminate persistent organic pollutants from the body, although these studies are hampered by flawed methodologies and small sample sizes. There is preliminary evidence to suggest that certain foods such as coriander, nori and olestra have detoxification properties, although the majority of these studies have been performed in animals. To the best of our knowledge, no randomised controlled trials have been conducted to assess the effectiveness of commercial detox diets in humans. This is an area that deserves attention so that consumers can be informed of the potential benefits and risks of detox programmes.", "title": "" }, { "docid": "695af0109c538ca04acff8600d6604d4", "text": "Human actions can be represented by the trajectories of skeleton joints. Traditional methods generally model the spatial structure and temporal dynamics of human skeleton with hand-crafted features and recognize human actions by well-designed classifiers. In this paper, considering that recurrent neural network (RNN) can model the long-term contextual information of temporal sequences well, we propose an end-to-end hierarchical RNN for skeleton based action recognition. Instead of taking the whole skeleton as the input, we divide the human skeleton into five parts according to human physical structure, and then separately feed them to five subnets. As the number of layers increases, the representations extracted by the subnets are hierarchically fused to be the inputs of higher layers. The final representations of the skeleton sequences are fed into a single-layer perceptron, and the temporally accumulated output of the perceptron is the final decision. We compare with five other deep RNN architectures derived from our model to verify the effectiveness of the proposed network, and also compare with several other methods on three publicly available datasets. Experimental results demonstrate that our model achieves the state-of-the-art performance with high computational efficiency.", "title": "" }, { "docid": "c018a5cb5e89ee697f20d634ea360954", "text": "A comprehensive approach to the design of a stripline for EMC testing is given in this paper. The authors attention has been focused on the design items that are most crucial by the achievement of satisfactory value of the VSWR and the impedance matching at the feeding ports in the extended frequency range from 80 MHz to 1000 GHz. For this purpose, the Vivaldi-structure and other advanced structures were considered. 
The theoretical approach based on numerical simulations lead to conclusions which have been applied by the physical design and also evaluated by experimental results.", "title": "" }, { "docid": "c8482ed26ba2c4ba1bd3eed6ac0e00b4", "text": "Virtual Reality (VR) has now emerged as a promising tool in many domains of therapy and rehabilitation (Rizzo, Schultheis, Kerns & Mateer, 2004; Weiss & Jessel, 1998; Zimand, Anderson, Gershon, Graap, Hodges, & Rothbaum, 2002; Glantz, Rizzo & Graap, 2003). Continuing advances in VR technology along with concomitant system cost reductions have supported the development of more usable, useful, and accessible VR systems that can uniquely target a wide range of physical, psychological, and cognitive rehabilitation concerns and research questions. What makes VR application development in the therapy and rehabilitation sciences so distinctively important is that it represents more than a simple linear extension of existing computer technology for human use. VR offers the potential to create systematic human testing, training and treatment environments that allow for the precise control of complex dynamic 3D stimulus presentations, within which sophisticated interaction, behavioral tracking and performance recording is possible. Much like an aircraft simulator serves to test and train piloting ability, virtual environments (VEs) can be developed to present simulations that assess and rehabilitate human functional performance under a range of stimulus conditions that are not easily deliverable and controllable in the real world. When combining these assets within the context of functionally relevant, ecologically enhanced VEs, a fundamental advancement could emerge in how human functioning can be addressed in many rehabilitation disciplines.", "title": "" }, { "docid": "9ed5fdb991edd5de57ffa7f13121f047", "text": "We analyze the increasing threats against IoT devices. We show that Telnet-based attacks that target IoT devices have rocketed since 2014. Based on this observation, we propose an IoT honeypot and sandbox, which attracts and analyzes Telnet-based attacks against various IoT devices running on different CPU architectures such as ARM, MIPS, and PPC. By analyzing the observation results of our honeypot and captured malware samples, we show that there are currently at least 5 distinct DDoS malware families targeting Telnet-enabled IoT devices and one of the families has quickly evolved to target more devices with as many as 9 different CPU architectures.", "title": "" }, { "docid": "41261cf72d8ee3bca4b05978b07c1c4f", "text": "The association of Sturge-Weber syndrome with naevus of Ota is an infrequently reported phenomenon and there are only four previously described cases in the literature. In this paper we briefly review the literature regarding the coexistence of vascular and pigmentary naevi and present an additional patient with the association of the Sturge-Weber syndrome and naevus of Ota.", "title": "" }, { "docid": "5d5a103852019f1de8455e4d13c0e82a", "text": "INTRODUCTION The cryptocurrency market has evolved erratically and at unprecedented speed over the course of its short lifespan. Since the release of the pioneer anarchic cryptocurrency, Bitcoin, to the public in January 2009, more than 550 cryptocurrencies have been developed, the majority with only a modicum of success [1]. Research on the industry is still scarce. 
The majority of it is singularly focused on Bitcoin rather than a more diverse spread of cryptocurrencies and is steadily being outpaced by fluid industry developments, including new coins, technological progression, and increasing government regulation of the markets. Though the fluidity of the industry does, admittedly, present a challenge to research, a thorough evaluation of the cryptocurrency industry writ large is necessary. This paper seeks to provide a concise yet comprehensive analysis of the cryptocurrency industry with particular analysis of Bitcoin, the first decentralized cryptocurrency. Particular attention will be given to examining theoretical economic differences between existing coins. Section 1 of this paper provides an overview of the industry. Section 1.1 provides a brief history of digital currencies, which segues into a discussion of Bitcoin in section 1.2. Section 2 of this paper provides an in-depth analysis of coin economics, partitioning the major currencies by their network security protocol mechanisms, and discussing the long-term theoretical implications that these classes entail. Section 2.1 will discuss network security protocol. The mechanisms will be discussed in the order that follows. Section 2.2 will discuss the proof-of-work (PoW) mechanism used in the Bitcoin protocol and various altcoins. Section 2.3 will discuss the proof-of-stake (PoS) protocol scheme first introduced by Peercoin in 2011, which relies on a less energy intensive security mechanism than PoW. Section 2.4 will discuss a hybrid PoW/PoS mechanism. Section 2.5 will discuss the Byzantine Consensus mechanism. Section 2.6 presents the results of a systematic review of 21 cryptocurrencies. Section 3 provides an overview of factors affecting industry growth, focusing heavily on the regulatory environment in section 3.1. Section 3.2 discusses public perception and acceptance of cryptocurrency as a payment system in the current retail environment. Section 4 concludes the analysis. A note on sources: Because the cryptocurrency industry is still young and factors that impact it are changing on a daily basis, few comprehensive or fully updated academic sources exist on the topic. While academic work was of course consulted for this project, the majority of the information that informs this paper was derived from …", "title": "" }, { "docid": "fec2b6b7cdef1ddf88dffd674fe7111a", "text": "This paper introduces Dex, a reinforcement learning environment toolkit specialized for training and evaluation of continual learning methods as well as general reinforcement learning problems. We also present the novel continual learning method of incremental learning, where a challenging environment is solved using optimal weight initialization learned from first solving a similar easier environment. We show that incremental learning can produce vastly superior results than standard methods by providing a strong baseline method across ten Dex environments. We finally develop a saliency method for qualitative analysis of reinforcement learning, which shows the impact incremental learning has on network attention.", "title": "" }, { "docid": "4a26443fd7e16c7af86bcf07c6ba39ca", "text": "This study proposes representative figures of merit for circadian and vision performance for healthy and efficient use of smartphone displays. 
The recently developed figures of merit for circadian luminous efficacy of radiation (CER) and circadian illuminance (CIL) related to human health and circadian rhythm were measured to compare three kinds of commercial smartphone displays. The CIL values for social network service (SNS) messenger screens from all three displays were higher than 41.3 biolux (blx) in a dark room at night, and the highest CIL value reached 50.9 blx. These CIL values corresponded to melatonin suppression values (MSVs) of 7.3% and 11.4%, respectively. Moreover, smartphone use in a bright room at night had much higher CIL and MSV values (58.7 ~ 105.2 blx and 15.4 ~ 36.1%, respectively). This study also analyzed the nonvisual and visual optical properties of the three smartphone displays while varying the distance between the screen and eye and controlling the brightness setting. Finally, a method to possibly attenuate the unhealthy effects of smartphone displays was proposed and investigated by decreasing the emitting wavelength of blue LEDs in a smartphone LCD backlight and subsequently reducing the circadian effect of the display.", "title": "" }, { "docid": "165fcc5242321f6fed9c353cc12216ff", "text": "Fingerprint alteration represents one of the newest challenges in biometric identification. The aim of fingerprint mutilation is to destroy the structure of the papillary ridges so that the identity of the offender cannot be recognized by the biometric system. The problem has received little attention and there is a lack of a real world altered fingerprints database that would allow researchers to develop new algorithms and techniques for altered fingerprints detection. The major contribution of this paper is that it provides a new public database of synthetically altered fingerprints. Starting from the cases described in the literature, three methods for generating simulated altered fingerprints are proposed.", "title": "" }, { "docid": "2c5eb3fb74c6379dfd38c1594ebe85f4", "text": "Accurately recognizing speaker emotion and age/gender from speech can provide better user experience for many spoken dialogue systems. In this study, we propose to use deep neural networks (DNNs) to encode each utterance into a fixed-length vector by pooling the activations of the last hidden layer over time. The feature encoding process is designed to be jointly trained with the utterance-level classifier for better classification. A kernel extreme learning machine (ELM) is further trained on the encoded vectors for better utterance-level classification. Experiments on a Mandarin dataset demonstrate the effectiveness of our proposed methods on speech emotion and age/gender recognition tasks.", "title": "" }, { "docid": "c02e7ece958714df34539a909c2adb7d", "text": "Despite the growing evidence of the association between shame experiences and eating psychopathology, the specific effect of body image-focused shame memories on binge eating remains largely unexplored. The current study examined this association and considered current body image shame and self-criticism as mediators. A multi-group path analysis was conducted to examine gender differences in these relationships. The sample included 222 women and 109 men from the Portuguese general and college student populations who recalled an early body image-focused shame experience and completed measures of the centrality of the shame memory, current body image shame, binge eating symptoms, depressive symptoms, and self-criticism. 
For both men and women, the effect of the centrality of shame memories related to body image on binge eating symptoms was fully mediated by body image shame and self-criticism. In women, these effects were further mediated by self-criticism focused on a sense of inadequacy and also on self-hatred. In men, only the form of self-criticism focused on a sense of inadequacy mediated these associations. The present study has important implications for the conceptualization and treatment of binge eating symptoms. Findings suggest that, in both genders, body image-focused shame experiences are associated with binge eating symptoms via their effect on current body image shame and self-criticism.", "title": "" }, { "docid": "f66f9e04fe16dd4a1de20554e25ec902", "text": "Motor imagery (MI) based brain-computer interface (BCI) plays a crucial role in various scenarios ranging from post-traumatic rehabilitation to control prosthetics. Computer-aided interpretation of MI has augmented prior mentioned scenarios since decades but failed to address interpersonal variability. Such variability further escalates in case of multi-class MI, which is currently a common practice. The failures due to interpersonal variability can be attributed to handcrafted features as they failed to extract more generalized features. The proposed approach employs convolution neural network (CNN) based model with both filtering (through axis shuffling) and feature extraction to avail end-to-end training. Axis shuffling is performed adopted in initial blocks of the model for 1D preprocessing and reduce the parameters required. Such practice has avoided the overfitting which resulted in an improved generalized model. Publicly available BCI Competition-IV 2a dataset is considered to evaluate the proposed model. The proposed model has demonstrated the capability to identify subject-specific frequency band with an average and highest accuracy of 70.5% and S3.6% respectively. Proposed CNN model can classify in real time without relying on accelerated computing device like GPU.", "title": "" }, { "docid": "ee997fc4bf329ef2918d5dbe021b3be3", "text": "This study examines the potential link of Facebook group participation with viral advertising responses. The results suggest that college-aged Facebook group members engage in higher levels of self-disclosure and maintain more favorable attitudes toward social media and advertising in general than do nongroup members. However, Facebook group participation does not exert an influence on users' viral advertising pass-on behaviors. The results also identify variations in predictors of passon behaviors between group members and nonmembers. These findings have theoretical and managerial implications for viral advertising on Facebook.", "title": "" }, { "docid": "124d740d3796d6a707100e0d8c384f1f", "text": "We present Nodeinfo, an unsupervised algorithm for anomaly detection in system logs. We demonstrate Nodeinfo's effectiveness on data from four of the world's most powerful supercomputers: using logs representing over 746 million processor-hours, in which anomalous events called alerts were manually tagged for scoring, we aim to automatically identify the regions of the log containing those alerts. We formalize the alert detection task in these terms, describe how Nodeinfo uses the information entropy of message terms to identify alerts, and present an online version of this algorithm, which is now in production use. 
This is the first work to investigate alert detection on (several) publicly-available supercomputer system logs, thereby providing a reproducible performance baseline.", "title": "" }, { "docid": "ec9810e7def2ae57493996b460540af0", "text": "PURPOSE\nTo describe the results of a diabetic retinopathy screening program implemented in a primary care area.\n\n\nMETHODS\nA retrospective study was conducted using data automatically collected since the program began on 1 January 2007 until 31 December 2015.\n\n\nRESULTS\nThe number of screened diabetic patients has progressively increased, from 7,173 patients in 2007 to 42,339 diabetic patients in 2015. Furthermore, the ability of family doctors to correctly interpret retinographies has improved, with the proportion of retinal images classified as normal having increased from 55% in 2007 to 68% at the end of the study period. The proportion of non-evaluable retinographies decreased to 7% in 2015, having peaked at 15% during the program. This was partly due to a change in the screening program policy that allowed the use of tropicamide. The number of severe cases detected has declined, from 14% with severe non-proliferative and proliferativediabetic retinopathy in the initial phase of the program to 3% in 2015.\n\n\nCONCLUSIONS\nDiabetic eye disease screening by tele-ophthalmology has shown to be a valuable method in a growing population of diabetics. It leads to a regular medical examination of patients, helps ease the workload of specialised care services and favours the early detection of treatable cases. However, the results of implementing a program of this type are not immediate, achieving only modest results in the early years of the project that have improved over subsequent years.", "title": "" } ]
scidocsrr
52884d8b767e745bb5f1d901fb8d6e08
Specimen Box: A tangible interaction technique for world-fixed virtual reality displays
[ { "docid": "c9b7832cd306fc022e4a376f10ee8fc8", "text": "This paper describes a study to assess the influence of a variety of factors on reported level of presence in immersive virtual environments. It introduces the idea of stacking depth, that is, where a participant can simulate the process of entering the virtual environment while already in such an environment, which can be repeated to several levels of depth. An experimental study including 24 subjects was carried out. Half of the subjects were transported between environments by using virtual head-mounted displays, and the other half by going through doors. Three other binary factors were whether or not gravity operated, whether or not the subject experienced a virtual precipice, and whether or not the subject was followed around by a virtual actor. Visual, auditory, and kinesthetic representation systems and egocentric/exocentric perceptual positions were assessed by a preexperiment questionnaire. Presence was assessed by the subjects as their sense of being there, the extent to which they experienced the virtual environments as more the presenting reality than the real world in which the experiment was taking place, and the extent to which the subject experienced the virtual environments as places visited rather than images seen. A logistic regression analysis revealed that subjective reporting of presence was significantly positively associated with visual and kinesthetic representation systems, and negatively with the auditory system. This was not surprising since the virtual reality system used was primarily visual. The analysis also showed a significant and positive association with stacking level depth for those who were transported between environments by using the virtual HMD, and a negative association for those who were transported through doors. Finally, four of the subjects moved their real left arm to match movement of the left arm of the virtual body displayed by the system. These four scored significantly higher on the kinesthetic representation system than the remainder of the subjects.", "title": "" }, { "docid": "c8fb5b6ece46d0a70fbd4b46e6fecc25", "text": "The aim of this study was to analyze the performance of Spanish-English bilinguals on the Golden Stroop Test. The effects of bilingualism, participants' age, age of acquisition of the second language, and proficiency in each language were analyzed. Participants consisted of 71 Spanish-English bilinguals, 40 English monolinguals, and 11 Spanish monolinguals from South Florida. Proficiency in Spanish and English was established using a self-report questionnaire and the English and Spanish versions of the Boston Naming Test. In bilinguals, the Golden Stroop Test was administered in English and in Spanish. Overall, performance was slower in bilinguals than in monolinguals. No significant differences were observed in color reading but bilinguals performed worse in the naming color condition. Even though bilinguals were 5% to 10% slower in the color-word condition, one-way ANOVAs revealed no statistically significant differences between groups. Within the bilingual group, the Stroop Test scores were similar in both English and Spanish. Age of acquisition of the second language did not predict the Stroop Test performance. Repeated measures ANOVA demonstrated a significant interaction between Language Proficiency x Language (in which the test was administered) in some of the ST conditions. 
In balanced bilinguals, the language used in the ST did not matter, but in unbalanced subjects, the best-spoken language showed better results. In addition, our results support the presence of both between- and within-language interference in Spanish-English bilinguals. Different conceptualization models of the structure of bilingual memory are disclosed.", "title": "" } ]
[ { "docid": "59db435e906db2c198afdc5cc7c7de2c", "text": "Although the recent advances in the sparse representations of images have achieved outstanding denosing results, removing real, structured noise in digital videos remains a challenging problem. We show the utility of reliable motion estimation to establish temporal correspondence across frames in order to achieve high-quality video denoising. In this paper, we propose an adaptive video denosing framework that integrates robust optical flow into a non-local means (NLM) framework with noise level estimation. The spatial regularization in optical flow is the key to ensure temporal coherence in removing structured noise. Furthermore, we introduce approximate K-nearest neighbor matching to significantly reduce the complexity of classical NLM methods. Experimental results show that our system is comparable with the state of the art in removing AWGN, and significantly outperforms the state of the art in removing real, structured noise.", "title": "" }, { "docid": "f9143c2bb6c8271efa516ca54c9baef7", "text": "In recent years several measures for the gold standard based evaluation of ontology learning were proposed. They can be distinguished by the layers of an ontology (e.g. lexical term layer and concept hierarchy) they evaluate. Judging those measures with a list of criteria we show that there exist some measures sufficient for evaluating the lexical term layer. However, existing measures for the evaluation of concept hierarchies fail to meet basic criteria. This paper presents a new taxonomic measure which overcomes the problems of current approaches.", "title": "" }, { "docid": "6c3be94fe73ef79d711ef5f8b9c789df", "text": "• Belief update based on m last rewards • Gaussian belief model instead of Beta • Limited lookahead to h steps and a myopic function in the horizon. • Noisy rewards Motivation: Correct sequential decision-making is critical for life success, and optimal approaches require signi!cant computational look ahead. However, simple models seem to explain people’s behavior. Questions: (1) Why we seem so simple compared to a rational agent? (2) What is the built-in model that we use to sequentially choose between courses of actions?", "title": "" }, { "docid": "513eef1c207793a05275662642e0ed12", "text": "Personal skill information on social media is at the core of many interesting applications. In this paper, we propose a factor graph based approach to automatically infer skills from personal profile incorporated with both personal and skill connections. We first extract personal connections with similar academic and business background (e.g. co-major, co-university, and co-corporation). We then extract skill connections between skills from the same person. To well integrate various kinds of connections, we propose a joint prediction factor graph (JPFG) model to collectively infer personal skills with help of personal connection factor, skill connection factor, besides the normal textual attributes. Evaluation on a large-scale dataset from LinkedIn.com validates the effectiveness of our approach.", "title": "" }, { "docid": "ad0892ee2e570a8a2f5a90883d15f2d2", "text": "Supervised event extraction systems are limited in their accuracy due to the lack of available training data. We present a method for self-training event extraction systems by bootstrapping additional training data. This is done by taking advantage of the occurrence of multiple mentions of the same event instances across newswire articles from multiple sources. 
If our system can make a highconfidence extraction of some mentions in such a cluster, it can then acquire diverse training examples by adding the other mentions as well. Our experiments show significant performance improvements on multiple event extractors over ACE 2005 and TAC-KBP 2015 datasets.", "title": "" }, { "docid": "7e2382e25788c156a3bd59bbe63b4688", "text": "The performance of radio over fiber (RoF) links using low cost optoelectronic components is assessed for distributed antenna applications in next generation wireless systems. Important design issues are discussed and an example link design is presented for a wireless system requiring the transmission of four radio channels per link direction, each with 100 MHz bandwidth, modulation complexity of 256-QAM and 2048 OFDM subcarriers. We show that the noise introduced by the RoF links does not have a significant impact on wireless range, provided the wireless system has uplink power control. Finally, we compare the cost and performance of RoF links for this application with alternative link types that use digitized radio transmission and show that RoF is the optimum choice from a cost perspective.", "title": "" }, { "docid": "a836b7771937a15bc90d27de9fb8f9e4", "text": "Principal component analysis (PCA) is a mainstay of modern data analysis a black box that is widely used but poorly understood. The goal of this paper is to dispel the magic behind this black box. This tutorial focuses on building a solid intuition for how and why principal component analysis works; furthermore, it crystallizes this knowledge by deriving from first principals, the mathematics behind PCA . This tutorial does not shy away from explaining the ideas informally, nor does it shy away from the mathematics. The hope is that by addressing both aspects, readers of all levels will be able to gain a better understanding of the power of PCA as well as the when, the how and the why of applying this technique.", "title": "" }, { "docid": "0d51dc0edc9c4e1c050b536c7c46d49d", "text": "MOTIVATION\nThe identification of risk-associated genetic variants in common diseases remains a challenge to the biomedical research community. It has been suggested that common statistical approaches that exclusively measure main effects are often unable to detect interactions between some of these variants. Detecting and interpreting interactions is a challenging open problem from the statistical and computational perspectives. Methods in computing science may improve our understanding on the mechanisms of genetic disease by detecting interactions even in the presence of very low heritabilities.\n\n\nRESULTS\nWe have implemented a method using Genetic Programming that is able to induce a Decision Tree to detect interactions in genetic variants. This method has a cross-validation strategy for estimating classification and prediction errors and tests for consistencies in the results. To have better estimates, a new consistency measure that takes into account interactions and can be used in a genetic programming environment is proposed. This method detected five different interaction models with heritabilities as low as 0.008 and with prediction errors similar to the generated errors.\n\n\nAVAILABILITY\nInformation on the generated data sets and executable code is available upon request.", "title": "" }, { "docid": "0d5db96ce7153043e18faa28c7e3d2d7", "text": "Oriental ink painting, called Sumi-e, is one of the most appealing painting styles that has attracted artists around the world. 
Major challenges in computer-based Sumi-e simulation are to abstract complex scene information and draw smooth and natural brush strokes. To automatically generate such strokes, we propose to model a brush as a reinforcement learning agent, and learn desired brush-trajectories by maximizing the sum of rewards in the policy search framework. We also elaborate on the design of actions, states, and rewards tailored for a Sumi-e agent. The effectiveness of our proposed approach is demonstrated through simulated Sumi-e experiments.", "title": "" }, { "docid": "56e9ec407c7ece36464fbf5294d49de6", "text": "In recent years, neural networks have been applied to many text processing problems. One example is learning a similarity function between pairs of text, which has applications to paraphrase extraction, plagiarism detection, question answering, and ad hoc retrieval. Within the information retrieval community, the convolutional neural network model proposed by Severyn and Moschitti in a SIGIR 2015 paper has gained prominence. This paper focuses on the problem of answer selection for question answering: we attempt to replicate the results of Severyn and Moschitti using their open-source code as well as to reproduce their results via a de novo (i.e., from scratch) implementation using a completely different deep learning toolkit. Our de novo implementation is instructive in ascertaining whether reported results generalize across toolkits, each of which have their idiosyncrasies. We were able to successfully replicate and reproduce the reported results of Severyn and Moschitti, albeit with minor differences in effectiveness, but affirming the overall design of their model. Additional ablation experiments break down the components of the model to show their contributions to overall effectiveness. Interestingly, we find that removing one component actually increases effectiveness and that a simplified model with only four word overlap features performs surprisingly well, even better than convolution feature maps alone.", "title": "" }, { "docid": "f4f9a79bf6dc7afac056e9615c25c7f4", "text": "Multi-scanner Antivirus systems provide insightful information on the nature of a suspect application; however there is o‰en a lack of consensus and consistency between di‚erent Anti-Virus engines. In this article, we analyze more than 250 thousand malware signatures generated by 61 di‚erent Anti-Virus engines a‰er analyzing 82 thousand di‚erent Android malware applications. We identify 41 di‚erent malware classes grouped into three major categories, namely Adware, Harmful Œreats and Unknown or Generic signatures. We further investigate the relationships between such 41 classes using community detection algorithms from graph theory to identify similarities between them; and we €nally propose a Structure Equation Model to identify which Anti-Virus engines are more powerful at detecting each macro-category. As an application, we show how such models can help in identifying whether Unknown malware applications are more likely to be of Harmful or Adware type.", "title": "" }, { "docid": "ef30864113ba3d467fca85256e6329eb", "text": "This paper presents a method for compiling a large-scale bilingual corpus from a database of movie subtitles. To create the corpus, we propose an algorithm based on Gale and Church’s sentence alignment algorithm(1993). However, our algorithm not only relies on character length information, but also uses subtitle-timing information, which is encoded in the subtitle files. 
Timing is highly correlated between subtitles in different versions (for the same movie), since subtitles that match should be displayed at the same time. However, the absolute time values can’t be used for alignment, since the timing is usually specified by frame numbers and not by real time, and converting it to real time values is not always possible, hence we use normalized subtitle duration instead. This results in a significant reduction in the alignment error rate.", "title": "" }, { "docid": "6f7fc5e2953cfb8173ab5a54e3d16b93", "text": "There are three modalities in the reading comprehension setting: question, answer and context. The task of question answering or question generation aims to infer an answer or a question when given the counterpart based on context. We present a novel two-way neural sequence transduction model that connects three modalities, allowing it to learn two tasks simultaneously and mutually benefit one another. During training, the model receives question-context-answer triplets as input and captures the cross-modal interaction via a hierarchical attention process. Unlike previous joint learning paradigms that leverage the duality of question generation and question answering at data level, we solve such dual tasks at the architecture level by mirroring the network structure and partially sharing components at different layers. This enables the knowledge to be transferred from one task to another, helping the model to find a general representation for each modality. The evaluation on four public datasets shows that our dual-learning model outperforms the mono-learning counterpart as well as the state-of-the-art joint models on both question answering and question generation tasks.", "title": "" }, { "docid": "caa41494c6e6dc8788da6d2041084188", "text": "In this paper the coverage and capacity of SigFox, LoRa, GPRS, and NB-IoT is compared using a real site deployment covering 8000 km2 in Northern Denmark. Using the existing Telenor cellular site grid it is shown that the four technologies have more than 99 % outdoor coverage, while GPRS is challenged for indoor coverage. Furthermore, the study analyzes the capacity of the four technologies assuming a traffic growth from 1 to 10 IoT device per user. The conclusion is that the 95 %-tile uplink failure rate for outdoor users is below 5 % for all technologies. For indoor users only NB-IoT provides uplink and downlink connectivity with less than 5 % failure rate, while SigFox is able to provide an unacknowledged uplink data service with about 12 % failure rate. Both GPRS and LoRa struggle to provide sufficient indoor coverage and capacity.", "title": "" }, { "docid": "5dc78e62ca88a6a5f253417093e2aa4d", "text": "This paper surveys the scientific and trade literature on cybersecurity for unmanned aerial vehicles (UAV), concentrating on actual and simulated attacks, and the implications for small UAVs. The review is motivated by the increasing use of small UAVs for inspecting critical infrastructures such as the electric utility transmission and distribution grid, which could be a target for terrorism. The paper presents a modified taxonomy to organize cyber attacks on UAVs and exploiting threats by Attack Vector and Target. It shows that, by Attack Vector, there has been one physical attack and ten remote attacks. 
By Target, there have been six attacks on GPS (two jamming, four spoofing), two attacks on the control communications stream (a deauthentication attack and a zero-day vulnerabilities attack), and two attacks on data communications stream (two intercepting the data feed, zero executing a video replay attack). The paper also divides and discusses the findings by large or small UAVs, over or under 25 kg, but concentrates on small UAVs. The survey concludes that UAV-related research to counter cybersecurity threats focuses on GPS Jamming and Spoofing, but ignores attacks on the controls and data communications stream. The gap in research on attacks on the data communications stream is concerning, as an operator can see a UAV flying off course due to a control stream attack but has no way of detecting a video replay attack (substitution of a video feed).", "title": "" }, { "docid": "34138dce207c3ce702d6554d27c3c1e3", "text": "Fraud detection is of great importance to financial institutions. This paper is concerned with the problem of finding outliers in time series financial data using Peer Group Analysis (PGA), which is an unsupervised technique for fraud detection. The objective of PGA is to characterize the expected pattern of behavior around the target sequence in terms of the behavior of similar objects, and then to detect any difference in evolution between the expected pattern and the target. The tool has been applied to the stock market data, which has been collected from Bangladesh Stock Exchange to assess its performance in stock fraud detection. We observed PGA can detect those brokers who suddenly start selling the stock in a different way to other brokers to whom they were previously similar. We also applied t-statistics to find the deviations effectively.", "title": "" }, { "docid": "065740786a7fcb2e63df4103ea0ede59", "text": "Accumulating glycine betaine through the ButA transport system from an exogenous supply is a survival strategy employed by Tetragenococcus halophilus, a moderate halophilic lactic acid bacterium with crucial role in flavor formation of high-salt food fermentation, to achieve cellular protection. In this study, we firstly confirmed that butA expression was up-regulated under salt stress conditions by quantitative reverse transcription polymerase chain reaction (qRT-PCR). Subsequently, we discovered that recombinant Escherichia coli MKH13 strains with single- and double-copy butA complete expression box(es) showed typical growth curves while they differed in their salt adaption and tolerance. Meanwhile, high-performance liquid chromatography (HPLC) experiments confirmed results obtained from growth curves. In summary, our results indicated that regulation of butA expression was salt-induced and double-copy butA cassettes entrusted a higher ability of salt adaption and tolerance to E. coli MKH13, which implied the potential of muti-copies of butA gene in the genetic modification of T. halophilus for improvement of salt tolerance and better industrial application.", "title": "" }, { "docid": "57e70bca420ca75412758ef8591c99ab", "text": "We present graph partition neural networks (GPNN), an extension of graph neural networks (GNNs) able to handle extremely large graphs. GPNNs alternate between locally propagating information between nodes in small subgraphs and globally propagating information between the subgraphs. 
To efficiently partition graphs, we experiment with spectral partitioning and also propose a modified multi-seed flood fill for fast processing of large scale graphs. We extensively test our model on a variety of semi-supervised node classification tasks. Experimental results indicate that GPNNs are either superior or comparable to state-of-the-art methods on a wide variety of datasets for graph-based semi-supervised classification. We also show that GPNNs can achieve similar performance as standard GNNs with fewer propagation steps.", "title": "" }, { "docid": "21b07dc04d9d964346748eafe3bcfc24", "text": "Online social data like user-generated content, expressed or implicit relations among people, and behavioral traces are at the core of many popular web applications and platforms, driving the research agenda of researchers in both academia and industry. The promises of social data are many, including the understanding of \"what the world thinks»» about a social issue, brand, product, celebrity, or other entity, as well as enabling better decision-making in a variety of fields including public policy, healthcare, and economics. However, many academics and practitioners are increasingly warning against the naive usage of social data. They highlight that there are biases and inaccuracies occurring at the source of the data, but also introduced during data processing pipeline; there are methodological limitations and pitfalls, as well as ethical boundaries and unexpected outcomes that are often overlooked. Such an overlook can lead to wrong or inappropriate results that can be consequential.", "title": "" }, { "docid": "0209627cd57745dc5c06dc5ff9723352", "text": "The cloud computing provides on demand services over the Internet with the help of a large amount of virtual storage. The main features of cloud computing is that the user does not have any setup of expensive computing infrastructure and the cost of its services is less. In the recent years, cloud computing integrates with the industry and many other areas, which has been encouraging the researcher to research on new related technologies. Due to the availability of its services & scalability for computing processes individual users and organizations transfer their application, data and services to the cloud storage server. Regardless of its advantages, the transformation of local computing to remote computing has brought many security issues and challenges for both consumer and provider. Many cloud services are provided by the trusted third party which arises new security threats. The cloud provider provides its services through the Internet and uses many web technologies that arise new security issues. This paper discussed about the basic features of the cloud computing, security issues, threats and their solutions. Additionally, the paper describes several key topics related to the cloud, namely cloud architecture framework, service and deployment model, cloud technologies, cloud security concepts, threats, and attacks. The paper also discusses a lot of open research issues related to the cloud security. Keywords—Cloud Computing, Cloud Framework, Cloud Security, Cloud Security Challenges, Cloud Security Issues", "title": "" } ]
scidocsrr
839b9c8780958004fb736dceff1b0f98
Macroeconomic Risk and Debt Overhang
[ { "docid": "64bb4d9db995a2908a6eaea727728121", "text": "We present a capital budgeting valuation framework that takes into account both personal and corporate taxation. This has implications even for all-equity-financed projects. It is also important when the firm or project is partially financed by debt, of course. The setting is a Miller equilibrium economy with differential taxation of debt and equity income that is generalized to allow cross-sectional variation in corporate tax rates. We show broad circumstances under which taxes do not affect the martingale operator (the martingale operator is the same before and after personal taxes, which we call “valuation neutrality”) and in which there are no tax-timing options. One implication of this is that the appropriate discount rate for riskless equity-financed flows (martingale expectations or certaintyequivalents) is an equity rate that differs from the riskless debt rate by a tax wedge. This tax wedge factor is the after-tax retention rate for the corporate tax rate that corresponds to tax neutrality in the Miller equilibrium. We then extend this result to the valuation of the interest tax shield when the firm has an exogenous debt policy, where the debt may or may not have default risk. Interest tax shields accrue at a net rate corresponding to the difference between the corporate tax rate that will be faced by the project and the Miller equilibrium tax rate. Depending on the financing system, interest tax shields can be incorporated by using a tax-adjusted discount rate or by implementing an APV-like approach with additive interest tax shields. We also analyze the effect of uncertainty and debt financing on the value of investment real options and on the exercise policy, including the effect of default risk. For low uncertainty, a rise in leverage reduces the time value of the real option and increases the probability of being exercised. This last effect on the exercise policy is completely offset when the firm is close to default (i.e, a high coupon). In this situation, more debt or more uncertainty reduces the probability of investing.", "title": "" } ]
[ { "docid": "523fae58b0da2d96c2b3b126480d8302", "text": "Many online shopping malls in which explicit rating information is not available still have difficulty in providing recommendation services using collaborative filtering (CF) techniques for their users. Applying temporal purchase patterns derived from sequential pattern analysis (SPA) for recommendation services also often makes users unhappy with the inaccurate and biased results obtained by not considering individual preferences. The objective of this research is twofold. One is to derive implicit ratings so that CF can be applied to online transaction data even when no explicit rating information is available, and the other is to integrate CF and SPA for improving recommendation quality. Based on the results of several experiments that we conducted to compare the performance between ours and others, we contend that implicit rating can successfully replace explicit rating in CF and that the hybrid approach of CF and SPA is better than the individual ones. 2012 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "679df1dc4a2317e56fcdb2122460dd73", "text": "Over the past five years, large-scale storage installations have required fault-protection beyond RAID-5, leading to a flurry of research on and development of erasure codes for multiple disk failures. Numerous open-source implementations of various coding techniques are available to the general public. In this paper, we perform a head-to-head comparison of these implementations in encoding and decoding scenarios. Our goals are to compare codes and implementations, to discern whether theory matches practice, and to demonstrate how parameter selection, especially as it concerns memory, has a significant impact on a code’s performance. Additional benefits are to give storage system designers an idea of what to expect in terms of coding performance when designing their storage systems, and to identify the places where further erasure coding research can have the most impact.", "title": "" }, { "docid": "b393e4057c495785a7cc56263dcba943", "text": "Poisson denoising is an essential issue for various imaging applications, such as night vision, medical imaging and microscopy. State-of-the-art approaches are clearly dominated by patch-based non-local methods in recent years. In this paper, we aim to propose a local Poisson denoising model with both structure simplicity and good performance. To this end, we consider a variational modeling to integrate the so-called Fields of Experts (FoE) image prior, that has proven an effective higher-order Markov Random Fields (MRF) model for many classic image restoration problems. We exploit several feasible variational variants for this task. We start with a direct modeling in the original image domain by taking into account the Poisson noise statistics, which performs generally well for the cases of high SNR. However, this strategy encounters problem in cases of low SNR. Then we turn to an alternative modeling strategy by using the Anscombe transform and Gaussian statistics derived data term. We retrain the FoE prior model directly in the transform domain. With the newly trained FoE model, we end up with a local variational model providing strongly competitive results against state-of-the-art non-local approaches, meanwhile bearing the property of simple structure. Furthermore, our proposed model comes along with an additional advantage, that the inference is very efficient as it is well-suited for parallel computation on GPUs. 
For images of size 512× 512, our GPU implementation takes less than 1 second to produce state-of-the-art Poisson denoising performance.", "title": "" }, { "docid": "fa7416bd48a3f4b5edbbcefadc74f72d", "text": "This paper introduces a meaning representation for spoken language understanding. The Alexa meaning representation language (AMRL), unlike previous approaches, which factor spoken utterances into domains, provides a common representation for how people communicate in spoken language. AMRL is a rooted graph, links to a large-scale ontology, supports cross-domain queries, finegrained types, complex utterances and composition. A spoken language dataset has been collected for Alexa, which contains ∼ 20k examples across eight domains. A version of this meaning representation was released to developers at a trade show in 2016.", "title": "" }, { "docid": "b3e9c251b2da6c704da6285602773afe", "text": "It has been well established that most operating system crashes are due to bugs in device drivers. Because drivers are normally linked into the kernel address space, a buggy driver can wipe out kernel tables and bring the system crashing to a halt. We have greatly mitigated this problem by reducing the kernel to an absolute minimum and running each driver as a separate, unprivileged process in user space. In addition, we implemented a POSIX-conformant operating system as multiple user-mode processes. In this design, all that is left in kernel mode is a tiny kernel of under 3800 lines of executable code for catching interrupts, starting and stopping processes, and doing IPC. By moving nearly the entire operating system to multiple, protected user-mode processes we reduce the consequences of faults, since a driver failure no longer is fatal and does not require rebooting the computer. In fact, our system incorporates a reincarnation server that is designed to deal with such errors and often allows for full recovery, transparent to the application and without loss of data. To achieve maximum reliability, our design was guided by simplicity, modularity, least authorization, and fault tolerance. This paper discusses our lightweight approach and reports on its performance and reliability. It also compares our design to other proposals for protecting drivers using kernel wrapping and virtual machines.", "title": "" }, { "docid": "4f3936b753abd2265d867c0937aec24c", "text": "A weighted constraint satisfaction problem (WCSP) is a constraint satisfaction problem in which preferences among solutions can be expressed. Bucket elimination is a complete technique commonly used to solve this kind of constraint satisfaction problem. When the memory required to apply bucket elimination is too high, a heuristic method based on it (denominated mini-buckets) can be used to calculate bounds for the optimal solution. Nevertheless, the curse of dimensionality makes these techniques impractical on large scale problems. In response to this situation, we present a memetic algorithm for WCSPs in which bucket elimination is used as a mechanism for recombining solutions, providing the best possible child from the parental set. Subsequently, a multi-level model in which this exact/metaheuristic hybrid is further hybridized with branch-and-bound techniques and mini-buckets is studied. 
As a case study, we have applied these algorithms to the resolution of the maximum density still life problem, a hard constraint optimization problem based on Conway’s game of life. The resulting algorithm consistently finds optimal patterns for up to date solved instances in less time than current approaches. Moreover, it is shown that this proposal provides new best known solutions for very large instances.", "title": "" }, { "docid": "a4099a526548c6d00a91ea21b9f2291d", "text": "The robust principal component analysis (robust PCA) problem has been considered in many machine learning applications, where the goal is to decompose the data matrix to a low rank part plus a sparse residual. While current approaches are developed by only considering the low rank plus sparse structure, in many applications, side information of row and/or column entities may also be given, and it is still unclear to what extent could such information help robust PCA. Thus, in this paper, we study the problem of robust PCA with side information, where both prior structure and features of entities are exploited for recovery. We propose a convex problem to incorporate side information in robust PCA and show that the low rank matrix can be exactly recovered via the proposed method under certain conditions. In particular, our guarantee suggests that a substantial amount of low rank matrices, which cannot be recovered by standard robust PCA, become recoverable by our proposed method. The result theoretically justifies the effectiveness of features in robust PCA. In addition, we conduct synthetic experiments as well as a real application on noisy image classification to show that our method also improves the performance in practice by exploiting side information.", "title": "" }, { "docid": "a2b858e253a2f5075ae294e52c0b3bb7", "text": "Learning and evolution are two fundamental forms of adaptation. There has been a great interest in combining learning and evolution with artificial neural networks (ANN’s) in recent years. This paper: 1) reviews different combinations between ANN’s and evolutionary algorithms (EA’s), including using EA’s to evolve ANN connection weights, architectures, learning rules, and input features; 2) discusses different search operators which have been used in various EA’s; and 3) points out possible future research directions. It is shown, through a considerably large literature review, that combinations between ANN’s and EA’s can lead to significantly better intelligent systems than relying on ANN’s or EA’s alone.", "title": "" }, { "docid": "93df984beae6626b70d954792f6c012e", "text": "We show that for any ε > 0, a maximum-weight triangle in an undirected graph with <i>n</i> vertices and real weights assigned to vertices can be found in time O(<i>n</i>ω + <i>n</i><sup>2+ε</sup>), where ω is the exponent of fastest matrix multiplication algorithm. By the currently best bound on ω, the running time of our algorithm is O(<i>n</i><sup>2.376</sup>). Our algorithm substantially improves the previous time-bounds for this problem recently established by Vassilevska et al. (STOC 2006, O(<i>n</i><sup>2.688</sup>)) and (ICALP 2006, O(<i>n</i><sup>2.575</sup>)). 
Its asymptotic time complexity matches that of the fastest known algorithm for finding <i>a</i> triangle (not necessarily a maximum-weight one) in a graph.\n By applying or extending our algorithm, we can also improve the upper bounds on finding a maximum-weight triangle in a sparse graph and on finding a maximum-weight subgraph isomorphic to a fixed graph established in the papers by Vassilevska et al. For example, we can find a maximum-weight triangle in a vertex-weighted graph with <i>m</i> edges in asymptotic time required by the fastest algorithm for finding <i>any</i> triangle in a graph with <i>m</i> edges, i.e., in time O(<i>m</i><sup>1.41</sup>).", "title": "" }, { "docid": "fb25a736466dad9acb3ad5c9c1baab7b", "text": "Aquaporin-3 (AQP3) is a water channel expressed at the basolateral plasma membrane of kidney collecting-duct epithelial cells. The mouse AQP3 cDNA was isolated and encodes a 292-amino acid water/glycerol-transporting glycoprotein expressed in kidney, large airways, eye, urinary bladder, skin, and gastrointestinal tract. The mouse AQP3 gene was analyzed, and AQP3 null mice were generated by targeted gene disruption. The growth and phenotype of AQP3 null mice were grossly normal except for polyuria. AQP3 deletion had little effect on AQP1 or AQP4 protein expression but decreased AQP2 protein expression particularly in renal cortex. Fluid consumption in AQP3 null mice was more than 10-fold greater than that in wild-type litter mates, and urine osmolality (<275 milliosmol) was much lower than in wild-type mice (>1,200 milliosmol). After 1-desamino-8-d-arginine-vasopressin administration or water deprivation, the AQP3 null mice were able to concentrate their urine partially to approximately 30% of that in wild-type mice. Osmotic water permeability of cortical collecting-duct basolateral membrane, measured by a spatial filtering optics method, was >3-fold reduced by AQP3 deletion. To test the hypothesis that the residual concentrating ability of AQP3 null mice was due to the inner medullary collecting-duct water channel AQP4, AQP3/AQP4 double-knockout mice were generated. The double-knockout mice had greater impairment of urinary-concentrating ability than did the AQP3 single-knockout mice. Our findings establish a form of nephrogenic diabetes insipidus produced by impaired water permeability in collecting-duct basolateral membrane. Basolateral membrane aquaporins may thus provide blood-accessible targets for drug discovery of aquaretic inhibitors.", "title": "" }, { "docid": "5d05addd1cac2ea4ca5008950a21bd06", "text": "We propose a general purpose variational inference algorithm that forms a natural counterpart of gradient descent for optimization. Our method iteratively transports a set of particles to match the target distribution, by applying a form of functional gradient descent that minimizes the KL divergence. Empirical studies are performed on various real world models and datasets, on which our method is competitive with existing state-of-the-art methods. The derivation of our method is based on a new theoretical result that connects the derivative of KL divergence under smooth transforms with Stein’s identity and a recently proposed kernelized Stein discrepancy, which is of independent interest.", "title": "" }, { "docid": "9d316fae0354f3eb28540ea013b4f8a4", "text": "Natural language makes considerable use of recurrent formulaic patterns of words. 
This article triangulates the construct of formula from corpus linguistic, psycholinguistic, and educational perspectives. It describes the corpus linguistic extraction of pedagogically useful formulaic sequences for academic speech and writing. It determines English as a second language (ESL) and English for academic purposes (EAP) instructors’ evaluations of their pedagogical importance. It summarizes three experiments which show that different aspects of formulaicity affect the accuracy and fluency of processing of these formulas in native speakers and in advanced L2 learners of English. The language processing tasks were selected to sample an ecologically valid range of language processing skills: spoken and written, production and comprehension. Processing in all experiments was affected by various corpus-derived metrics: length, frequency, and mutual information (MI), but to different degrees in the different populations. For native speakers, it is predominantly the MI of the formula which determines processability; for nonnative learners of the language, it is predominantly the frequency of the formula. The implications of these findings are discussed for (a) the psycholinguistic validity of corpus-derived formulas, (b) a model of their acquisition, (c) ESL and EAP instruction and the prioritization of which formulas to teach.", "title": "" }, { "docid": "437dceea204950d6c604b167660c0223", "text": "Computer memory systems are increasingly a bottleneck limiting application performance. IRAM architectures, which integrate a CPU with DRAM main memory on a single chip, promise to remove this limitation by providing tremendous main memory bandwidth and significant reductions in memory latency. To determine whether existing microarchitectures can tap the potential performance advantages of IRAM systems, we examined both execution time analyses of existing microprocessors and system simulation of hypothetical processors. Our results indicate that, for current benchmarks, existing architectures, whether simple, superscalar or out-of-order, are unable to exploit IRAM’s increased memory bandwidth and decreased memory latency to achieve significant performance benefits.", "title": "" }, { "docid": "0c4a57f3b0defd307b1b7b6f22404d5a", "text": "We present a novel wavelet method for the simulation of fluids at high spatial resolution. The algorithm enables large- and small-scale detail to be edited separately, allowing high-resolution detail to be added as a post-processing step. Instead of solving the Navier-Stokes equations over a highly refined mesh, we use the wavelet decomposition of a low-resolution simulation to determine the location and energy characteristics of missing high-frequency components. We then synthesize these missing components using a novel incompressible turbulence function, and provide a method to maintain the temporal coherence of the resulting structures. There is no linear system to solve, so the method parallelizes trivially and requires only a few auxiliary arrays. The method guarantees that the new frequencies will not interfere with existing frequencies, allowing animators to set up a low resolution simulation quickly and later add details without changing the overall fluid motion.", "title": "" }, { "docid": "0dc1119bf47ffa6d032c464a54d5d173", "text": "The use of an analogy from a semantically distant domain to guide the problemsolving process was investigated. 
The representation of analogy in memory and processes involved in the use of analogies were discussed theoretically and explored in five experiments. In Experiment I oral protocols were used to examine the processes involved in solving a problem by analogy. In all experiments subjects who first read a story about a military problem and its solution tended to generate analogous solutions to a medical problem (Duncker’s “radiation problem”), provided they were given a hint to use the story to help solve the problem. Transfer frequency was reduced when the problem presented in the military story was substantially disanalogous to the radiation problem, even though the solution illustrated in the story corresponded to an effective radiation solution (Experiment II). Subjects in Experiment III tended to generate analogous solutions to the radiation problem after providing their own solutions to the military problem. Subjects were able to retrieve the story from memory and use it to generate an analogous solution, even when the critical story had been memorized in the context of two distractor stories (Experiment IV). However, when no hint to consider the story was given, frequency of analogous solutions decreased markedly. This decrease in transfer occurred when the story analogy was presented in a recall task along with distractor stories (Experiment IV), when it was presented alone, and when it was presented in between two attempts to solve the problem (Experiment V). Component processes and strategic variations in analogical problem solving were discussed. Issues related to noticing analogies and accessing them in memory were also examined, as was the relationship of analogical reasoning to other cognitive tasks.", "title": "" }, { "docid": "8ee5a9dde6f919637618787f6ffcc777", "text": "Microbial infection initiates complex interactions between the pathogen and the host. Pathogens express several signature molecules, known as pathogen-associated molecular patterns (PAMPs), which are essential for survival and pathogenicity. PAMPs are sensed by evolutionarily conserved, germline-encoded host sensors known as pathogen recognition receptors (PRRs). Recognition of PAMPs by PRRs rapidly triggers an array of anti-microbial immune responses through the induction of various inflammatory cytokines, chemokines and type I interferons. These responses also initiate the development of pathogen-specific, long-lasting adaptive immunity through B and T lymphocytes. Several families of PRRs, including Toll-like receptors (TLRs), RIG-I-like receptors (RLRs), NOD-like receptors (NLRs), and DNA receptors (cytosolic sensors for DNA), are known to play a crucial role in host defense. In this review, we comprehensively review the recent progress in the field of PAMP recognition by PRRs and the signaling pathways activated by PRRs.", "title": "" }, { "docid": "6ac90c81aaf243bb611c55a4daa59b61", "text": "Parallel corpora are crucial for training SMT systems. However, for many language pairs they are available only in very limited quantities. For these language pairs a huge portion of phrases encountered at run-time will be unknown. We show how techniques from paraphrasing can be used to deal with these otherwise unknown source language phrases. Our results show that augmenting a stateof-the-art SMT system with paraphrases leads to significantly improved coverage and translation quality. 
For a training corpus with 10,000 sentence pairs we increase the coverage of unique test set unigrams from 48% to 90%, with more than half of the newly covered items accurately translated, as opposed to none in current approaches.", "title": "" }, { "docid": "f8d06c65acdbec0a41fe49fc4e7aef09", "text": "We present an exhaustive review of research on automatic classification of sounds from musical instruments. Two different but complementary approaches are examined, the perceptual approach and the taxonomic approach. The former is targeted to derive perceptual similarity functions in order to use them for timbre clustering and for searching and retrieving sounds by timbral similarity. The latter is targeted to derive indexes for labeling sounds after cultureor user-biased taxonomies. We review the relevant features that have been used in the two areas and then we present and discuss different techniques for similarity-based clustering of sounds and for classification into pre-defined instrumental categories.", "title": "" }, { "docid": "4cec6136b607b3c28ff6f2e5abf04153", "text": "| 3", "title": "" }, { "docid": "c81fb61f8c12dfe3bb88d417d9ec645a", "text": "Existing timeline generation systems for complex events consider only information from traditional media, ignoring the rich social context provided by user-generated content that reveals representative public interests or insightful opinions. We instead aim to generate socially-informed timelines that contain both news article summaries and selected user comments. We present an optimization framework designed to balance topical cohesion between the article and comment summaries along with their informativeness and coverage of the event. Automatic evaluations on real-world datasets that cover four complex events show that our system produces more informative timelines than state-of-theart systems. In human evaluation, the associated comment summaries are furthermore rated more insightful than editor’s picks and comments ranked highly by users.", "title": "" } ]
scidocsrr
12704f2210f49fe16b0c3f65d60198be
Vehicle Logo Recognition System Based on Convolutional Neural Networks With a Pretraining Strategy
[ { "docid": "9d72142cce4e27443c3f2ca471dbad41", "text": "Building visual recognition models that adapt across different domains is a challenging task for computer vision. While feature-learning machines in the form of hierarchial feed-forward models (e.g., convolutional neural networks) showed promise in this direction, they are still difficult to train especially when few training examples are available. In this paper, we present a framework for training hierarchical feed-forward models for visual recognition, using transfer learning from pseudo tasks. These pseudo tasks are automatically constructed from data without supervision and comprise a set of simple pattern-matching operations. We show that these pseudo tasks induce an informative inverse-Wishart prior on the functional behavior of the network, offering an effective way to incorporate useful prior knowledge into the network training. In addition to being extremely simple to implement, and adaptable across different domains with little or no extra tuning, our approach achieves promising results on challenging visual recognition tasks, including object recognition, gender recognition, and ethnicity recognition.", "title": "" }, { "docid": "19f08f2e9dd22bb2779ded2ad9cd19d4", "text": "In this paper, a new algorithm for Vehicle Logo Recognition is proposed, on the basis of an enhanced Scale Invariant Feature Transform (Merge-SIFT or M-SIFT). The algorithm is assessed on a set of 1500 logo images that belong to 10 distinctive vehicle manufacturers. A series of experiments are conducted, splitting the 1500 images to a training set (database) and to a testing set (query). It is shown that the MSIFT approach, which is proposed in this paper, boosts the recognition accuracy compared to the standard SIFT method. The reported results indicate an average of 94.6% true recognition rate in vehicle logos, while the processing time remains low (~0.8sec).", "title": "" } ]
[ { "docid": "71c0325b85044c55dcf608449f13a05a", "text": "Microorganisms are a promising source of an enormous number of natural products, which have made significant contribution to almost each sphere of human, plant and veterinary life. Natural compounds obtained from microorganisms have proved their value in nutrition, agriculture and healthcare. Primary metabolites, such as amino acids, enzymes, vitamins, organic acids and alcohol are used as nutritional supplements as well as in the production of industrial commodities through biotransformation. Whereas, secondary metabolites are organic compounds that are largely obtained by extraction from plants or tissues. They are primarily used in the biopharmaceutical industry due to their capability to reduce infectious diseases in human beings and animals and thus increase the life expectancy. Additionally, microorganisms and their products inevitably play a significant role in sustainable agriculture development.", "title": "" }, { "docid": "de22ed244b7b2c5fb9da0981ec1b9852", "text": "Knowledge graph embedding aims to construct a low-dimensional and continuous space, which is able to describe the semantics of high-dimensional and sparse knowledge graphs. Among existing solutions, translation models have drawn much attention lately, which use a relation vector to translate the head entity vector, the result of which is close to the tail entity vector. Compared with classical embedding methods, translation models achieve the state-of-the-art performance; nonetheless, the rationale and mechanism behind them still aspire after understanding and investigation. In this connection, we quest into the essence of translation models, and present a generic model, namely, GTrans, to entail all the existing translation models. In GTrans, each entity is interpreted by a combination of two states—eigenstate and mimesis. Eigenstate represents the features that an entity intrinsically owns, and mimesis expresses the features that are affected by associated relations. The weighting of the two states can be tuned, and hence, dynamic and static weighting strategies are put forward to best describe entities in the problem domain. Besides, GTrans incorporates a dynamic relation space for each relation, which not only enables the flexibility of our model but also reduces the noise from other relation spaces. In experiments, we evaluate our proposed model with two benchmark tasks—triplets classification and link prediction. Experiment results witness significant and consistent performance gain that is offered by GTrans over existing alternatives.", "title": "" }, { "docid": "fc289c7a9f08ff3f5dd41ae683ab77b3", "text": "Approximate Newton methods are standard optimization tools which aim to maintain the benefits of Newton’s method, such as a fast rate of convergence, while alleviating its drawbacks, such as computationally expensive calculation or estimation of the inverse Hessian. In this work we investigate approximate Newton methods for policy optimization in Markov decision processes (MDPs). We first analyse the structure of the Hessian of the total expected reward, which is a standard objective function for MDPs. We show that, like the gradient, the Hessian exhibits useful structure in the context of MDPs and we use this analysis to motivate two Gauss-Newton methods for MDPs. Like the Gauss-Newton method for non-linear least squares, these methods drop certain terms in the Hessian. 
The approximate Hessians possess desirable properties, such as negative definiteness, and we demonstrate several important performance guarantees including guaranteed ascent directions, invariance to affine transformation of the parameter space and convergence guarantees. We finally provide a unifying perspective of key policy search algorithms, demonstrating that our second Gauss-Newton algorithm is closely related to both the EMalgorithm and natural gradient ascent applied to MDPs, but performs significantly better in practice on a range of challenging domains.", "title": "" }, { "docid": "3c203c55c925fb3f78506d46b8b453a8", "text": "In this paper, we provide combinatorial interpretations for some determinantal identities involving Fibonacci numbers. We use the method due to Lindström-Gessel-Viennot in which we count nonintersecting n-routes in carefully chosen digraphs in order to gain insight into the nature of some well-known determinantal identities while allowing room to generalize and discover new ones.", "title": "" }, { "docid": "f57b49cef2e90b8d8029dafaf59973a3", "text": "Logic emerged as the discipline of reasoning and its syllogistic fragment investigates one of the most fundamental aspect of human reasoning. However, empirical studies have shown that human inference differs from what is characterized by traditional logical validity. In order to better characterize the patterns of human reasoning, psychologists and philosophers have proposed a number of theories of syllogistic reasoning. We contribute to this endeavor by proposing a model based on natural logic with empirically weighted inference rules. Following the mental logic tradition, our basic assumptions are, firstly, natural language sentences are the mental representation of reasoning; secondly, inference rules are among the basic mental operations of reasoning; thirdly, subjects make guesses that depend on a few heuristics. We implemented the model and trained it with the experimental data. The model was able to make around 95% correct predictions and, as far as we can see from the data we have access to, it outperformed all other syllogistic theories. We further discuss the psychological plausibility of the model and the possibilities of extending the model to cover larger fragments of natural language.", "title": "" }, { "docid": "632e8ff3f6a13ec1bde0c4fa04a816b0", "text": "Computer science is expanding into K12 education and numerous educational games and systems have been created to teach programming skills, including many block-based programming environments. Teaching computational thinking has received particular attention, and more research is needed on using educational games to directly teach computational thinking skills. We propose to investigate this using Dragon Architect, an educational block-based programming game we are developing. Specifically, we wish to study ways of directly teaching computational thinking strategies such as divide and conquer in an educational game, as well as ways to evaluate our approaches.", "title": "" }, { "docid": "d4aca467d0014b2c2359f5609a1a199b", "text": "MATLAB is specifically designed for simulating dynamic systems. This paper describes a method of modelling impulse voltage generator using Simulink, an extension of MATLAB. The equations for modelling have been developed and a corresponding Simulink model has been constructed. 
It shows that Simulink program becomes very useful in studying the effect of parameter changes in the design to obtain the desired impulse voltages and waveshapes from an impulse generator.", "title": "" }, { "docid": "a607a1760d81fdedc53f45f0994d903c", "text": "Common visual codebook generation methods used in a Bag of Visual words model, e.g. k-means or Gaussian Mixture Model, use the Euclidean distance to cluster features into visual code words. However, most popular visual descriptors are histograms of image measurements. It has been shown that the Histogram Intersection Kernel (HIK) is more effective than the Euclidean distance in supervised learning tasks with histogram features. In this paper, we demonstrate that HIK can also be used in an unsupervised manner to significantly improve the generation of visual codebooks. We propose a histogram kernel k-means algorithm which is easy to implement and runs almost as fast as k-means. The HIK codebook has consistently higher recognition accuracy over k-means codebooks by 2–4%. In addition, we propose a one-class SVM formulation to create more effective visual code words which can achieve even higher accuracy. The proposed method has established new state-of-the-art performance numbers for 3 popular benchmark datasets on object and scene recognition. In addition, we show that the standard k-median clustering method can be used for visual codebook generation and can act as a compromise between HIK and k-means approaches.", "title": "" }, { "docid": "5928efbaaa1ec64bfaab575f1bce6bd5", "text": "Stochastic neural net weights are used in a variety of contexts, including regularization, Bayesian neural nets, exploration in reinforcement learning, and evolution strategies. Unfortunately, due to the large number of weights, all the examples in a mini-batch typically share the same weight perturbation, thereby limiting the variance reduction effect of large mini-batches. We introduce flipout, an efficient method for decorrelating the gradients within a mini-batch by implicitly sampling pseudo-independent weight perturbations for each example. Empirically, flipout achieves the ideal linear variance reduction for fully connected networks, convolutional networks, and RNNs. We find significant speedups in training neural networks with multiplicative Gaussian perturbations. We show that flipout is effective at regularizing LSTMs, and outperforms previous methods. Flipout also enables us to vectorize evolution strategies: in our experiments, a single GPU with flipout can handle the same throughput as at least 40 CPU cores using existing methods, equivalent to a factor-of-4 cost reduction on Amazon Web Services.", "title": "" }, { "docid": "49f0d1d748d1fbfb289d6af8451c16a5", "text": "Natural language processing employs computational techniques for the purpose of learning, understanding, and producing human language content. Early computational approaches to language research focused on automating the analysis of the linguistic structure of language and developing basic technologies such as machine translation, speech recognition, and speech synthesis. Today’s researchers refine and make use of such tools in real-world applications, creating spoken dialogue systems and speech-to-speech translation engines, mining social media for information about health or finance, and identifying sentiment and emotion toward products and services. 
We describe successes and challenges in this rapidly advancing area.", "title": "" }, { "docid": "d33b2e5883b14ac771cf128d309eddbf", "text": "Automated lip reading is the process of converting movements of the lips, face and tongue to speech in real time with enhanced accuracy. Although performance of lip reading systems is still not remotely similar to audio speech recognition, recent developments in processor technology and the massive explosion and ubiquity of computing devices accompanied with increased research in this field has reduced the ambiguities of the labial language, making it possible for free speech-to-text conversion. This paper surveys the field of lip reading and provides a detailed discussion of the trade-offs between various approaches. It gives a reverse chronological topic wise listing of the developments in lip reading systems in recent years. With advancement in computer vision and pattern recognition tools, the efficacy of real time, effective conversion has increased. The major goal of this paper is to provide a comprehensive reference source for the researchers involved in lip reading, not just for the esoteric academia but all the people interested in this field regardless of particular application areas.", "title": "" }, { "docid": "72c054c955a34fbac8e798665ece8f57", "text": "In this paper, we propose and empirically validate a suite of hotspot patterns: recurring architecture problems that occur in most complex systems and incur high maintenance costs. In particular, we introduce two novel hotspot patterns, Unstable Interface and Implicit Cross-module Dependency. These patterns are defined based on Baldwin and Clark's design rule theory, and detected by the combination of history and architecture information. Through our tool-supported evaluations, we show that these patterns not only identify the most error-prone and change-prone files, they also pinpoint specific architecture problems that may be the root causes of bug-proneness and change-proneness. Significantly, we show that 1) these structure-history integrated patterns contribute more to error- and change-proneness than other hotspot patterns, and 2) the more hotspot patterns a file is involved in, the more error- and change-prone it is. Finally, we report on an industrial case study to demonstrate the practicality of these hotspot patterns. The architect and developers confirmed that our hotspot detector discovered the majority of the architecture problems causing maintenance pain, and they have started to improve the system's maintainability by refactoring and fixing the identified architecture issues.", "title": "" }, { "docid": "917ab22adee174259bef5171fe6f14fb", "text": "The manner in which quadrupeds change their locomotive patterns—walking, trotting, and galloping—with changing speed is poorly understood. In this paper, we provide evidence for interlimb coordination during gait transitions using a quadruped robot for which coordination between the legs can be self-organized through a simple “central pattern generator” (CPG) model. We demonstrate spontaneous gait transitions between energy-efficient patterns by changing only the parameter related to speed. Interlimb coordination was achieved with the use of local load sensing only without any preprogrammed patterns. 
Our model exploits physical communication through the body, suggesting that knowledge of physical communication is required to understand the leg coordination mechanism in legged animals and to establish design principles for legged robots that can reproduce flexible and efficient locomotion.", "title": "" }, { "docid": "baad68c1adef7b72d78745fe03db0c57", "text": "In this paper, we propose a new visualization approach based on a Sensitivity Analysis (SA) to extract human understandable knowledge from supervised learning black box data mining models, such as Neural Networks (NNs), Support Vector Machines (SVMs) and ensembles, including Random Forests (RFs). Five SA methods (three of which are purely new) and four measures of input importance (one novel) are presented. Also, the SA approach is adapted to handle discrete variables and to aggregate multiple sensitivity responses. Moreover, several visualizations for the SA results are introduced, such as input pair importance color matrix and variable effect characteristic surface. A wide range of experiments was performed in order to test the SA methods and measures by fitting four well-known models (NN, SVM, RF and decision trees) to synthetic datasets (five regression and five classification tasks). In addition, the visualization capabilities of the SA are demonstrated using four real-world datasets (e.g., bank direct marketing and white wine quality).", "title": "" }, { "docid": "db83ca64b54bbd54b4097df425c48017", "text": "This paper introduces the application of high-resolution angle estimation algorithms for a 77GHz automotive long range radar sensor. Highresolution direction of arrival (DOA) estimation is important for future safety systems. Using FMCW principle, major challenges discussed in this paper are small number of snapshots, correlation of the signals, and antenna mismatches. Simulation results allow analysis of these effects and help designing the sensor. Road traffic measurements show superior DOA resolution and the feasibility of high-resolution angle estimation.", "title": "" }, { "docid": "ee9cb495280dc6e252db80c23f2f8c2b", "text": "Due to the dramatical increase in popularity of mobile devices in the last decade, more sensitive user information is stored and accessed on these devices everyday. However, most existing technologies for user authentication only cover the login stage or only work in restricted controlled environments or GUIs in the post login stage. In this work, we present TIPS, a Touch based Identity Protection Service that implicitly and unobtrusively authenticates users in the background by continuously analyzing touch screen gestures in the context of a running application. To the best of our knowledge, this is the first work to incorporate contextual app information to improve user authentication. We evaluate TIPS over data collected from 23 phone owners and deployed it to 13 of them with 100 guest users. 
TIPS can achieve over 90% accuracy in real-life naturalistic conditions within a small amount of computational overhead and 6% of battery usage.", "title": "" }, { "docid": "a8b6ccee8389eb3cd2b8d28bf816a8d7", "text": "OBJECTIVE\nGeneral aviation (GA) pilot performance utilizing a mixed-modality simulated data link was objectively evaluated based on the time required in accessing, understanding, and executing data link commands. Additional subjective data were gathered on workload, situation awareness (SA), and preference.\n\n\nBACKGROUND\nResearch exploring mixed-modality data link integration to the single-pilot GA cockpit is lacking, especially with respect to potential effects on safety.\n\n\nMETHODS\nSixteen visual flight rules (VFR)-rated pilots participated in an experiment using a flight simulator equipped with a mixed-modality data link. Data link modalities were text display, synthesized speech, digitized speech, and synthesized speech/text combination. Flight conditions included VFR (unlimited ceiling and visibility) or marginal VFR flight conditions (clouds 2,800 ft above ground level, 3-mile visibility).\n\n\nRESULTS\nStatistically significant differences were found in pilot performance, mental workload, and SA across the data link modalities. Textual data link resulted in increased time and workload as compared with the three speech-type data link conditions, which did not differ. SA measures indicated higher performance with textual and digitized speech data link conditions.\n\n\nCONCLUSION\nTextual data link can be significantly enhanced for single-pilot GA operations by the addition of a speech component.\n\n\nAPPLICATION\nPotential applications include operational safety in future GA systems that incorporate data link for use by a single pilot and guidance in the development of flight performance objectives for these systems.", "title": "" }, { "docid": "5dd9c07946288d8fced7802b00d811bd", "text": "In the period 1890 to 1895, Willem Einthoven greatly improved the quality of tracings that could be directly obtained with the capillary electrometer. He then introduced an ingenious correction for the poor frequency response of these instruments, using differential equations. This method allowed him to predict the correct form of the human electrocardiogram, as subsequently revealed by the new string galvanometer that he introduced in 1902. For Einthoven, who won the Nobel Prize for the development of the electrocardiogram in 1924, one of the most rewarding aspects of the high fidelity recording of the human electrocardiogram was its validation of his earlier theoretical predictions regarding the electrical activity of the heart.", "title": "" }, { "docid": "6bcce580df9fde67c1a0a009537c8d56", "text": "Online social networks evolved into a global mainstream medium that generates an increasing social and economic impact. However, many online social networks face the question how to leverage on their fast growing popularity to achieve sustainable revenues. In that context, particularly more effective advertising strategies and sophisticated customer loyalty programs to foster users’ retention are needed. Thereby, key users in terms of users’ connectivity and communication activity play a decisive role. However, quantitative approaches for the identification of key users in online social networks merging concepts and findings from research on users’ connectivity and communication activity are missing. 
Based on the design science research paradigm, we therefore propose a novel PageRank based approach bringing together both research streams. To demonstrate its practical applicability, we use a publicly available dataset of Facebook.com. Finally, we evaluate our novel PageRank based approach in comparison to existing approaches, which could alternatively be used.", "title": "" }, { "docid": "2faf7fedadfd8b24c4740f7100cf5fec", "text": "Lacking standardized extrinsic evaluation methods for vector representations of words, the NLP community has relied heavily onword similaritytasks as a proxy for intrinsic evaluation of word vectors. Word similarity evaluation, which correlates the distance between vectors and human judgments of “semantic similarity” is attractive, because it is computationally inexpensive and fast. In this paper we present several problems associated with the evaluation of word vectors on word similarity datasets, and summarize existing solutions. Our study suggests that the use of word similarity tasks for evaluation of word vectors is not sustainable and calls for further research on evaluation methods.", "title": "" } ]
scidocsrr
bf01c728064626efac7ce3912668fdda
ETHICS, CHARACTER, AND AUTHENTIC TRANSFORMATIONAL LEADERSHIP BEHAVIOR
[ { "docid": "612e460c0f6e328d7516bfba7b674517", "text": "There is universality in the transactional-transformational leadership paradigm. That is, the same conception of phenomena and relationships can be observed in a wide range of organizations and cultures. Exceptions can be understood as a consequence of unusual attributes of the organizations or cultures. Three corollaries are discussed. Supportive evidence has been gathered in studies conducted in organizations in business, education, the military, the government, and the independent sector. Likewise, supportive evidence has been accumulated from all but 1 continent to document the applicability of the paradigm.", "title": "" } ]
[ { "docid": "a050ae6738a8c511b8942deb19155b7c", "text": "Electrocardiogram (ECG) measurement without skin-contact is essential for u-healthcare. ECG measurement using capacitive-coupled electrode (CC-electrode) is a well-known method for unconstrained ECG measurement. Although the CC-electrode has the advantage of non-contact measurement, common mode noise is increased, which decreases the signal-to-noise ratio (SNR). In this study, we proposed non-contact ECG measurement system using CC-electrode and driven circuit to reduce noise. The components of driven circuit were similar to those of driven-right-leg circuit and conductive sheet was employed for driven electrode to contact uniformly to the body over clothes. We evaluated the performance of the driven circuit under different conditions, including a contact area to the body and a gain of the driven circuit to find out a relationship between them and the SNR of ECG. As the results, as contact area became larger and gain became higher, SNR increased.", "title": "" }, { "docid": "e364a2ac82f42c87f88b6ed508dc0d8e", "text": "In order to work well, many computer vision algorithms require that their parameters be adjusted according to the image noise level, making it an important quantity to estimate. We show how to estimate an upper bound on the noise level from a single image based on a piecewise smooth image prior model and measured CCD camera response functions. We also learn the space of noise level functions how noise level changes with respect to brightness and use Bayesian MAP inference to infer the noise level function from a single image. We illustrate the utility of this noise estimation for two algorithms: edge detection and featurepreserving smoothing through bilateral filtering. For a variety of different noise levels, we obtain good results for both these algorithms with no user-specified inputs.", "title": "" }, { "docid": "1f81e5e9851b4750aac009da5ae578a1", "text": "This paper describes a method to automatically create dialogue resources annotated with dialogue act information by reusing existing dialogue corpora. Numerous dialogue corpora are available for research purposes and many of them are annotated with dialogue act information that captures the intentions encoded in user utterances. Annotated dialogue resources, however, differ in various respects: data collection settings and modalities used, dialogue task domains and scenarios (if any) underlying the collection, number and roles of dialogue participants involved and dialogue act annotation schemes applied. The presented study encompasses three phases of data-driven investigation. We, first, assess the importance of various types of features and their combinations for effective cross-domain dialogue act classification. Second, we establish the best predictive model comparing various cross-corpora training settings. Finally, we specify models adaptation procedures and explore late fusion approaches to optimize the overall classification decision taking process. The proposed methodology accounts for empirically motivated and technically sound classification procedures that may reduce annotation and training costs significantly.", "title": "" }, { "docid": "2ed2b74dd19b4cb7fbe5d6a75adfa772", "text": "Surprisingly little scientific research has been conducted on the topic of interpersonal touch over the years, despite the importance of touch in our everyday social interactions from birth through to adulthood and old age. 
In this review, we critically evaluate the results of the research on this topic that have emerged from disciplines, such as cognitive and social psychology, neuroscience, and cultural anthropology. We highlight some of the most important advances to have been made in our understanding of this topic: For example, research has shown that interpersonal tactile stimulation provides an effective means of influencing people's social behaviors (such as modulating their tendency to comply with requests, in affecting people's attitudes toward specific services, in creating bonds between couples or groups, and in strengthening romantic relationships), regardless of whether or not the tactile contact itself can be remembered explicitly. What is more, interpersonal touch can be used to communicate emotion in a manner similar to that demonstrated previously in vision and audition. The recent growth of studies investigating the potential introduction of tactile sensations to long-distance communication technologies (by means of mediated or 'virtual' touch) are also reviewed briefly. Finally, we highlight the synergistic effort that will be needed by researchers in different disciplines if we are to develop a more complete understanding of interpersonal touch in the years to come.", "title": "" }, { "docid": "d1ebf47c1f0b1d8572d526e9260dbd32", "text": "In this paper, mortality in the immediate aftermath of an earthquake is studied on a worldwide scale using multivariate analysis. A statistical method is presented that analyzes reported earthquake fatalities as a function of a heterogeneous set of parameters selected on the basis of their presumed influence on earthquake mortality. The ensemble was compiled from demographic, seismic, and reported fatality data culled from available records of past earthquakes organized in a geographic information system. The authors consider the statistical relation between earthquake mortality and the available data ensemble, analyze the validity of the results in view of the parametric uncertainties, and propose a multivariate mortality analysis prediction method. The analysis reveals that, although the highest mortality rates are expected in poorly developed rural areas, high fatality counts can result from a wide range of mortality ratios that depend on the effective population size.", "title": "" }, { "docid": "33df3da22e9a24767c68e022bb31bbe5", "text": "The credit card industry has been growing rapidly recently, and thus huge numbers of consumers’ credit data are collected by the credit department of the bank. The credit scoring manager often evaluates the consumer’s credit with intuitive experience. However, with the support of the credit classification model, the manager can accurately evaluate the applicant’s credit score. Support Vector Machine (SVM) classification is currently an active research area and successfully solves classification problems in many domains. This study used three strategies to construct the hybrid SVM-based credit scoring models to evaluate the applicant’s credit score from the applicant’s input features. Two credit datasets in UCI database are selected as the experimental data to demonstrate the accuracy of the SVM classifier. Compared with neural networks, genetic programming, and decision tree classifiers, the SVM classifier achieved an identical classificatory accuracy with relatively few input features. 
Additionally, combining genetic algorithms with SVM classifier, the proposed hybrid GA-SVM strategy can simultaneously perform feature selection task and model parameters optimization. Experimental results show that SVM is a promising addition to the existing data mining methods. 2006 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "88af2cee31243eef4e46e357b053b3ae", "text": "Domestic induction heating (IH) is currently the technology of choice in modern domestic applications due to its advantages regarding fast heating time, efficiency, and improved control. New design trends pursue the implementation of new cost-effective topologies with higher efficiency levels. In order to achieve this aim, a direct ac-ac boost resonant converter is proposed in this paper. The main features of this proposal are the improved efficiency, reduced component count, and proper output power control. A detailed analytical model leading to closed-form expressions of the main magnitudes is presented, and a converter design procedure is proposed. In addition, an experimental prototype has been designed and built to prove the expected converter performance and the accurateness of the analytical model. The experimental results are in good agreement with the analytical ones and prove the feasibility of the proposed converter for the IH application.", "title": "" }, { "docid": "707c5c55c11aac05c783929239f953dd", "text": "Social networks are of significant analytical interest. This is because their data are generated in great quantity, and intermittently, besides that, the data are from a wide variety, and it is widely available to users. Through such data, it is desired to extract knowledge or information that can be used in decision-making activities. In this context, we have identified the lack of methods that apply data mining techniques to the task of analyzing the professional profile of employees. The aim of such analyses is to detect competencies that are of greater interest by being more required and also, to identify their associative relations. Thus, this work introduces MineraSkill methodology that deals with methods to infer the desired profile of a candidate for a job vacancy. In order to do so, we use keyword detection via natural language processing techniques; which are related to others by inferring their association rules. The results are presented in the form of a case study, which analyzed data from LinkedIn, demonstrating the potential of the methodology in indicating trending competencies that are required together.", "title": "" }, { "docid": "7de050ef4260ad858a620f9aa773b5a7", "text": "We present DBToaster, a novel query compilation framework for producing high performance compiled query executors that incrementally and continuously answer standing aggregate queries using in-memory views. DBToaster targets applications that require efficient main-memory processing of standing queries (views) fed by high-volume data streams, recursively compiling view maintenance (VM) queries into simple C++ functions for evaluating database updates (deltas). While today’s VM algorithms consider the impact of single deltas on view queries to produce maintenance queries, we recursively consider deltas of maintenance queries and compile to thoroughly transform queries into code. Recursive compilation successively elides certain scans and joins, and eliminates significant query plan interpreter overheads. 
In this demonstration, we walk through our compilation algorithm, and show the significant performance advantages of our compiled executors over other query processors. We are able to demonstrate 1-3 orders of magnitude improvements in processing times for a financial application and a data warehouse loading application, both implemented across a wide range of database systems, including PostgreSQL, HSQLDB, a commercial DBMS ’A’, the Stanford STREAM engine, and a commercial stream processor ’B’.", "title": "" }, { "docid": "a8164a657a247761147c6012fd5442c9", "text": "Latent variable models are a powerful tool for addressing several tasks in machine learning. However, the algorithms for learning the parameters of latent variable models are prone to getting stuck in a bad local optimum. To alleviate this problem, we build on the intuition that, rather than considering all samples simultaneously, the algorithm should be presented with the training data in a meaningful order that facilitates learning. The order of the samples is determined by how easy they are. The main challenge is that typically we are not provided with a readily computable measure of the easiness of samples. We address this issue by proposing a novel, iterative self-paced learning algorithm where each iteration simultaneously selects easy samples and learns a new parameter vector. The number of samples selected is governed by a weight that is annealed until the entire training data has been considered. We empirically demonstrate that the self-paced learning algorithm outperforms the state of the art method for learning a latent structural SVM on four applications: object localization, noun phrase coreference, motif finding and handwritten digit recognition.", "title": "" }, { "docid": "bd4d58e8a6254f2e0bff15c350410dee", "text": "In this project, we model MOOC dropouts using user activity data. We have several rounds of feature engineering and generate features like activity counts, percentage of visited course objects, and session counts to model this problem. We apply logistic regression, support vector machine, gradient boosting decision trees, AdaBoost, and random forest to this classification problem. Our best model is GBDT, achieving AUC of 0.8763, about 3% off the KDD winner.", "title": "" }, { "docid": "1b0dcde6dceb85c4f6278f6944f607e8", "text": "Firms around the world have been implementing enterprise resource planning (ERP) systems since the 1990s to have an uniform information system in their respective organizations and to reengineer their business processes. Through a case type analysis conducted in six manufacturing firms that have one of the widely used ERP systems, various contextual factors that influenced these firms to implement this technology were understood using the six-stage model proposed by Kwon and Zmud. Three types of ERP systems, viz. SAP, Baan and Oracle ERP were studied in this research. Implementation of ERP systems was found to follow the stage model. The findings from the process model were used to develop the items for the causal model and in identifying appropriate constructs to group those items. In order to substantiate that the constructs developed to measure the causal model were congruent with the findings based on qualitative analysis, i.e. that the instrument appropriately reflects the understanding of the case interview; ‘triangulation’ technique was used. 
The findings from the qualitative study and the results from the quantitative study were found to be equivalent, thus, ensuring a fair assessment of the validity and reliability of the instrument developed to test the causal model. The quantitative measures done only at these six firms are not statistically significant but the samples were used as a part of the triangulation method to collect data from multiple sources, to verify the respondents’ understanding of the scales and as an initial measure to see if my understanding from the qualitative studies were accurately reflected by the instrument. This instrument will be pilot tested first and administered to a large sample of firms. # 2002 Elsevier Science B.V. All rights reserved.", "title": "" }, { "docid": "dc9a168fb4c586650b8f11cb5cdd725c", "text": "Neurolinguistic accounts of sentence comprehension identify a network of relevant brain regions, but do not detail the information flowing through them. We investigate syntactic information. Does brain activity implicate a computation over hierarchical grammars or does it simply reflect linear order, as in a Markov chain? To address this question, we quantify the cognitive states implied by alternative parsing models. We compare processing-complexity predictions from these states against fMRI timecourses from regions that have been implicated in sentence comprehension. We find that hierarchical grammars independently predict timecourses from left anterior and posterior temporal lobe. Markov models are predictive in these regions and across a broader network that includes the inferior frontal gyrus. These results suggest that while linear effects are wide-spread across the language network, certain areas in the left temporal lobe deal with abstract, hierarchical syntactic representations.", "title": "" }, { "docid": "0e45e57b4e799ebf7e8b55feded7e9e1", "text": "IMPORTANCE\nIt is increasingly evident that Parkinson disease (PD) is not a single entity but rather a heterogeneous neurodegenerative disorder.\n\n\nOBJECTIVE\nTo evaluate available evidence, based on findings from clinical, imaging, genetic and pathologic studies, supporting the differentiation of PD into subtypes.\n\n\nEVIDENCE REVIEW\nWe performed a systematic review of articles cited in PubMed between 1980 and 2013 using the following search terms: Parkinson disease, parkinsonism, tremor, postural instability and gait difficulty, and Parkinson disease subtypes. The final reference list was generated on the basis of originality and relevance to the broad scope of this review.\n\n\nFINDINGS\nSeveral subtypes, such as tremor-dominant PD and postural instability gait difficulty form of PD, have been found to cluster together. Other subtypes also have been identified, but validation by subtype-specific biomarkers is still lacking.\n\n\nCONCLUSIONS AND RELEVANCE\nSeveral PD subtypes have been identified, but the pathogenic mechanisms underlying the observed clinicopathologic heterogeneity in PD are still not well understood. Further research into subtype-specific diagnostic and prognostic biomarkers may provide insights into mechanisms of neurodegeneration and improve epidemiologic and therapeutic clinical trial designs.", "title": "" }, { "docid": "dd1a7e3493b9164af4321db944b4950c", "text": "The emerging optical/wireless topology reconfiguration technologies have shown great potential in improving the performance of data center networks. 
However, it also poses a big challenge on how to find the best topology configurations to support the dynamic traffic demands. In this work, we present xWeaver, a traffic-driven deep learning solution to infer the high-performance network topology online. xWeaver supports a powerful network model that enables the topology optimization over different performance metrics and network architectures. With the design of properly-structured neural networks, it can automatically derive the critical traffic patterns from data traces and learn the underlying mapping between the traffic patterns and topology configurations specific to the target data center. After offline training, xWeaver generates the optimized (or near-optimal) topology configuration online, and can also smoothly update its model parameters for new traffic patterns. We build an optical-circuit-switch-based testbed to demonstrate the function and transmission efficiency of our proposed solution. We further perform extensive simulations to show the significant performance gain of xWeaver, in supporting higher network throughput and smaller flow completion time.", "title": "" }, { "docid": "d57bd5c6426ce818328096c26f06b901", "text": "Introduction Reflexivity is a curious term with various meanings. Finding a definition of reflexivity that demonstrates what it means and how it is achieved is difficult (Colbourne and Sque 2004). Moreover, writings on reflexivity have not been transparent in terms of the difficulties, practicalities and methods of the process (Mauthner and Doucet 2003). Nevertheless, it is argued that an attempt be made to gain ‘some kind of intellectual handle’ on reflexivity in order to make use of it as a guiding standard (Freshwater and Rolfe 2001). The role of reflexivity in the many and varied qualitative methodologies is significant. It is therefore a concept of particular relevance to nursing as qualitative methodologies play a principal function in nursing enquiry. Reflexivity assumes a pivotal role in feminist research (King 1994). It is also paramount in participatory action research (Robertson 2000), ethnographies, and hermeneutic and post-structural approaches (Koch and Harrington 1998). Furthermore, it plays an integral part in medical case study research reflexivity epistemological critical feminist ▲ ▲ ▲ ▲ k e y w o rd s", "title": "" }, { "docid": "0c025ec05a1f98d71c9db5bfded0a607", "text": "Many organizations, such as banks, airlines, telecommunications companies, and police departments, routinely use queueing models to help determine capacity levels needed to respond to experienced demands in a timely fashion. Though queueing analysis has been used in hospitals and other healthcare settings, its use in this sector is not widespread. Yet, given the pervasiveness of delays in healthcare and the fact that many healthcare facilities are trying to meet increasing demands with tightly constrained resources, queueing models can be very useful in developing more effective policies for bed allocation and staffing, and in identifying other opportunities for improving service. Queueing analysis is also a key tool in estimating capacity requirements for possible future scenarios, including demand surges due to new diseases or acts of terrorism. This chapter describes basic queueing models as well as some simple modifications and extensions that are particularly useful in the healthcare setting, and give examples of their use. 
The critical issue of data requirements is also discussed as well as model choice, model building and the interpretation and use of results.", "title": "" }, { "docid": "eb2459cbb99879b79b94653c3b9ea8ef", "text": "Extending the success of deep neural networks to natural language understanding and symbolic reasoning requires complex operations and external memory. Recent neural program induction approaches have attempted to address this problem, but are typically limited to differentiable memory, and consequently cannot scale beyond small synthetic tasks. In this work, we propose the Manager-Programmer-Computer framework, which integrates neural networks with non-differentiable memory to support abstract, scalable and precise operations through a friendly neural computer interface. Specifically, we introduce a Neural Symbolic Machine, which contains a sequence-to-sequence neural \"programmer\", and a nondifferentiable \"computer\" that is a Lisp interpreter with code assist. To successfully apply REINFORCE for training, we augment it with approximate gold programs found by an iterative maximum likelihood training process. NSM is able to learn a semantic parser from weak supervision over a large knowledge base. It achieves new state-of-the-art performance on WEBQUESTIONSSP, a challenging semantic parsing dataset, with weak supervision. Compared to previous approaches, NSM is end-to-end, therefore does not rely on feature engineering or domain specific knowledge.", "title": "" }, { "docid": "32670b62c6f6e7fa698e00f7cf359996", "text": "Four cases of self-poisoning with 'Roundup' herbicide are described, one of them fatal. One of the survivors had a protracted hospital stay and considerable clinical and laboratory detail is presented. Serious self-poisoning is associated with massive gastrointestinal fluid loss and renal failure. The management of such cases and the role of surfactant toxicity are discussed.", "title": "" }, { "docid": "0784d5907a8e5f1775ad98a25b1b0b31", "text": "The Internet contains billions of images, freely available online. Methods for efficiently searching this incredibly rich resource are vital for a large number of applications. These include object recognition, computer graphics, personal photo collections, online image search tools. In this paper, our goal is to develop efficient image search and scene matching techniques that are not only fast, but also require very little memory, enabling their use on standard hardware or even on handheld devices. Our approach uses recently developed machine learning techniques to convert the Gist descriptor (a real valued vector that describes orientation energies at different scales and orientations within an image) to a compact binary code, with a few hundred bits per image. Using our scheme, it is possible to perform real-time searches with millions from the Internet using a single large PC and obtain recognition results comparable to the full descriptor. Using our codes on high quality labeled images from the LabelMe database gives surprisingly powerful recognition results using simple nearest neighbor techniques.", "title": "" } ]
scidocsrr
938858b3ef9070dae01b1830c47d1ae1
Heterogeneity-entropy based unsupervised feature learning for personality prediction with cross-media data
[ { "docid": "f2603a583b63c1c8f350b3ddabe16642", "text": "We consider the problem of modeling annotated data---data with multiple types where the instance of one type (such as a caption) serves as a description of the other type (such as an image). We describe three hierarchical probabilistic mixture models which aim to describe such data, culminating in correspondence latent Dirichlet allocation, a latent variable model that is effective at modeling the joint distribution of both types and the conditional distribution of the annotation given the primary type. We conduct experiments on the Corel database of images and captions, assessing performance in terms of held-out likelihood, automatic annotation, and text-based image retrieval.", "title": "" }, { "docid": "6200d3c4435ae34e912fc8d2f92e904b", "text": "The problem of cross-modal retrieval, e.g., using a text query to search for images and vice-versa, is considered in this paper. A novel model involving correspondence autoencoder (Corr-AE) is proposed here for solving this problem. The model is constructed by correlating hidden representations of two uni-modal autoencoders. A novel optimal objective, which minimizes a linear combination of representation learning errors for each modality and correlation learning error between hidden representations of two modalities, is used to train the model as a whole. Minimization of correlation learning error forces the model to learn hidden representations with only common information in different modalities, while minimization of representation learning error makes hidden representations are good enough to reconstruct input of each modality. A parameter $\\alpha$ is used to balance the representation learning error and the correlation learning error. Based on two different multi-modal autoencoders, Corr-AE is extended to other two correspondence models, here we called Corr-Cross-AE and Corr-Full-AE. The proposed models are evaluated on three publicly available data sets from real scenes. We demonstrate that the three correspondence autoencoders perform significantly better than three canonical correlation analysis based models and two popular multi-modal deep models on cross-modal retrieval tasks.", "title": "" } ]
[ { "docid": "653ca5c9478b1b1487fc24eeea8c1677", "text": "A fundamental question in information theory and in computer science is how to measure similarity or the amount of shared information between two sequences. We have proposed a metric, based on Kolmogorov complexity, to answer this question and have proven it to be universal. We apply this metric in measuring the amount of shared information between two computer programs, to enable plagiarism detection. We have designed and implemented a practical system SID (Software Integrity Diagnosis system) that approximates this metric by a heuristic compression algorithm. Experimental results demonstrate that SID has clear advantages over other plagiarism detection systems. SID system server is online at http://software.bioinformatics.uwaterloo.ca/SID/.", "title": "" }, { "docid": "90378605e6ee192cfedf60d226f8cacf", "text": "Ever since the introduction of freely programmable hardware components into modern graphics hardware, graphics processing units (GPUs) have become increasingly popular for general purpose computations. Especially when applied to computer vision algorithms where a Single set of Instructions has to be executed on Multiple Data (SIMD), GPU-based algorithms can provide a major increase in processing speed compared to their CPU counterparts. This paper presents methods that take full advantage of modern graphics card hardware for real-time scale invariant feature detection and matching. The focus lies on the extraction of feature locations and the generation of feature descriptors from natural images. The generation of these feature-vectors is based on the Speeded Up Robust Features (SURF) method [1] due to its high stability against rotation, scale and changes in lighting condition of the processed images. With the presented methods feature detection and matching can be performed at framerates exceeding 100 frames per second for 640 times 480 images. The remaining time can then be spent on fast matching against large feature databases on the GPU while the CPU can be used for other tasks.", "title": "" }, { "docid": "68982ce5d5a61584f125856b10e0653f", "text": "The mature human brain is organized into a collection of specialized functional networks that flexibly interact to support various cognitive functions. Studies of development often attempt to identify the organizing principles that guide the maturation of these functional networks. In this report, we combine resting state functional connectivity MRI (rs-fcMRI), graph analysis, community detection, and spring-embedding visualization techniques to analyze four separate networks defined in earlier studies. As we have previously reported, we find, across development, a trend toward 'segregation' (a general decrease in correlation strength) between regions close in anatomical space and 'integration' (an increased correlation strength) between selected regions distant in space. The generalization of these earlier trends across multiple networks suggests that this is a general developmental principle for changes in functional connectivity that would extend to large-scale graph theoretic analyses of large-scale brain networks. Communities in children are predominantly arranged by anatomical proximity, while communities in adults predominantly reflect functional relationships, as defined from adult fMRI studies. 
In sum, over development, the organization of multiple functional networks shifts from a local anatomical emphasis in children to a more \"distributed\" architecture in young adults. We argue that this \"local to distributed\" developmental characterization has important implications for understanding the development of neural systems underlying cognition. Further, graph metrics (e.g., clustering coefficients and average path lengths) are similar in child and adult graphs, with both showing \"small-world\"-like properties, while community detection by modularity optimization reveals stable communities within the graphs that are clearly different between young children and young adults. These observations suggest that early school age children and adults both have relatively efficient systems that may solve similar information processing problems in divergent ways.", "title": "" }, { "docid": "81fcb5705818c07f8d084c073f43bda4", "text": "Complexes of physically interacting proteins constitute fundamental functional units responsible for driving biological processes within cells. A faithful reconstruction of the entire set of complexes is therefore essential to understand the functional organisation of cells. In this review, we discuss the key contributions of computational methods developed till date (approximately between 2003 and 2015) for identifying complexes from the network of interacting proteins (PPI network). We evaluate in depth the performance of these methods on PPI datasets from yeast, and highlight their limitations and challenges, in particular at detecting sparse and small or sub-complexes and discerning overlapping complexes. We describe methods for integrating diverse information including expression profiles and 3D structures of proteins with PPI networks to understand the dynamics of complex formation, for instance, of time-based assembly of complex subunits and formation of fuzzy complexes from intrinsically disordered proteins. Finally, we discuss methods for identifying dysfunctional complexes in human diseases, an application that is proving invaluable to understand disease mechanisms and to discover novel therapeutic targets. We hope this review aptly commemorates a decade of research on computational prediction of complexes and constitutes a valuable reference for further advancements in this exciting area.", "title": "" }, { "docid": "2cd5e92b5705753d10fc5949936d43ef", "text": "Traditional flow monitoring provides a high-level view of network communications by reporting the addresses, ports, and byte and packet counts of a flow. This data is valuable, but it gives little insight into the actual content or context of a flow. To obtain this missing insight, we investigated intra-flow data, that is, information about events that occur inside of a flow that can be conveniently collected, stored, and analyzed within a flow monitoring framework. The focus of our work is on new types of data that are independent of protocol details, such as the lengths and arrival times of messages within a flow. These data elements have the attractive property that they apply equally well to both encrypted and unencrypted flows. Protocol-aware telemetry, specifically TLS-aware telemetry, is also analyzed. In this paper, we explore the benefits of enhanced telemetry, desirable properties of new intra-flow data features with respect to a flow monitoring system, and how best to use machine learning classifiers that operate on this data. 
We provide results on millions of flows processed by our open source program. Finally, we show that leveraging appropriate data features and simple machine learning models can successfully identify threats in encrypted network traffic.", "title": "" }, { "docid": "47b39a8839d536d57c692781d61f2b5e", "text": "Recently, stream data mining applications has drawn vital attention from several research communities. Stream data is continuous form of data which is distinguished by its online nature. Traditionally, machine learning area has been developing learning algorithms that have certain assumptions on underlying distribution of data such as data should have predetermined distribution. Such constraints on the problem domain lead the way for development of smart learning algorithms performance is theoretically verifiable. Real-word situations are different than this restricted model. Applications usually suffers from problems such as unbalanced data distribution. Additionally, data picked from non-stationary environments are also usual in real world applications, resulting in the “concept drift” which is related with data stream examples. These issues have been separately addressed by the researchers, also, it is observed that joint problem of class imbalance and concept drift has got relatively little research. If the final objective of clever machine learning techniques is to be able to address a broad spectrum of real world applications, then the necessity for a universal framework for learning from and tailoring (adapting) to, environment where drift in concepts may occur and unbalanced data distribution is present can be hardly exaggerated. In this paper, we first present an overview of issues that are observed in stream data mining scenarios, followed by a complete review of recent research in dealing with each of the issue.", "title": "" }, { "docid": "85e76a44cf95521296a92dadcbc5e8d0", "text": "This paper presents a four-channel bi-directional core chip in 0.13 um CMOS for X-band phased array Transmit/Receive (T/R) module. Each channel consists of a 5-bit step attenuator, a 6-bit phase shifter, bi-directional gain blocks (BDGB), and a bi-directional amplifier (BDA). Additional circuits such as low drop out (LDO) regulator, bias circuits with band-gap reference (BGR), and serial to parallel interface (SPI) are integrated for stable biasing and ease of interface. The chip size is 6.9 × 1.6 mm2 including pads which corresponds to 2.8 mm2 per channel. The phase and attenuation coverage is 360° with the LSB of 5.625°, and 31dB with the LSB of 1dB, respectively. The RMS phase error is better than 2.3°, and the RMS attenuation error is better than 0.25 dB at 9-10 GHz. The Tx mode reference-state gain in each channel is 11.3-12.2 dB including the 4-way power combiner insertion losses ideally 6 dB, and the Rx mode gain is 8.6-9.5 dB at 9-10 GHz. The output P1dB in Tx mode is > 11 dBm at 9-10 GHz. To the best of authors' knowledge, this is the smallest size per channel X-band core chip in CMOS technology with bi-directional operation and competitive RF performance to-date.", "title": "" }, { "docid": "2dabcec8851189e66ab223c1da142446", "text": "Use-after-free vulnerabilities have become an important class of security problems due to the existence of mitigations that protect against other types of vulnerabilities. The effects of their exploitation can be just as devastating as exploiting a buffer overflow, potentially resulting in full code execution within the vulnerable program. 
Few protections exist against these types of vulnerabilities and they are particularly hard to discover through manual code inspection. In this paper we present FreeSentry: a mitigation that protects against use-after-free vulnerabilities by inserting dynamic runtime checks that invalidate pointers when the associated memory is released. If such an invalidated pointer is accessed, the program will subsequently crash, preventing an attacker from exploiting the vulnerability. When checking dynamically allocated memory, our approach has a moderate performance overhead on the SPEC CPU benchmarks: running with a geometric mean performance impact of around 25%. It has no overhead when deployed on widely used server side daemons such as OpenSSH or the Apache HTTP daemon. FreeSentry also discovered a previously unknown use-after-free vulnerability in one of the programs in SPEC CPU2000 benchmarks: perlbmk. This vulnerability seems to have been missed by other mitigations.", "title": "" }, { "docid": "54dec214bd1bdf573d3ae356f2d1a8a3", "text": "Multi-scale deep CNNs have been used successfully for problems mapping each pixel to a label, such as depth estimation and semantic segmentation. It has also been shown that such architectures are reusable and can be used for multiple tasks. These networks are typically trained independently for each task by varying the output layer(s) and training objective. In this work we present a new model for simultaneous depth estimation and semantic segmentation from a single RGB image. Our approach demonstrates the feasibility of training parts of the model for each task and then fine tuning the full, combined model on both tasks simultaneously using a single loss function. Furthermore we couple the deep CNN with fully connected CRF, which captures the contextual relationships and interactions between the semantic and depth cues improving the accuracy of the final results. The proposed model is trained and evaluated on NYUDepth V2 dataset [23] outperforming the state of the art methods on semantic segmentation and achieving comparable results on the task of depth estimation.", "title": "" }, { "docid": "757543cc9590115c77fa1a91717d6f10", "text": "A simple analytical method is developed to compare the combinations of stator and rotor pole numbers in flux-switching permanent magnet (PM) machines in terms of back electromotive force (EMF) and electromagnetic torque. The winding connections and winding factors of machines having all poles and alternate poles wound, and different numbers of phases, from two to six, are determined by the coil-EMF vectors. Their differences from analyzing the conventional fractional-slot PM machines with concentrated nonoverlapping windings are highlighted. The general conditions are established for balanced symmetrical back-EMF waveform. It shows that the optimized rotor pole number should be close to the number of stator poles, whereas larger torque can be obtained by the machine with relatively higher rotor pole number. The analysis is validated by finite-element analyses and experiment.", "title": "" }, { "docid": "3727ee51255d85a9260e1e92cc5b7ca7", "text": "Electing a leader is a classical problem in distributed computing system. Synchronization between processes often requires one process acting as a coordinator. If an elected leader node fails, the other nodes of the system need to elect another leader without much wasting of time. 
The bully algorithm is a classical approach for electing a leader in a synchronous distributed computing system, which is used to determine the process with highest priority number as the coordinator. In this paper, we have discussed the limitations of Bully algorithm and proposed a simple and efficient method for the Bully algorithm which reduces the number of messages during the election. Our analytical simulation shows that, our proposed algorithm is more efficient than the Bully algorithm with fewer messages passing and fewer stages.", "title": "" }, { "docid": "07d419650b465af46a8e55662fd8460d", "text": "Knowledge representation learning aims at modeling knowledge graph by encoding entities and relations into a low dimensional space. Most of the traditional works for knowledge embedding need negative sampling to minimize a marginbased ranking loss. However, those works construct negative samples through a random mode, by which the samples are often too trivial to fit the model efficiently. In this paper, we propose a novel knowledge representation learning framework based on Generative Adversarial Networks (GAN). In this GAN-based framework, we take advantage of a generator to obtain high-quality negative samples. Meanwhile, the discriminator in GAN learns the embeddings of the entities and relations in knowledge graph. Thus, we can incorporate the proposed GAN-based framework into various traditional models to improve the ability of knowledge representation learning. Experimental results show that our proposed GANbased framework outperforms baselines on triplets classification and link prediction tasks.", "title": "" }, { "docid": "61309b5f8943f3728f714cd40f260731", "text": "Article history: Received 4 January 2011 Received in revised form 1 August 2011 Accepted 13 August 2011 Available online 15 September 2011 Advertising media are a means of communication that creates different marketing and communication results among consumers. Over the years, newspaper, magazine, TV, and radio have provided a one-way media where information is broadcast and communicated. Due to the widespread application of the Internet, advertising has entered into an interactive communications mode. In the advent of 3G broadband mobile communication systems and smartphone devices, consumers' preferences can be pre-identified and advertising messages can therefore be delivered to consumers in a multimedia format at the right time and at the right place with the right message. In light of this new advertisement possibility, designing personalized mobile advertising to meet consumers' needs becomes an important issue. This research uses the fuzzy Delphi method to identify the key personalized attributes in a personalized mobile advertising message for different products. Results of the study identify six important design attributes for personalized advertisements: price, preference, promotion, interest, brand, and type of mobile device. As personalized mobile advertising becomes more integrated in people's daily activities, its pros and cons and social impact are also discussed. The research result can serve as a guideline for the key parties in mobile marketing industry to facilitate the development of the industry and ensure that advertising resources are properly used. © 2011 Elsevier Inc. 
All rights reserved.", "title": "" }, { "docid": "e56fb0a5466a2a6067db9016fc1f7f1c", "text": "©Rabobank,2012 iv ManagementSummary", "title": "" }, { "docid": "470a363ba2e5b480e638f372c06bc140", "text": "In this paper, we describe a miniature climbing robot, 96 x 46 x 64 [mm], able to climb ferromagnetic surfaces and to make inner plane to plane transition using only two degrees of freedom. Our robot, named TRIPILLAR, combines magnetic caterpillars and magnets to climb planar ferromagnetic surfaces. Two triangular tracks are mounted in a differential drive mode, which allows squid steering and on spot turning. Exploiting the particular geometry and magnetic properties of this arrangement, TRIPILLAR is able to transit between intersecting surfaces. The intersection angle ranges from -10° to 90° on the pitch angle of the coordinate system of the robot regardless of the orientation of gravity. A possible path is to move from ground to ceiling and back. This achievement opens new avenues for mobile robotics inspection of ferromagnetic industrial structure with stringent size restriction, like the one encountered in power plants.", "title": "" }, { "docid": "c86cc66ccc8f573026e66e7e083280cc", "text": "This book comes with “batteries included” (a reference to the phrase often used to explain the popularity of the Python programming language). It is the companion book to an impressive open-source software library called the Natural Language Toolkit (NLTK), written in Python. NLTK combines language processing tools (tokenizers, stemmers, taggers, syntactic parsers, semantic analyzers) and standard data sets (corpora and tools to access the corpora in an efficient and uniform manner). Although the book builds on the NLTK library, it covers only a relatively small part of what can be done with it. The combination of the book with NLTK, a growing system of carefully designed, maintained, and documented code libraries, is an extraordinary resource that will dramatically influence the way computational linguistics is taught. The book attempts to cater to a large audience: It is a textbook on computational linguistics for science and engineering students; it also serves as practical documentation for the NLTK library, and it finally attempts to provide an introduction to programming and algorithm design for humanities students. I have used the book and its earlier on-line versions to teach advanced undergraduate and graduate students in computer science in the past eight years. The book adopts the following approach:", "title": "" }, { "docid": "1323c06ef61451c87e302939a3b0d4bd", "text": "BACKGROUND\nLean and Six Sigma are improvement methodologies developed in the manufacturing industry and have been applied to healthcare settings since the 1990 s. They use a systematic and reproducible approach to provide Quality Improvement (QI), with a flexible process that can be applied to a range of outcomes across different patient groups. This review assesses the literature with regard to the use and utility of Lean and Six Sigma methodologies in surgery.\n\n\nMETHODS\nMEDLINE, Embase, PsycINFO, Allied and Complementary Medicine Database, British Nursing Index, Cumulative Index to Nursing and Allied Health Literature, Health Business Elite and the Health Management Information Consortium were searched in January 2014. 
Experimental studies were included if they assessed the use of Lean or Six Sigma on the ability to improve specified outcomes in surgical patients.\n\n\nRESULTS\nOf the 124 studies returned, 23 were suitable for inclusion with 11 assessing Lean, 6 Six Sigma and 6 Lean Six Sigma. The broad range of outcomes can be collated into six common aims: to optimise outpatient efficiency, to improve operating theatre efficiency, to decrease operative complications, to reduce ward-based harms, to reduce mortality and to limit unnecessary cost and length of stay. The majority of studies (88%) demonstrate improvement; however high levels of systematic bias and imprecision were evident.\n\n\nCONCLUSION\nLean and Six Sigma QI methodologies have the potential to produce clinically significant improvement for surgical patients. However there is a need to conduct high-quality studies with low risk of systematic bias in order to further understand their role.", "title": "" }, { "docid": "f5bb79e1f4d7ee7a23f9841078971d1c", "text": "In the present paper we describe TectoMT, a multi-purpose open-source NLP framework. It allows for fast and efficient development of NLP applications by exploiting a wide range of software modules already integrated in TectoMT, such as tools for sentence segmentation, tokenization, morphological analysis, POS tagging, shallow and deep syntax parsing, named entity recognition, anaphora resolution, tree-to-tree translation, natural language generation, word-level alignment of parallel corpora, and other tasks. One of the most complex applications of TectoMT is the English-Czech machine translation system with transfer on deep syntactic (tectogrammatical) layer. Several modules are available also for other languages (German, Russian, Arabic). Where possible, modules are implemented in a language-independent way, so they can be reused in many applications.", "title": "" }, { "docid": "da87c8385ac485fe5d2903e27803c801", "text": "It's not surprisingly when entering this site to get the book. One of the popular books now is the polygon mesh processing. You may be confused because you can't find the book in the book store around your city. Commonly, the popular book will be sold quickly. And when you have found the store to buy the book, it will be so hurt when you run out of it. This is why, searching for this popular book in this website will give you benefit. You will not run out of this book.", "title": "" }, { "docid": "7d14d06a67a87006ac271c16b1c91b16", "text": "Anti-malware vendors receive daily thousands of potentially malicious binaries to analyse and categorise before deploying the appropriate defence measure. Considering the limitations of existing malware analysis and classification methods, we present MalClassifier, a novel privacy-preserving system for the automatic analysis and classification of malware using network flow sequence mining. MalClassifier allows identifying the malware family behind detected malicious network activity without requiring access to the infected host or malicious executable reducing overall response time. MalClassifier abstracts the malware families' network flow sequence order and semantics behaviour as an n-flow. By mining and extracting the distinctive n-flows for each malware family, it automatically generates network flow sequence behaviour profiles. These profiles are used as features to build supervised machine learning classifiers (K-Nearest Neighbour and Random Forest) for malware family classification. 
We compute the degree of similarity between a flow sequence and the extracted profiles using a novel fuzzy similarity measure that computes both the similarity between flow attributes and the similarity between the order of the flow sequences. For classifier performance evaluation, we use network traffic datasets of ransomware and botnets, obtaining a 96% F-measure for family classification. MalClassifier is resilient to malware evasion through flow sequence manipulation, maintaining the classifier's high accuracy. Our results demonstrate that this type of network flow-level sequence analysis is highly effective in malware family classification, providing insights into recurring malware network flow patterns.", "title": "" } ]
scidocsrr
399928182d7ff1bee69ec08cb5a56a4c
Predicting Commercial Activeness over Urban Big Data
[ { "docid": "b294ca2034fa4133e8f7091426242244", "text": "The development of a city gradually fosters different functional regions, such as educational areas and business districts. In this paper, we propose a framework (titled DRoF) that Discovers Regions of different Functions in a city using both human mobility among regions and points of interests (POIs) located in a region. Specifically, we segment a city into disjointed regions according to major roads, such as highways and urban express ways. We infer the functions of each region using a topic-based inference model, which regards a region as a document, a function as a topic, categories of POIs (e.g., restaurants and shopping malls) as metadata (like authors, affiliations, and key words), and human mobility patterns (when people reach/leave a region and where people come from and leave for) as words. As a result, a region is represented by a distribution of functions, and a function is featured by a distribution of mobility patterns. We further identify the intensity of each function in different locations. The results generated by our framework can benefit a variety of applications, including urban planning, location choosing for a business, and social recommendations. We evaluated our method using large-scale and real-world datasets, consisting of two POI datasets of Beijing (in 2010 and 2011) and two 3-month GPS trajectory datasets (representing human mobility) generated by over 12,000 taxicabs in Beijing in 2010 and 2011 respectively. The results justify the advantages of our approach over baseline methods solely using POIs or human mobility.", "title": "" } ]
[ { "docid": "e494bd8d686605cdf10067781a8f36c9", "text": "The purpose of this paper is to examine the role of two basic types of learning in contemporary organizations – incremental (knowledge exploitation) and radical learning (knowledge exploration) – in making organization’s strategic decisions. In achieving this goal a conceptual model of influence of learning types on the nature of strategic decision making and their outcomes was formed, on the basis of which the empirical research was conducted, encompassing 54 top managers in large Croatian companies. The paper discusses the nature of organizational learning and decision making at strategic management level. The results obtained are suggesting that there is a relationship between managers' learning type and decision making approaches at strategic management level, as well as there is the interdependence between these two processes with strategic decision making outcomes. Within these results there are interesting insights, such as that the effect of radical learning on analytical decision making approach is significantly weaker and narrower when compared to the effect of incremental learning on the same approach, and that analytical decision making approach does not affect strategic decision making outcomes.", "title": "" }, { "docid": "9680944f9e6b4724bdba752981845b68", "text": "A software product line is a set of program variants, typically generated from a common code base. Feature models describe variability in product lines by documenting features and their valid combinations. In product-line engineering, we need to reason about variability and program variants for many different tasks. For example, given a feature model, we might want to determine the number of all valid feature combinations or compute specific feature combinations for testing. However, we found that contemporary reasoning approaches can only reason about feature combinations, not about program variants, because they do not take abstract features into account. Abstract features are features used to structure a feature model that, however, do not have any impact at implementation level. Using existing feature-model reasoning mechanisms for program variants leads to incorrect results. Hence, although abstract features represent domain decisions that do not affect the generation of a program variant. We raise awareness of the problem of abstract features for different kinds of analyses on feature models. We argue that, in order to reason about program variants, abstract features should be made explicit in feature models. We present a technique based on propositional formulas that enables to reason about program variants rather than feature combinations. In practice, our technique can save effort that is caused by considering the same program variant multiple times, for example, in product-line testing.", "title": "" }, { "docid": "f906bfcbfc8c01358b198ac8fd5c3fea", "text": "Path load balancing is used for distributing workload across an array of paths to increase network reliability and optimize link utilization. However, it is not easy to realize the load balancing globally in traditional networks as the whole status of the network is difficult to obtain. To address this problem, we propose the Fuzzy Synthetic Evaluation Mechanism (FSEM), a path load balancing solution based on Software Defined Networking (SDN). 
In this mechanism, the network traffic is allocated to the paths operated by Open Flow switches, where the flow-handling rules are installed by the central SDN controller. The paths can be dynamically adjusted with the aid of FSEM according to the global view of the network. Experimental results verify that the proposed solution can effectively balance the traffic and avoid unexpected breakdown caused by link failure. The overall network performance is also improved as well.", "title": "" }, { "docid": "d9a87325efbd29520c37ec46531c6062", "text": "Predicting the risk of potential diseases from Electronic Health Records (EHR) has attracted considerable attention in recent years, especially with the development of deep learning techniques. Compared with traditional machine learning models, deep learning based approaches achieve superior performance on risk prediction task. However, none of existing work explicitly takes prior medical knowledge (such as the relationships between diseases and corresponding risk factors) into account. In medical domain, knowledge is usually represented by discrete and arbitrary rules. Thus, how to integrate such medical rules into existing risk prediction models to improve the performance is a challenge. To tackle this challenge, we propose a novel and general framework called PRIME for risk prediction task, which can successfully incorporate discrete prior medical knowledge into all of the state-of-the-art predictive models using posterior regularization technique. Different from traditional posterior regularization, we do not need to manually set a bound for each piece of prior medical knowledge when modeling desired distribution of the target disease on patients. Moreover, the proposed PRIME can automatically learn the importance of different prior knowledge with a log-linear model.Experimental results on three real medical datasets demonstrate the effectiveness of the proposed framework for the task of risk prediction", "title": "" }, { "docid": "e901667d844155a6049b6cc28dfc34a1", "text": "Cyber-physical technologies enable event-driven applications, which monitor in real-time the occurrence of certain inherently stochastic incidents. Those technologies are being widely deployed in cities around the world and one of their critical aspects is energy consumption, as they are mostly battery powered. The most representative examples of such applications today is smart parking. Since parking sensors are devoted to detect parking events in almost-real time, strategies like data aggregation are not well suited to optimize energy consumption. Furthermore, data compression is pointless, as events are essentially binary entities. Therefore, this paper introduces the concept of Lean Sensing, which enables the relaxation of sensing accuracy at the benefit of improved operational costs. To this end, this paper departs from the concept of instantaneous randomness and it explores the correlation structure that emerges from it in complex systems. Then, it examines the use of this system-wide aggregated contextual information to optimize power consumption, thus going in the opposite way; from the system-level representation to individual device power consumption. The discussed techniques include customizing the data acquisition to temporal correlations (i.e, to adapt sensor behavior to the expected activity) and inferring the system-state from incomplete information based on spatial correlations. 
These techniques are applied to real-world smart-parking application deployments, aiming to evaluate the impact that a number of system-level optimization strategies have on devices power consumption.", "title": "" }, { "docid": "91136fd0fd8e15ed1d6d6bf7add489f0", "text": "Microelectromechanical Systems (MEMS) technology has already led to advances in optical imaging, scanning, communications and adaptive applications. Many of these efforts have been approached without the use of feedback control techniques that are common in macro-scale operations to ensure repeatable and precise performance. This paper examines control techniques and related issues of precision performance as applied to a one-degree-of-freedom electrostatic MEMS micro mirror.", "title": "" }, { "docid": "be546b75d515e9f84d7b1afd3fcb347a", "text": "Two-wheeled self-balancing robot is a kind of unstable, nonlinear, strong coupling system. On the basis of analyzing the method of Linear Quadratic Regulator(LQR) and PID-BP-RBF, this paper proposed a new balance control method based on LQR and Neural Network(NN)(LQR-NN).In this method, the balance controller is designed as a LQR controller contained a neural network inside. The LQR's optimal parameters are used to initialize the neural network, which would make the network have the optimum initial values and converge fast. The new method can overcome the inaccuracy modeling because of system linearization based on LQR, and also has the self-turning mechanism without great computation load which the NN method brings. Experiments show that the balance controller based on LQR-NN has better balancing control to the robot and also improved the system's robustness significantly.", "title": "" }, { "docid": "0c20ed6f2506ecb181909128796c0e5d", "text": "This paper presents a multilevel spin-orbit torque magnetic random access memory (SOT-MRAM). The conventional SOT-MRAMs enables a reliable and energy efficient write operation. However, these cells require two access transistors per cell, hence the efficiency of the SOT-MRAMs can be questioned in high-density memory application. To deal with this obstacle, we propose a multilevel cell which stores two bits per memory cell. In addition, we propose a novel sensing scheme to read out the stored data in the multilevel SOT-MRAM cell. Our simulation results show that the proposed cell can achieve 3X more energy efficient write operation in comparison with the conventional STT-MRAMs. In addition, the proposed cell store two bits without any area penalty in comparison to the conventional one bit SOT-MRAM cells.", "title": "" }, { "docid": "7917e6a788cedd9f1dcb9c3fa132656e", "text": "The smartphone industry has been one of the fastest growing technological areas in recent years. Naturally, the considerable market share of the Android OS and the diversity of app distribution channels besides the official Google Play Store has attracted the attention of malware authors. To deal with the increasing numbers of malicious Android apps in the wild, malware analysts typically rely on analysis tools to extract characteristic information about an app in an automated fashion. While the importance of such tools has been addressed by the research community [8], [24], [25], [27], the resulting prototypes remain limited in terms of analysis capabilities and availability. In this paper we present ANDRUBIS, a completely automated, publicly available and comprehensive analysis system for Android applications. 
ANDRUBIS combines static analysis techniques with dynamic analysis on both Dalvik VM and system level, as well as several stimulation techniques to increase code coverage.", "title": "" }, { "docid": "da1f5a7c5c39f50c70948eeba5cd9716", "text": "Mushrooms have long been used not only as food but also for the treatment of various ailments. Although at its infancy, accumulated evidence suggested that culinary-medicinal mushrooms may play an important role in the prevention of many age-associated neurological dysfunctions, including Alzheimer's and Parkinson's diseases. Therefore, efforts have been devoted to a search for more mushroom species that may improve memory and cognition functions. Such mushrooms include Hericium erinaceus, Ganoderma lucidum, Sarcodon spp., Antrodia camphorata, Pleurotus giganteus, Lignosus rhinocerotis, Grifola frondosa, and many more. Here, we review over 20 different brain-improving culinary-medicinal mushrooms and at least 80 different bioactive secondary metabolites isolated from them. The mushrooms (either extracts from basidiocarps/mycelia or isolated compounds) reduced beta amyloid-induced neurotoxicity and had anti-acetylcholinesterase, neurite outgrowth stimulation, nerve growth factor (NGF) synthesis, neuroprotective, antioxidant, and anti-(neuro)inflammatory effects. The in vitro and in vivo studies on the molecular mechanisms responsible for the bioactive effects of mushrooms are also discussed. Mushrooms can be considered as useful therapeutic agents in the management and/or treatment of neurodegeneration diseases. However, this review focuses on in vitro evidence and clinical trials with humans are needed.", "title": "" }, { "docid": "6e7d629c5dd111df1064b969755863ef", "text": "Recently proposed universal filtered multicarrier (UFMC) system is not an orthogonal system in multipath channel environments and might cause significant performance loss. In this paper, the authors propose a cyclic prefix (CP) based UFMC system and first analyze the conditions for interference-free one-tap equalization in the absence of transceiver imperfections. Then the corresponding signal model and output signal-to-noise ratio expression are derived. In the presence of carrier frequency offset, timing offset, and insufficient CP length, the authors establish an analytical system model as a summation of desired signal, intersymbol interference, intercarrier interference, and noise. New channel equalization algorithms are proposed based on the derived analytical signal model. Numerical results show that the derived model matches the simulation results precisely, and the proposed equalization algorithms improve the UFMC system performance in terms of bit error rate.", "title": "" }, { "docid": "b76462ec4dc505e3e7d4e2126a461668", "text": "This paper describes an effective and efficient image classification framework nominated distributed deep representation learning model (DDRL). The aim is to strike the balance between the computational intensive deep learning approaches (tuned parameters) which are intended for distributed computing, and the approaches that focused on the designed parameters but often limited by sequential computing and cannot scale up. In the evaluation of our approach, it is shown that DDRL is able to achieve state-of-art classification accuracy efficiently on both medium and large datasets. 
The result implies that our approach is more efficient than the conventional deep learning approaches, and can be applied to big data that is too complex for parameter designing focused approaches. More specifically, DDRL contains two main components, i.e., feature extraction and selection. A hierarchical distributed deep representation learning algorithm is designed to extract image statistics and a nonlinear mapping algorithm is used to map the inherent statistics into abstract features. Both algorithms are carefully designed to avoid millions of parameters tuning. This leads to a more compact solution for image classification of big data. We note that the proposed approach is designed to be friendly with parallel computing. It is generic and easy to be deployed to different distributed computing resources. In the experiments, the large∗Corresponding author. Tel.:+86 13981763623; Fax: +86-28-61831655. Email address: ledong@uestc.edu.cn (Le Dong) Preprint submitted to Pattern Recognition July 5, 2016 scale image datasets are classified with a DDRM implementation on Hadoop MapReduce, which shows high scalability and resilience.", "title": "" }, { "docid": "a0029a2a13b25db7dc9f0755be57f0c1", "text": "Diversity in parasite virulence is one of the factors that contribute to the clinical outcome of malaria infections. The association between the severity of Plasmodium falciparum malaria and the number of distinct parasite populations infecting the host (multiplicity of infection) or polymorphism within any of the specific antigen genes was investigated. The study included 164 children presenting with mild and severe malaria from central Uganda where malaria is meso-endemic. The polymorphic regions of the circumsporozoite protein (csp), merozoite surface proteins 1 and 2 (msp1 and msp2), and glutamate-rich protein (glurp) were genotyped by polymerase chain reaction methods and fragment analysis by gel electrophoresis. In a subset of samples fragment analysis was also performed by fluorescent PCR genotyping followed by capillary electrophoresis. The multiplicity of infection (MOI), determined as the highest number of alleles detected within any of the four genetic loci, was significantly higher in severe than in mild malaria cases (mean 3.7 and 3.0, respectively, P = 0.002). No particular genotype or allelic family of msp1 or msp2 was associated with severity of malaria, and nor did the genotyping method reveal any significant difference in MOI when only assessed by msp2 genotyping. Severity of malaria was not linked to the predominance of any particular msp1 or msp2 allelic types, independent of methods used for genotyping. Monitoring the dynamics of multiple clone infections in relation to disease outcome, host immune status and genetic factors will provide more insight into parasite virulence mechanisms.", "title": "" }, { "docid": "39ff54263fa91d9d178a143a49239f68", "text": "A series of 3-(2H-1,2,4-triazol-5-yl)-1,3-thiazolidin-4-one derivatives (7c-l) was designed and synthesized. Their structures have been elucidated based on analytical and spectral data. They were evaluated for their antibacterial and antifungal activities. Compound 7h showed the highest activity against all tested strains, except P. vulgaris, with MIC 8 μg/mL and 4 μg/mL against S. aureus and C. albicans, respectively. Furthermore, Compounds 7c, 7h, and 7j demonstrated moderate anti-mycobacterium activity. The binding mode of the synthesized thiazolidinones to bacterial MurB enzyme was also studied. 
Good interactions between the docked compounds to the MurB active site were observed primarily with Asn83, Arg310, Arg188 and Ser82 amino acid residues.", "title": "" }, { "docid": "9c7afcb568fab9551886174c3f4a329b", "text": "Automatic semantic annotation of data from databases or the web is an important pre-process for data cleansing and record linkage. It can be used to resolve the problem of imperfect field alignment in a database or identify comparable fields for matching records from multiple sources. The annotation process is not trivial because data values may be noisy, such as abbreviations, variations or misspellings. In particular, overlapping features usually exist in a lexicon-based approach. In this work, we present a probabilistic address parser based on linear-chain conditional random fields (CRFs), which allow more expressive token-level features compared to hidden Markov models (HMMs). In additions, we also proposed two general enhancement techniques to improve the performance. One is taking original semi-structure of the data into account. Another is post-processing of the output sequences of the parser by combining its conditional probability and a score function, which is based on a learned stochastic regular grammar (SRG) that captures segment-level dependencies. Experiments were conducted by comparing the CRF parser to a HMM parser and a semi-Markov CRF parser in two real-world datasets. The CRF parser out-performed the HMM parser and the semi-Markov CRF in both datasets in terms of classification accuracy. Leveraging the structure of the data and combining the linear-chain CRF with the SRG further improved the parser to achieve an accuracy of 97% on a postal dataset and 96% on a company dataset.", "title": "" }, { "docid": "3f5a6580d3c8d13a8cefaea9fd6f68b2", "text": "Most theorizing on the relationship between corporate social/environmental performance (CSP) and corporate financial performance (CFP) assumes that the current evidence is too fractured or too variable to draw any generalizable conclusions. With this integrative, quantitative study, we intend to show that the mainstream claim that we have little generalizable knowledge about CSP and CFP is built on shaky grounds. Providing a methodologically more rigorous review than previous efforts, we conduct a meta-analysis of 52 studies (which represent the population of prior quantitative inquiry) yielding a total sample size of 33,878 observations. The metaanalytic findings suggest that corporate virtue in the form of social responsibility and, to a lesser extent, environmental responsibility is likely to pay off, although the operationalizations of CSP and CFP also moderate the positive association. For example, CSP appears to be more highly correlated with accounting-based measures of CFP than with market-based indicators, and CSP reputation indices are more highly correlated with CFP than are other indicators of CSP. This meta-analysis establishes a greater degree of certainty with respect to the CSP–CFP relationship than is currently assumed to exist by many business scholars.", "title": "" }, { "docid": "e29850d33a695ecfdb321019409c0f03", "text": "Repairing double-strand breaks (DSBs) is particularly challenging in pericentromeric heterochromatin, where the abundance of repeated sequences exacerbates the risk of ectopic recombination and chromosome rearrangements. 
Recent studies in Drosophila cells revealed that faithful homologous recombination (HR) repair of heterochromatic DSBs relies on the relocalization of DSBs to the nuclear periphery before Rad51 recruitment. We summarize here the exciting progress in understanding this pathway, including conserved responses in mammalian cells and surprising similarities with mechanisms in yeast that deal with DSBs in distinct sites that are difficult to repair, including other repeated sequences. We will also point out some of the most important open questions in the field and emerging evidence suggesting that deregulating these pathways might have dramatic consequences for human health.", "title": "" }, { "docid": "6cbac94d232ef2cc8192771f52d32d15", "text": "Network functions virtualization (NFV) is an emerging network technology. Instead of deploying hardware equipments for each network functions, virtualized network functions in NFV are realized through virtual machines (VMs) running various software on top of industry standard high volume servers or cloud computing infrastructure. NFV decreases hardware equipment costs and energy consumption, improves operational efficiency and optimizes network configuration. However, potential security issues is a major concern of NFV. In this paper, we survey the challenges and opportunities in NFV security. We describe the NFV architecture design and some potential NFV security issues and challenges. We also present existing NFV security solutions and products. We also survey NFV security use cases and explore promising research directions in this area.", "title": "" }, { "docid": "66af4d496e98e4b407922fbe9970a582", "text": "Automatic summarization of open-domain spoken dialogues is a relatively new research area. This article introduces the task and the challenges involved and motivates and presents an approach for obtaining automatic-extract summaries for human transcripts of multiparty dialogues of four different genres, without any restriction on domain. We address the following issues, which are intrinsic to spoken-dialogue summarization and typically can be ignored when summarizing written text such as news wire data: (1) detection and removal of speech disfluencies; (2) detection and insertion of sentence boundaries; and (3) detection and linking of cross-speaker information units (question-answer pairs). A system evaluation is performed using a corpus of 23 dialogue excerpts with an average duration of about 10 minutes, comprising 80 topical segments and about 47,000 words total. The corpus was manually annotated for relevant text spans by six human annotators. The global evaluation shows that for the two more informal genres, our summarization system using dialogue-specific components significantly outperforms two baselines: (1) a maximum-marginal-relevance ranking algorithm using TFIDF term weighting, and (2) a LEAD baseline that extracts the first n words from a text.", "title": "" }, { "docid": "f51583c6eb5a0d6e27823e0714d40ef5", "text": "Studies of emotion regulation typically contrast two or more strategies (e.g., reappraisal vs. suppression) and ignore variation within each strategy. To address such variation, we focused on cognitive reappraisal and considered the effects of goals (i.e., what people are trying to achieve) and tactics (i.e., what people actually do) on outcomes (i.e., how affective responses change). 
To examine goals, we randomly assigned participants to either increase positive emotion or decrease negative emotion to a negative stimulus. To examine tactics, we categorized participants' reports of how they reappraised. To examine reappraisal outcomes, we measured experience and electrodermal responding. Findings indicated that (a) the goal of increasing positive emotion led to greater increases in positive affect and smaller decreases in skin conductance than the goal of decreasing negative emotion, and (b) use of the reality challenge tactic was associated with smaller increases in positive affect during reappraisal. These findings suggest that reappraisal can be implemented in the service of different emotion goals, using different tactics. Such differences are associated with different outcomes, and they should be considered in future research and applied attempts to maximize reappraisal success.", "title": "" } ]
scidocsrr
74d7dcad0b1dfec38eec24f8fccef8b9
Audio recapture detection using deep learning
[ { "docid": "3223563162967868075a43ca86c1d31a", "text": "Deep learning research aims at discovering learning algorithms that discover multiple levels of distributed representations, with higher levels representing more abstract concepts. Although the study of deep learning has already led to impressive theoretical results, learning algorithms and breakthrough experiments, several challenges lie ahead. This paper proposes to examine some of these challenges, centering on the questions of scaling deep learning algorithms to much larger models and datasets, reducing optimization difficulties due to ill-conditioning or local minima, designing more efficient and powerful inference and sampling procedures, and learning to disentangle the factors of variation underlying the observed data. It also proposes a few forward-looking research directions aimed at overcoming these", "title": "" } ]
[ { "docid": "34e21b8051f3733c077d7087c035be2f", "text": "This paper deals with the synthesis of a speed control strategy for a DC motor drive based on an output feedback backstepping controller. The backstepping method takes into account the non linearities of the system in the design control law and leads to a system asymptotically stable in the context of Lyapunov theory. Simulated results are displayed to validate the feasibility and the effectiveness of the proposed strategy.", "title": "" }, { "docid": "a06d00c783ef31008a622a8500a4ca86", "text": "Wandering is a common and risky behavior in people with dementia (PWD). In this paper, we present a mobile healthcare application to detect wandering patterns in indoor settings. The application harnesses consumer electronics devices including WiFi access points and mobile phones and has been tested successfully in a home environment. Experimental results show that the mobile-health application is able to detect wandering patterns including lapping, pacing and random in real-time. Once wandering is detected, an alert message is sent using SMS (Short Message Service) to attending caregivers or physicians for further examination and timely interventions.", "title": "" }, { "docid": "7555bad7391b1fe2f0336648d035c6f4", "text": "A signal analysis technique is developed for discriminating a set of lower arm and wrist functions using surface EMG signals. Data wete obtained from four electrodes placed around the proximal forearm. The functions analyzed included wrist flexion/extension, wrist abduction/adduction, and forearm pronation/supination. Multivariate autoregression models were derived for each function; discrimination was performed using a multiple-model hypothesis detection technique. This approach extends the work of Graupe and Cline [1] by including spatial correlations and by using a more generalized detection philosophy, based on analysis of the time history of all limb function probabilities. These probabilities are the sufficient statistics for the problem if the EMG data are stationary Gauss-Markov processes. Experimental results on-normal subjects are presented which demonstrate the advantages of using the spatial and time correlation of the signals. This technique should be useful in generating control signals for prosthetic devices.", "title": "" }, { "docid": "6951f051c3fe9ab24259dcc6f812fc68", "text": "User Generated Content has become very popular since the birth of web services such as YouTube allowing the distribution of such user-produced media content in an easy manner. YouTube-like services are different from existing traditional VoD services because the service provider has only limited control over the creation of new content. We analyze how the content distribution in YouTube is realized and then conduct a measurement study of YouTube traffic in a large university campus network. The analysis of the traffic shows that: (1) No strong correlation is observed between global and local popularity; (2) neither time scale nor user population has an impact on the local popularity distribution; (3) video clips of local interest have a high local popularity. Using our measurement data to drive trace-driven simulations, we also demonstrate the implications of alternative distribution infrastructures on the performance of a YouTube-like VoD service. 
The results of these simulations show that client-based local caching, P2P-based distribution, and proxy caching can reduce network traffic significantly and allow faster access to video clips.", "title": "" }, { "docid": "47fcf50c200818440def43ed97d2edd1", "text": "A unique case of accidental hanging due to compression of the neck of an adult by the branches of a coffee tree is reported. The decedent was a 42-year-old male who was found dead in a semi prone position on a slope. His neck was lodged in a wedge formed by two branches of a coffee tree, with his legs angled downwards on the slope. Autopsy revealed two friction abrasions located horizontally on either side of the front of the neck, just above the larynx. The findings were compatible with compression of the neck by the branches of the tree, with the body weight of the decedent contributing to compression. Subsequent complete autopsy examination confirmed the cause of death as hanging. Following an inquest the death was ruled to be accidental.", "title": "" }, { "docid": "89eee86640807e11fa02d0de4862b3a5", "text": "The evolving fifth generation (5G) cellular wireless networks are envisioned to overcome the fundamental challenges of existing cellular networks, for example, higher data rates, excellent end-to-end performance, and user-coverage in hot-spots and crowded areas with lower latency, energy consumption, and cost per information transfer. To address these challenges, 5G systems will adopt a multi-tier architecture consisting of macrocells, different types of licensed small cells, relays, and device-to-device (D2D) networks to serve users with different quality-of-service (QoS) requirements in a spectrum and energy-efficient manner. Starting with the visions and requirements of 5G multi-tier networks, this article outlines the challenges of interference management (e.g. power control, cell association) in these networks with shared spectrum access (i.e. when the different network tiers share the same licensed spectrum). It is argued that the existing interference management schemes will not be able to address the interference management problem in prioritized 5G multi-tier networks where users in different tiers have different priorities for channel access. In this context a survey and qualitative comparison of the existing cell association and power control schemes is provided to demonstrate their limitations for interference management in 5G networks. Open challenges are highlighted and guidelines are provided to modify the existing schemes in order to overcome these limitations and make them suitable for the emerging 5G systems.", "title": "" }, { "docid": "2b471e61a6b95221d9ca9c740660a726", "text": "We propose a low-overhead sampling infrastructure for gathering information from the executions experienced by a program's user community. Several example applications illustrate ways to use sampled instrumentation to isolate bugs. Assertion-dense code can be transformed to share the cost of assertions among many users. Lacking assertions, broad guesses can be made about predicates that predict program errors and a process of elimination used to whittle these down to the true bug. 
Finally, even for non-deterministic bugs such as memory corruption, statistical modeling based on logistic regression allows us to identify program behaviors that are strongly correlated with failure and are therefore likely places to look for the error.", "title": "" }, { "docid": "2bf619a1af1bab48b4b6f57df8f29598", "text": "Alcoholism and drug addiction have marked impacts on the ability of families to function. Much of the literature has been focused on adult members of a family who present with substance dependency. There is limited research into the effects of adolescent substance dependence on parenting and family functioning; little attention has been paid to the parents' experience. This qualitative study looks at the parental perspective as they attempted to adapt and cope with substance dependency in their teenage children. The research looks into family life and adds to family functioning knowledge when the identified client is a youth as opposed to an adult family member. Thirty-one adult caregivers of 21 teenagers were interviewed, resulting in eight significant themes: (1) finding out about the substance dependence problem; (2) experiences as the problems escalated; (3) looking for explanations other than substance dependence; (4) connecting to the parent's own history; (5) trying to cope; (6) challenges of getting help; (7) impact on siblings; and (8) choosing long-term rehabilitation. Implications of this research for clinical practice are discussed.", "title": "" }, { "docid": "332db7a0d5bf73f65e55c6f2e97dd22c", "text": "The knowledge of surface electromyography (SEMG) and the number of applications have increased considerably during the past ten years. However, most methodological developments have taken place locally, resulting in different methodologies among the different groups of users.A specific objective of the European concerted action SENIAM (surface EMG for a non-invasive assessment of muscles) was, besides creating more collaboration among the various European groups, to develop recommendations on sensors, sensor placement, signal processing and modeling. This paper will present the process and the results of the development of the recommendations for the SEMG sensors and sensor placement procedures. Execution of the SENIAM sensor tasks, in the period 1996-1999, has been handled in a number of partly parallel and partly sequential activities. A literature scan was carried out on the use of sensors and sensor placement procedures in European laboratories. In total, 144 peer-reviewed papers were scanned on the applied SEMG sensor properties and sensor placement procedures. This showed a large variability of methodology as well as a rather insufficient description. A special workshop provided an overview on the scientific and clinical knowledge of the effects of sensor properties and sensor placement procedures on the SEMG characteristics. Based on the inventory, the results of the topical workshop and generally accepted state-of-the-art knowledge, a first proposal for sensors and sensor placement procedures was defined. Besides containing a general procedure and recommendations for sensor placement, this was worked out in detail for 27 different muscles. This proposal was evaluated in several European laboratories with respect to technical and practical aspects and also sent to all members of the SENIAM club (>100 members) together with a questionnaire to obtain their comments. 
Based on this evaluation the final recommendations of SENIAM were made and published (SENIAM 8: European recommendations for surface electromyography, 1999), both as a booklet and as a CD-ROM. In this way a common body of knowledge has been created on SEMG sensors and sensor placement properties as well as practical guidelines for the proper use of SEMG.", "title": "" }, { "docid": "9407bdf78114e1369e6cc90283fbe892", "text": "Making machines understand human expressions enables various useful applications in human-machine interaction. In this article, we present a novel facial expression recognition approach with 3D Mesh Convolutional Neural Networks (3DMCNN) and a visual analytics-guided 3DMCNN design and optimization scheme. From an RGBD camera, we first reconstruct a 3D face model of a subject with facial expressions and then compute the geometric properties of the surface. Instead of using regular Convolutional Neural Networks (CNNs) to learn intensities of the facial images, we convolve the geometric properties on the surface of the 3D model using 3DMCNN. We design a geodesic distance-based convolution method to overcome the difficulties raised from the irregular sampling of the face surface mesh. We further present interactive visual analytics for the purpose of designing and modifying the networks to analyze the learned features and cluster similar nodes in 3DMCNN. By removing low-activity nodes in the network, the performance of the network is greatly improved. We compare our method with the regular CNN-based method by interactively visualizing each layer of the networks and analyze the effectiveness of our method by studying representative cases. Testing on public datasets, our method achieves a higher recognition accuracy than traditional image-based CNN and other 3D CNNs. The proposed framework, including 3DMCNN and interactive visual analytics of the CNN, can be extended to other applications.", "title": "" }, { "docid": "f8836ddc384c799d9264b8ea43f9685a", "text": "Pattern matching has proved an extremely powerful and durable notion in functional programming. This paper contributes a new programming notation for type theory which elaborates the notion in various ways. First, as is by now quite well-known in the type theory community, definition by pattern matching becomes a more discriminating tool in the presence of dependent types, since it refines the explanation of types as well as values. This becomes all the more true in the presence of the rich class of datatypes known as inductive families (Dybjer, 1991). Secondly, as proposed by Peyton Jones (1997) for Haskell, and independently rediscovered by us, subsidiary case analyses on the results of intermediate computations, which commonly take place on the right-hand side of definitions by pattern matching, should rather be handled on the left. In simply-typed languages, this subsumes the trivial case of Boolean guards; in our setting it becomes yet more powerful. Thirdly, elementary pattern matching decompositions have a well-defined interface given by a dependent type; they correspond to the statement of an induction principle for the datatype. More general, user-definable decompositions may be defined which also have types of the same general form. Elementary pattern matching may therefore be recast in abstract form, with a semantics given by translation. Such abstract decompositions of data generalize Wadler’s (1987) notion of ‘view’. 
The programmer wishing to introduce a new view of a type T , and exploit it directly in pattern matching, may do so via a standard programming idiom. The type theorist, looking through the Curry–Howard lens, may see this as proving a theorem, one which establishes the validity of a new induction principle for T . We develop enough syntax and semantics to account for this high-level style of programming in dependent type theory. We close with the development of a typechecker for the simply-typed lambda calculus, which furnishes a view of raw terms as either being well-typed, or containing an error. The implementation of this view is ipso facto a proof that typechecking is decidable.", "title": "" }, { "docid": "850f29a1d3c5bc96bb36787aba428331", "text": "In this paper, we introduce a novel framework for WEakly supervised Learning of Deep cOnvolutional neural Networks (WELDON). Our method is dedicated to automatically selecting relevant image regions from weak annotations, e.g. global image labels, and encompasses the following contributions. Firstly, WELDON leverages recent improvements on the Multiple Instance Learning paradigm, i.e. negative evidence scoring and top instance selection. Secondly, the deep CNN is trained to optimize Average Precision, and fine-tuned on the target dataset with efficient computations due to convolutional feature sharing. A thorough experimental validation shows that WELDON outperforms state-of-the-art results on six different datasets.", "title": "" }, { "docid": "cebeaf1d155d5d7e4c62ec84cf36c087", "text": "This paper presents the comparison of power captured by vertical and horizontal axis wind turbine (VAWT and HAWT). According to Betz, the limit of maximum coefficient power (CP) is 0.59. In this case CP is important parameter that determines the power extracted by a wind turbine we made. This paper investigates the impact of wind speed variation of wind turbine to extract the power. For VAWT we used H-darrieus type whose swept area is 3.14 m2 and so is HAWT. The wind turbines have 3 blades for each type. The air foil of both wind turbines are NACA 4412. We tested the model of wind turbine with various wind velocity which affects the performance. We have found that CP of HAWT is 0.54 with captured maximum power is 1363.6 Watt while the CP of VAWT is 0.34 with captured maximum power is 505.69 Watt. The power extracted of both wind turbines seems that HAWT power is much better than VAWT power.", "title": "" }, { "docid": "1c80fdc30b2b37443367dae187fbb376", "text": "The web is a catalyst for drawing people together around shared goals, but many groups never reach critical mass. It can thus be risky to commit time or effort to a goal: participants show up only to discover that nobody else did, and organizers devote significant effort to causes that never get off the ground. Crowdfunding has lessened some of this risk by only calling in donations when an effort reaches a collective monetary goal. However, it leaves unsolved the harder problem of mobilizing effort, time and participation. We generalize the concept into activation thresholds, commitments that are conditioned on others' participation. With activation thresholds, supporters only need to show up for an event if enough other people commit as well. Catalyst is a platform that introduces activation thresholds for on-demand events. For more complex coordination needs, Catalyst also provides thresholds based on time or role (e.g., a bake sale requiring commitments for bakers, decorators, and sellers). 
In a multi-month field deployment, Catalyst helped users organize events including food bank volunteering, on-demand study groups, and mass participation events like a human chess game. Our results suggest that activation thresholds can indeed catalyze a large class of new collective efforts.", "title": "" }, { "docid": "4f8ef942fdc47b08ac864f93c33c0fab", "text": "Managing risks in construction projects has been recognised as a very important management process in order to achieve the project objectives in terms of time, cost, quality, safety and environmental sustainability. However, until now most research has focused on some aspects of construction risk management rather than using a systematic and holistic approach to identify risks and analyse the likelihood of occurrence and impacts of these risks. This paper aims to identify and analyse the risks associated with the development of construction projects from project stakeholder and life cycle perspectives. Postal questionnaire surveys were used to collect data. Based on a comprehensive assessment of the likelihood of occurrence and their impacts on the project objectives, this paper identifies twenty major risk factors. This research found that these risks are mainly related to (in ranking) contractors, clients and designers, with few related to government bodies, subcontractors/suppliers and external issues. Among them, “tight project schedule” is recognised to influence all project objectives maximally, whereas “design variations”, “excessive approval procedures in administrative government departments”, “high performance/quality expectation”, “unsuitable construction program planning”, as well as “variations of construction program” are deemed to impact at least four aspects of project objectives. This research also found that these risks spread through the whole project life cycle and many risks occur at more than one phase, with the construction stage as the most risky phase, followed by the feasibility stage. It is concluded that clients, designers and government bodies must work cooperatively from the feasibility phase onwards to address potential risks in time, and contractors and subcontractors with robust construction and management knowledge must be employed early to make sound preparation for carrying out safe, efficient and quality construction activities.", "title": "" }, { "docid": "09ada66e157c6a99c6317a7cb068367f", "text": "Experience design is a relatively new approach to product design. While there are several possible starting points in designing for positive experiences, we start with experience goals that state a profound source for a meaningful experience. In this paper, we investigate three design cases that used experience goals as the starting point for both incremental and radical design, and analyse them from the perspective of their potential for design space expansion. Our work addresses the recent call for design research directed toward new interpretations of what could be meaningful to people, which is seen as the source for creating new meanings for products, and thereby, possibly leading to radical innovations. Based on this idea, we think about the design space as a set of possible concepts derived from deep meanings that experience goals help to communicate. 
According to our initial results from the small-scale touchpoint design cases, the type of experience goals we use seems to have the potential to generate not only incremental but also radical design ideas.", "title": "" }, { "docid": "bb2153c927ceff61687f5f183d3b9e65", "text": "A new clock-gated flip-flop is presented. The circuit is based on a new clock gating approach that reduces the switching power consumed by the clock signal. It operates with no redundant clock cycles and has a reduced number of transistors, which minimizes the overhead and makes it suitable for data signals with higher switching activity. The proposed flip-flop is used to design a 10-bit binary counter and a 14-bit successive approximation register. These applications have been designed up to the layout level with a 1 V power supply in 90 nm CMOS technology and have been simulated using Spectre. Simulations including parasitics have shown the effectiveness of the new approach in terms of power consumption and transistor count.", "title": "" }, { "docid": "2b98fd7a61fd7c521758651191df74d0", "text": "Nowadays, a great effort is being made to find new alternative renewable energy sources to replace part of nuclear energy production. In this context, this paper presents a new axial counter-rotating turbine for small-hydro applications, developed to recover the energy lost in the release valves of water supply networks. The design of the two PM generators, their mechanical integration in a bulb placed into the water conduit, and the AC-DC Vienna converter developed for these turbines are presented. The sensorless regulation of the two generators is also briefly discussed. Finally, measurements taken on the 2-kW prototype are analyzed and compared with the simulation.", "title": "" }, { "docid": "53b1ac64f63cab0d99092764eed4f829", "text": "We present a new unsupervised topic discovery model for a collection of text documents. In contrast to the majority of state-of-the-art topic models, our model does not break the document's structure, such as paragraphs and sentences. In addition, it preserves word order in the document. As a result, it can generate two levels of topics of different granularity, namely segment-topics and word-topics, and it can also generate n-gram words in each topic. We also develop an approximate inference scheme based on Gibbs sampling. We conduct extensive experiments using publicly available data from different collections and show that our model improves the quality of several text mining tasks, such as the ability to support fine-grained topics with n-gram words in the correlation graph, the ability to segment a document into topically coherent sections, document classification, and document likelihood estimation.", "title": "" } ]
scidocsrr
633f9bb3fd94ab1c8735e9855731df49
Secure and dependable software defined networks
[ { "docid": "2756c08346bfeafaed177a6bf1fde09e", "text": "Current implementations of Internet systems are very hard to be upgraded. The ossification of existing standards restricts the development of more advanced communication systems. New research initiatives, such as virtualization, software-defined radios, and software-defined networks, allow more flexibility for networks. However, until now, those initiatives have been developed individually. We advocate that the convergence of these overlying and complementary technologies can expand the amount of programmability on the network and support different innovative applications. Hence, this paper surveys the most recent research initiatives on programmable networks. We characterize programmable networks, where programmable devices execute specific code, and the network is separated into three planes: data, control, and management planes. We discuss the modern programmable network architectures, emphasizing their research issues, and, when possible, highlight their practical implementations. We survey the wireless and wired elements on the programmable data plane. Next, on the programmable control plane, we survey the divisor and controller elements. We conclude with final considerations, open issues and future challenges.", "title": "" }, { "docid": "6fd511ffcdb44c39ecad1a9f71a592cc", "text": "s Providing Supporting Policy Compositional Operators Functional Composition Network Layered Abstract Topologies Topological Decomposition Packet Extensible Headers Policy & Network Abstractions Pyretic (Contributions)", "title": "" } ]
[ { "docid": "5de5abcd01ec0bb9830ddcb98b5c41b2", "text": "Android has become the most popular mobile OS, as it enables device manufacturers to introduce customizations to compete with value-added services. However, customizations make the OS less dependable and secure, since they can introduce software flaws. Such flaws can be found by using fuzzing, a popular testing technique among security researchers.This paper presents Chizpurfle, a novel \"gray-box\" fuzzing tool for vendor-specific Android services. Testing these services is challenging for existing tools, since vendors do not provide source code and the services cannot be run on a device emulator. Chizpurfle has been designed to run on an unmodified Android OS on an actual device. The tool automatically discovers, fuzzes, and profiles proprietary services. This work evaluates the applicability and performance of Chizpurfle on the Samsung Galaxy S6 Edge, and discusses software bugs found in privileged vendor services.", "title": "" }, { "docid": "149595fcd31fd2ddbf7c6d48ca6339dc", "text": "What factors underlie the adoption dynamics of ecommerce technologies among users in developing countries? Even though the internet promised to be the great equalizer, the nuanced variety of conditions and contingencies that shape user adoption of ecommerce technologies has received little scrutiny. Building on previous research on technology adoption, the paper proposes a global information technology (IT) adoption model. The model includes antecedents of performance expectancy, social influence, and technology opportunism and investigates the crucial influence of facilitating conditions. The proposed model is tested using data from 172 technology users from 37 countries, collected over a 1-year period. The findings suggest that in developing countries, facilitating conditions play a critical moderating role in understanding actual ecommerce adoption, especially when in tandem with technological opportunism. Altogether, the paper offers a preliminary scrutiny of the mechanics of ecommerce adoption in developing countries.", "title": "" }, { "docid": "3207b44dcad92fcee13893b2f254428e", "text": "Remote Data Checking (RDC) is a technique by which clients can establish that data outsourced at untrusted servers remains intact over time. RDC is useful as a prevention tool, allowing clients to periodically check if data has been damaged, and as a repair tool whenever damage has been detected. Initially proposed in the context of a single server, RDC was later extended to verify data integrity in distributed storage systems that rely on replication and on erasure coding to store data redundantly at multiple servers. Recently, a technique was proposed to add redundancy based on network coding, which offers interesting tradeoffs because of its remarkably low communication overhead to repair corrupt servers.\n Unlike previous work on RDC which focused on minimizing the costs of the prevention phase, we take a holistic look and initiate the investigation of RDC schemes for distributed systems that rely on network coding to minimize the combined costs of both the prevention and repair phases. We propose RDC-NC, a novel secure and efficient RDC scheme for network coding-based distributed storage systems. RDC-NC mitigates new attacks that stem from the underlying principle of network coding. The scheme is able to preserve in an adversarial setting the minimal communication overhead of the repair component achieved by network coding in a benign setting. 
We implement our scheme and experimentally show that it is computationally inexpensive for both clients and servers.", "title": "" }, { "docid": "d434ef675b4d8242340f4d501fdbbae3", "text": "We study the problem of selecting a subset of k random variables to observe that will yield the best linear prediction of another variable of interest, given the pairwise correlations between the observation variables and the predictor variable. Under approximation preserving reductions, this problem is equivalent to the \"sparse approximation\" problem of approximating signals concisely. The subset selection problem is NP-hard in general; in this paper, we propose and analyze exact and approximation algorithms for several special cases of practical interest. Specifically, we give an FPTAS when the covariance matrix has constant bandwidth, and exact algorithms when the associated covariance graph, consisting of edges for pairs of variables with non-zero correlation, forms a tree or has a large (known) independent set. Furthermore, we give an exact algorithm when the variables can be embedded into a line such that the covariance decreases exponentially in the distance, and a constant-factor approximation when the variables have no \"conditional suppressor variables\". Much of our reasoning is based on perturbation results for the R2 multiple correlation measure, which is frequently used as a natural measure for \"goodness-of-fit statistics\". It lies at the core of our FPTAS, and also allows us to extend our exact algorithms to approximation algorithms when the matrix \"nearly\" falls into one of the above classes. We also use our perturbation analysis to prove approximation guarantees for the widely used \"Forward Regression\" heuristic under the assumption that the observation variables are nearly independent.", "title": "" }, { "docid": "d2b7ff4fc41610013b98a70fc32c8176", "text": "Memory networks are neural networks with an explicit memory component that can be both read and written to by the network. The memory is often addressed in a soft way using a softmax function, making end-to-end training with backpropagation possible. However, this is not computationally scalable for applications which require the network to read from extremely large memories. On the other hand, it is well known that hard attention mechanisms based on reinforcement learning are challenging to train successfully. In this paper, we explore a form of hierarchical memory network, which can be considered as a hybrid between hard and soft attention memory networks. The memory is organized in a hierarchical structure such that reading from it is done with less computation than soft attention over a flat memory, while also being easier to train than hard attention over a flat memory. Specifically, we propose to incorporate Maximum Inner Product Search (MIPS) in the training and inference procedures for our hierarchical memory network. We explore the use of various state-of-the art approximate MIPS techniques and report results on SimpleQuestions, a challenging large scale factoid question answering task.", "title": "" }, { "docid": "f638fa2d4e358f91a05fc5329d6058f0", "text": "We present a computational framework for Theory of Mind (ToM): the human ability to make joint inferences about the unobservable beliefs and preferences underlying the observed actions of other agents. 
These mental state attributions can be understood as Bayesian inferences in a probabilistic generative model for rational action, or planning under uncertain and incomplete information, formalized as a Partially Observable Markov Decision Problem (POMDP). That is, we posit that ToM inferences approximately reconstruct the combination of a reward function and belief state trajectory for an agent based on observing that agent’s action sequence in a given environment. We test this POMDP model by showing human subjects the trajectories of agents moving in simple spatial environments and asking for joint inferences about the agents’ utilities and beliefs about unobserved aspects of the environment. Our model performs substantially better than two simpler variants: one in which preferences are inferred without reference to an agents’ beliefs, and another in which beliefs are inferred without reference to the agent’s dynamic observations in the environment. We find that preference inferences are substantially more robust and consistent with our model’s predictions than are belief inferences, in line with classic work showing that the ability to infer goals is more concretely grounded in visual data, develops earlier in infancy, and can be localized to specific neurons in the primate brain.", "title": "" }, { "docid": "2f310c62ada7e2f7696b61a8ee0f74a3", "text": "[This paper is the third revised version (2013). It was originally presented in a philosophical conference in Athens, Greece on 6 June 2006, Athens Institute of Education and Research. It was first published as Chapter 28 in The philosophical landscape. Third edition. Edited by Rolando M. Gripaldo. Manila: Philippine National Philosophical Research Society, 2007. Other editions appeared in Philosophia: International Journal of Philosophy 36/8 (1): January 2007 and in The making of a Filipino philosopher and other essays. [A collection of Gripaldo’s essays.] Chapter 2. Mandaluyong City: National Book Store, 2009.]", "title": "" }, { "docid": "3cae61722ac1c1c06f31aa61fd73b2cd", "text": "AIM\nThere is an emerging body of evidence on the potential effects of regular physical activity on academic performance. The aim of this study was to add to the debate, by examining the association between objectively measured physical activity and academic performance in a relatively large sample of children and adolescents.\n\n\nMETHODS\nThe Spanish UP & DOWN study is a 3-year longitudinal study designed to assess the impact, overtime, of physical activity and sedentary behaviours on health indicators. This present analysis was conducted with 1778 children and adolescents aged 6-18 years. Physical activity was objectively measured by accelerometry. Academic performance was assessed using school grades.\n\n\nRESULTS\nPhysical activity was inversely associated with all academic performance indicators after adjustment for potential confounders, including neonatal variables, fatness and fitness (all p < 0.05). This association became nonsignificant among quartiles of physical activity. There were only slight differences in academic performance between the lowest and the second quartile of physical activity, compared to the highest quartile, with very small effect size (d < 0.20).\n\n\nCONCLUSION\nObjectively measured physical activity may influence academic performance during both childhood and adolescence, but this association was negative and very weak. 
Longitudinal and intervention studies are necessary to further our understanding.", "title": "" }, { "docid": "d7f878ed79899f72d5d7bf58a7dcaa40", "text": "We report in detail the decoding strategy that we used for the past two Darpa Rich Transcription evaluations (RT’03 and RT’04) which is based on finite state automata (FSA). We discuss the format of the static decoding graphs, the particulars of our Viterbi implementation, the lattice generation and the likelihood evaluation. This paper is intended to familiarize the reader with some of the design issues encountered when building an FSA decoder. Experimental results are given on the EARS database (English conversational telephone speech) with emphasis on our faster than real-time system.", "title": "" }, { "docid": "2d26560f6ae654a546db8f4463ed87be", "text": "Linked Data promises to serve as a disruptor of traditional approaches to data management and use, promoting the push from the traditional Web of documents to a Web of data. The ability for data consumers to adopt a follow your nose approach, traversing links defined within a dataset or across independently-curated datasets, is an essential feature of this new Web of Data, enabling richer knowledge retrieval thanks to synthesis across multiple sources of, and views on, inter-related datasets. But for the Web of Data to be successful, we must design novel ways of interacting with the corresponding very large amounts of complex, interlinked, multi-dimensional data throughout its management cycle. The design of user interfaces for Linked Data, and more specifically interfaces that represent the data visually, play a central role in this respect. Contributions to this special issue on Linked Data visualisation investigate different approaches to harnessing visualisation as a tool for exploratory discovery and basic-to-advanced analysis. The papers in this volume illustrate the design and construction of intuitive means for end-users to obtain new insight and gather more knowledge, as they follow links defined across datasets over the Web of Data.", "title": "" }, { "docid": "1e929868d6d36aa6399d6fa79bd98d7f", "text": "The Prolog programmer’s needs have always been the focus for guiding the development of the SWI-Prolog system. This article accompanies an invited talk about how the SWI-Prolog environment helps the Prolog programmer solve common problems. It describes the central parts of the graphical development environment as well as the command line tools which we see as vital to the success of the system. We hope this comprehensive overview of particularly useful features will both inspire other Prolog developers, and help SWI-Prolog users to make more productive use of the system.", "title": "" }, { "docid": "e4f648d12495a2d7615fe13c84f35bbe", "text": "We propose a simple modification to existing neural machine translation (NMT) models that enables using a single universal model to translate between multiple languages while allowing for language specific parameterization, and that can also be used for domain adaptation. Our approach requires no changes to the model architecture of a standard NMT system, but instead introduces a new component, the contextual parameter generator (CPG), that generates the parameters of the system (e.g., weights in a neural network). This parameter generator accepts source and target language embeddings as input, and generates the parameters for the encoder and the decoder, respectively. 
The rest of the model remains unchanged and is shared across all languages. We show how this simple modification enables the system to use monolingual data for training and also perform zero-shot translation. We further show it is able to surpass state-of-theart performance for both the IWSLT-15 and IWSLT-17 datasets and that the learned language embeddings are able to uncover interesting relationships between languages.", "title": "" }, { "docid": "bb9d60abf3b8d6e88d5079366b3a0f91", "text": "Dynamic network analysis (DNA) varies from traditional social network analysis in that it can handle large dynamic multi-mode, multi-link networks with varying levels of uncertainty. DNA, like quantum mechanics, would be a theory in which relations are probabilistic, the measurement of a node changes its properties, movement in one part of the system propagates through the system, and so on. However, unlike quantum mechanics, the nodes in the DNA, the atoms, can learn. An approach to DNA is described that builds DNA theory through the combined use of multi-agent modeling, machine learning, and meta-matrix approach to network representation. A set of candidate metric for describing the DNA are defined. Then, a model built using this approach is presented. Results concerning the evolution and destabilization of networks are described.", "title": "" }, { "docid": "83525470a770a036e9c7bb737dfe0535", "text": "It is known that the performance of the i-vectors/PLDA based speaker verification systems is affected in the cases of short utterances and limited training data. The performance degradation appears because the shorter the utterance, the less reliable the extracted i-vector is, and because the total variability covariance matrix and the underlying PLDA matrices need a significant amount of data to be robustly estimated. Considering the “MIT Mobile Device Speaker Verification Corpus” (MIT-MDSVC) as a representative dataset for robust speaker verification tasks on limited amount of training data, this paper investigates which configuration and which parameters lead to the best performance of an i-vectors/PLDA based speaker verification. The i-vectors/PLDA based system achieved good performance only when the total variability matrix and the underlying PLDA matrices were trained with data belonging to the enrolled speakers. This way of training means that the system should be fully retrained when new enrolled speakers were added. The performance of the system was more sensitive to the amount of training data of the underlying PLDA matrices than to the amount of training data of the total variability matrix. Overall, the Equal Error Rate performance of the i-vectors/PLDA based system was around 1% below the performance of a GMM-UBM system on the chosen dataset. The paper presents at the end some preliminary experiments in which the utterances comprised in the CSTR VCTK corpus were used besides utterances from MIT-MDSVC for training the total variability covariance matrix and the underlying PLDA matrices.", "title": "" }, { "docid": "3d3f2c536a397007338572a17da80b7b", "text": "Traffic engineering is an important mechanism for Internet network providers seeking to optimize network performance and traffic delivery. Routing optimization plays a key role in traffic engineering, finding efficient routes so as to achieve the desired network performance. In this survey we review Internet traffic engineering from the perspective of routing optimization. 
A taxonomy of routing algorithms in the literature is provided, dating from the advent of the TE concept in the late 1990s. We classify the algorithms into multiple dimensions: unicast/multicast, intra-/inter- domain, IP-/MPLS-based and offline/online TE schemes. In addition, we investigate some important traffic engineering issues, including robustness, TE interactions, and interoperability with overlay selfish routing. In addition to a review of existing solutions, we also point out some challenges in TE operation and important issues that are worthy of investigation in future research activities.", "title": "" }, { "docid": "16c205cd85d33eed145724bc6b015ba1", "text": "Telematics data is becoming increasingly available due to the ubiquity of devices that collect data during drives, for different purposes, such as usage based insurance (UBI), fleet management, navigation of connected vehicles, etc. Consequently, a variety of data-analytic applications have become feasible that extract valuable insights from the data. In this paper, we address the especially challenging problem of discovering behavior-based driving patterns from only externally observable phenomena (e.g. vehicle's speed). We present a trajectory segmentation approach capable of discovering driving patterns as separate segments, based on the behavior of drivers. This segmentation approach includes a novel transformation of trajectories along with a dynamic programming approach for segmentation. We apply the segmentation approach on a real-word, rich dataset of personal car trajectories provided by a major insurance company based in Columbus, Ohio. Analysis and preliminary results show the applicability of approach for finding significant driving patterns.", "title": "" }, { "docid": "2d686ca335954077a59f77898de9a333", "text": "Information and communication technologies (ICT) can be instrumental in progressing towards smarter city environments, which improve city services, sustainability, and citizens’ quality of life. Smart City software platforms can support the development and integration of Smart City applications. However, the ICT community must overcome current technological and scientific challenges before these platforms can be widely adopted. This article surveys the state of the art in software platforms for Smart Cities. We analyzed 23 projects concerning the most used enabling technologies, as well as functional and non-functional requirements, classifying them into four categories: Cyber-Physical Systems, Internet of Things, Big Data, and Cloud Computing. Based on these results, we derived a reference architecture to guide the development of next-generation software platforms for Smart Cities. Finally, we enumerated the most frequently cited open research challenges and discussed future opportunities. This survey provides important references to help application developers, city managers, system operators, end-users, and Smart City researchers make project, investment, and research decisions.", "title": "" }, { "docid": "095f8d5c3191d6b70b2647b562887aeb", "text": "Hardware specialization, in the form of datapath and control circuitry customized to particular algorithms or applications, promises impressive performance and energy advantages compared to traditional architectures. Current research in accelerators relies on RTL-based synthesis flows to produce accurate timing, power, and area estimates. 
Such techniques not only require significant effort and expertise but also are slow and tedious to use, making large design space exploration infeasible. To overcome this problem, the authors developed Aladdin, a pre-RTL, power-performance accelerator modeling framework and demonstrated its application to system-on-chip (SoC) simulation. Aladdin estimates performance, power, and area of accelerators within 0.9, 4.9, and 6.6 percent with respect to RTL implementations. Integrated with architecture-level general-purpose core and memory hierarchy simulators, Aladdin provides researchers with a fast but accurate way to model the power and performance of accelerators in an SoC environment.", "title": "" }, { "docid": "43f1cc712b3803ef7ac8273136dbe75d", "text": "Improved understanding of the anatomy and physiology of the aging face has laid the foundation for adopting an earlier and more comprehensive approach to facial rejuvenation, shifting the focus from individual wrinkle treatment and lift procedures to a holistic paradigm that considers the entire face and its structural framework. This article presents an overview of a comprehensive method to address facial aging. The key components to the reported strategy for improving facial cosmesis include, in addition to augmentation of volume loss, protection with sunscreens and antioxidants; promotion of epidermal cell turnover with techniques such as superficial chemical peels; microlaser peels and microdermabrasion; collagen stimulation and remodeling via light, ultrasound, or radiofrequency (RF)-based methods; and muscle control with botulinum toxin. For the treatment of wrinkles and for the augmentation of pan-facial dermal lipoatrophy, several types of fillers and volumizers including hyaluronic acid (HA), autologous fat, and calcium hydroxylapatite (CaHA) or injectable poly-l-lactic acid (PLLA) are available. A novel bimodal, trivector technique to restore structural facial volume loss that combines supraperiosteal depot injections of volume-depleted fat pads and dermal/subcutaneous injections for panfacial lipoatrophy with PLLA is presented. The combination of treatments with fillers; toxins; light-, sound-, and RF-based technologies; and surgical procedures may help to forestall the facial aging process and provide more natural results than are possible with any of these techniques alone. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266 .", "title": "" } ]
scidocsrr
609357fc447a1fc0f169b57c5a91e5ba
Tests of Cognitive Ability: The Sustained Attention to Response Task (SART)
[ { "docid": "8feb5dce809acf0efb63d322f0526fcf", "text": "Recent studies of eye movements in reading and other information processing tasks, such as music reading, typing, visual search, and scene perception, are reviewed. The major emphasis of the review is on reading as a specific example of cognitive processing. Basic topics discussed with respect to reading are (a) the characteristics of eye movements, (b) the perceptual span, (c) integration of information across saccades, (d) eye movement control, and (e) individual differences (including dyslexia). Similar topics are discussed with respect to the other tasks examined. The basic theme of the review is that eye movement data reflect moment-to-moment cognitive processes in the various tasks examined. Theoretical and practical considerations concerning the use of eye movement data are also discussed.", "title": "" } ]
[ { "docid": "eebd97c5499c4cd9efa4dbc8bf1bfab0", "text": "This paper proposes an Energy Management System for the optimal operation of Smart Grids and Microgrids, using Fully Connected Neuron Networks combined with Optimal Power Flow. An adaptive training algorithm based on Genetic Algorithms, Fuzzy Clustering and Neuron-by-Neuron Algorithms is used for generating new clusters and new neural networks. The proposed approach, integrating Demand Side Management and Active Management Schemes, allows significant enhancements in energy saving, customers' active participation in the open market and exploitation of renewable energy resources. The effectiveness of the proposed Energy Management System and adaptive training algorithm is verified on a 23-bus 11 kV microgrid.", "title": "" }, { "docid": "cc9a7866ac04788badc9e8e8b043491d", "text": "Based on the understanding of flicker noise generation in “silicon metal-oxide semiconductor field-effect transistors” (MOSFETs), a novel method for improving the phase noise performance of a CMOS LC oscillator is presented. Zhou et al. and Hoogee have suggested that the 1 noise can be reduced through a switched gate, and the flicker noise generated is inversely proportional to the gate switching frequency. The novel tail transistor topology is compared to the two popular tail transistor topologies, namely, the fixed biasing tail transistor and without tail transistor. Through this technique, a figure of merit of 193 dB is achieved using a fully integrated CMOS oscillator with a tank quality factor of about 9.", "title": "" }, { "docid": "f3c1f5a799ec231be31a28aeb57f3c11", "text": "With the increasing role of computational modeling in engineering design, performance estimation, and safety assessment, improved methods are needed for comparing computational results and experimental measurements. Traditional methods of graphically comparing computational and experimental results, though valuable, are essentially qualitative. Computable measures are needed that can quantitatively compare computational and experimental results over a range of input, or control, variables to sharpen assessment of computational accuracy. This type of measure has been recently referred to as a validation metric. We discuss various features that we believe should be incorporated in a validation metric, as well as features that we believe should be excluded. We develop a new validation metric that is based on the statistical concept of confidence intervals. Using this fundamental concept, we construct two specific metrics: one that requires interpolation of experimental data and one that requires regression (curve fitting) of experimental data. We apply the metrics to three example problems: thermal decomposition of a polyurethane foam, a turbulent buoyant plume of helium, and compressibility effects on the growth rate of a turbulent free-shear layer. We discuss how the present metrics are easily interpretable for assessing computational model accuracy, as well as the impact of experimental measurement uncertainty on the accuracy assessment. Published by Elsevier Inc.", "title": "" }, { "docid": "1749bfd76f18ced4a987c09013108cbf", "text": "The mm-Wave bands defined as the new radio in the fifth generation (5G) mobile networks would decrease the dimension of the antenna into the scale of package level. In this study, a patch antenna array with stacked patches was designed for a wider operation frequency band than a typical patch. 
By considering a better electrical performance of the antenna in package (AiP), an unbalanced substrate of 4-layer metal stack-up within the processing capacity is proposed in this paper. The proposed unbalanced substrate structure is more elegant than the conventional substrate structure because of fewer substrate layers. The electrical and dimensional data are collected and analyzed. The designed patch antenna in this paper shows good correlations between simulations and measurements. The measured results show that the 1×4 patch array achieves a bandwidth of about 15.4 % with -10 dB return loss and gain of 10.8 dBi.", "title": "" }, { "docid": "85678fca24cfa94efcc36570b3f1ef62", "text": "Content-based recommender systems use preference ratings and features that characterize media to model users' interests or information needs for making future recommendations. While previously developed in the music and text domains, we present an initial exploration of content-based recommendation for spoken documents using a corpus of public domain internet audio. Unlike familiar speech technologies of topic identification and spoken document retrieval, our recommendation task requires a more comprehensive notion of document relevance than bags-of-words would supply. Inspired by music recommender systems, we automatically extract a wide variety of content-based features to characterize non-linguistic aspects of the audio such as speaker, language, gender, and environment. To combine these heterogeneous information sources into a single relevance judgement, we evaluate feature, score, and hybrid fusion techniques. Our study provides an essential first exploration of the task and clearly demonstrates the value of a multisource approach over a bag-of-words baseline.", "title": "" }, { "docid": "8f5ca16c82dfdb7d551fdf203c9ebf7a", "text": "We analyze a few of the commonly used statistics based and machine learning algorithms for natural language disambiguation tasks and observe that they can bc recast as learning linear separators in the feature space. Each of the methods makes a priori assumptions, which it employs, given the data, when searching for its hypothesis. Nevertheless, as we show, it searches a space that is as rich as the space of all linear separators. We use this to build an argument for a data driven approach which merely searches for a good linear separator in the feature space, without further assumptions on the domain or a specific problem. We present such an approach a sparse network of linear separators, utilizing the Winnow learning aigorlthrn and show how to use it in a variety of ambiguity resolution problems. The learning approach presented is attribute-efficient and, therefore, appropriate for domains having very large number of attributes. In particular, we present an extensive experimental comparison of our approach with other methods on several well studied lexical disambiguation tasks such as context-sensltlve spelling correction, prepositional phrase attachment and part of speech tagging. In all cases we show that our approach either outperforms other methods tried for these tasks or performs comparably to the best.", "title": "" }, { "docid": "c3cb261d9dc6b92a6e69e4be7ec44978", "text": "An increasing number of studies in political communication focus on the “sentiment” or “tone” of news content, political speeches, or advertisements. This growing interest in measuring sentiment coincides with a dramatic increase in the volume of digitized information. 
Computer automation has a great deal of potential in this new media environment. The objective here is to outline and validate a new automated measurement instrument for sentiment analysis in political texts. Our instrument uses a dictionary-based approach consisting of a simple word count of the frequency of keywords in a text from a predefined dictionary. The design of the freely available Lexicoder Sentiment Dictionary (LSD) is discussed in detail here. The dictionary is tested against a body of human-coded news content, and the resulting codes are also compared to results from nine existing content-analytic dictionaries. Analyses suggest that the LSD produces results that are more systematically related to human coding than are results based on the other available dictionaries. The LSD is thus a useful starting point for a revived discussion about dictionary construction and validation in sentiment analysis for political communication.", "title": "" }, { "docid": "0a3feaa346f4fd6bfc0bbda6ba92efc6", "text": "We present Magic Finger, a small device worn on the fingertip, which supports always-available input. Magic Finger inverts the typical relationship between the finger and an interactive surface: with Magic Finger, we instrument the user's finger itself, rather than the surface it is touching. Magic Finger senses touch through an optical mouse sensor, enabling any surface to act as a touch screen. Magic Finger also senses texture through a micro RGB camera, allowing contextual actions to be carried out based on the particular surface being touched. A technical evaluation shows that Magic Finger can accurately sense 22 textures with an accuracy of 98.9%. We explore the interaction design space enabled by Magic Finger, and implement a number of novel interaction techniques that leverage its unique capabilities.", "title": "" }, { "docid": "845ee0b77e30a01d87e836c6a84b7d66", "text": "This paper proposes an efficient and effective scheme to applying the sliding window approach popular in computer vision to 3D data. Specifically, the sparse nature of the problem is exploited via a voting scheme to enable a search through all putative object locations at any orientation. We prove that this voting scheme is mathematically equivalent to a convolution on a sparse feature grid and thus enables the processing, in full 3D, of any point cloud irrespective of the number of vantage points required to construct it. As such it is versatile enough to operate on data from popular 3D laser scanners such as a Velodyne as well as on 3D data obtained from increasingly popular push-broom configurations. Our approach is “embarrassingly parallelisable” and capable of processing a point cloud containing over 100K points at eight orientations in less than 0.5s. For the object classes car, pedestrian and bicyclist the resulting detector achieves best-in-class detection and timing performance relative to prior art on the KITTI dataset as well as compared to another existing 3D object detection approach.", "title": "" }, { "docid": "21f56bb6edbef3448275a0925bd54b3a", "text": "Dr. Stephanie L. Cincotta (Psychiatry): A 35-year-old woman was seen in the emergency department of this hospital because of a pruritic rash. The patient had a history of hepatitis C virus (HCV) infection, acne, depression, and drug dependency. She had been in her usual health until 2 weeks before this presentation, when insomnia developed, which she attributed to her loss of a prescription for zolpidem. 
During the 10 days before this presentation, she reported seeing white “granular balls,” which she thought were mites or larvae, emerging from and crawling on her skin, sheets, and clothing and in her feces, apartment, and car, as well as having an associated pruritic rash. She was seen by her physician, who referred her to a dermatologist for consideration of other possible causes of the persistent rash, such as porphyria cutanea tarda, which is associated with HCV infection. Three days before this presentation, the patient ran out of clonazepam (after an undefined period during which she reportedly took more than the prescribed dose) and had increasing anxiety and insomnia. The same day, she reported seeing “bugs” on her 15-month-old son that were emerging from his scalp and were present on his skin and in his diaper and sputum. The patient scratched her skin and her child’s skin to remove the offending agents. The day before this presentation, she called emergency medical services and she and her child were transported by ambulance to the emergency department of another hospital. A diagnosis of possible cheyletiellosis was made. She was advised to use selenium sulfide shampoo and to follow up with her physician; the patient returned home with her child. On the morning of admission, while bathing her child, she noted that his scalp was turning red and he was crying. She came with her son to the emergency department of this hospital. The patient reported the presence of bugs on her skin, which she attempted to point out to examiners. She acknowledged a habit of picking at her skin since adolescence, which she said had a calming effect. Fourteen months earlier, shortly after the birth of her son, worsening acne developed that did not respond to treatment with topical antimicrobial agents and tretinoin. Four months later, a facial abscess due From the Departments of Psychiatry (S.R.B., N.K.) and Dermatology (D.K.), Massachusetts General Hospital, and the Departments of Psychiatry (S.R.B., N.K.) and Dermatology (D.K.), Harvard Medi‐ cal School — both in Boston.", "title": "" }, { "docid": "d388a5fc232f952435de21331379a0fa", "text": "This article provides a selective review of advances in scientific knowledge about autism spectrum disorder (ASD), using DSM-5 (Diagnostic and Statistical Manual of Mental Disorders, fifth edition) diagnostic criteria as a framework for the discussion. We review literature that prompted changes to the organization of ASD symptoms and diagnostic subtypes in DSM-IV, and we examine the rationale for new DSM-5 specifiers, modifiers, and severity ratings as well as the introduction of the diagnosis of social (pragmatic) communication disorder. Our goal is to summarize and critically consider the contribution of clinical psychology research, along with that of other disciplines, to the current conceptualization of ASD.", "title": "" }, { "docid": "a64a83791259350d5d76dc1ea097a7fb", "text": "Today the channels for expressing opinions seem to increase daily. When these opinions are relevant to a company, they are important sources of business insight, whether they represent critical intelligence about a customer's defection risk, the impact of an influential reviewer on other people's purchase decisions, or early feedback on product releases, company news or competitors. Capturing and analyzing these opinions is a necessity for proactive product planning, marketing and customer service and it is also critical in maintaining brand integrity. 
The importance of harnessing opinion is growing as consumers use technologies such as Twitter to express their views directly to other consumers. Tracking the disparate sources of opinion is hard - but even harder is quickly and accurately extracting the meaning so companies can analyze and act. Tweets' Language is complicated and contextual, especially when people are expressing opinions and requires reliable sentiment analysis based on parsing many linguistic shades of gray. This article argues that using the R programming platform for analyzing tweets programmatically simplifies the task of sentiment analysis and opinion mining. An R programming technique has been used for testing different sentiment lexicons as well as different scoring schemes. Experiments on analyzing the tweets of users over six NHL hockey teams reveals the effectively of using the opinion lexicon and the Latent Dirichlet Allocation (LDA) scoring scheme.", "title": "" }, { "docid": "4b7a885d463022a1792d99ff0c76be72", "text": "Emerging applications in sensor systems and network-wide IP traffic analysis present many technical challenges. They need distributed monitoring and continuous tracking of events. They have severe resource constraints not only at each site in terms of per-update processing time and archival space for highspeed streams of observations, but also crucially, communication constraints for collaborating on the monitoring task. These elements have been addressed in a series of recent works. A fundamental issue that arises is that one cannot make the \"uniqueness\" assumption on observed events which is present in previous works, since widescale monitoring invariably encounters the same events at different points. For example, within the network of an Internet Service Provider packets of the same flow will be observed in different routers; similarly, the same individual will be observed by multiple mobile sensors in monitoring wild animals. Aggregates of interest on such distributed environments must be resilient to duplicate observations. We study such duplicate-resilient aggregates that measure the extent of the duplication―how many unique observations are there, how many observations are unique―as well as standard holistic aggregates such as quantiles and heavy hitters over the unique items. We present accuracy guaranteed, highly communication-efficient algorithms for these aggregates that work within the time and space constraints of high speed streams. We also present results of a detailed experimental study on both real-life and synthetic data.", "title": "" }, { "docid": "513455013ecb2f4368566ba30cdb8d7f", "text": "Many modern multi-core processors sport a large shared cache with the primary goal of enhancing the statistic performance of computing workloads. However, due to resulting cache interference among tasks, the uncontrolled use of such a shared cache can significantly hamper the predictability and analyzability of multi-core real-time systems. Software cache partitioning has been considered as an attractive approach to address this issue because it does not require any hardware support beyond that available on many modern processors. However, the state-of-the-art software cache partitioning techniques face two challenges: (1) the memory co-partitioning problem, which results in page swapping or waste of memory, and (2) the availability of a limited number of cache partitions, which causes degraded performance. 
These are major impediments to the practical adoption of software cache partitioning. In this paper, we propose a practical OS-level cache management scheme for multi-core real-time systems. Our scheme provides predictable cache performance, addresses the aforementioned problems of existing software cache partitioning, and efficiently allocates cache partitions to schedule a given task set. We have implemented and evaluated our scheme in Linux/RK running on the Intel Core i7 quad-core processor. Experimental results indicate that, compared to the traditional approaches, our scheme is up to 39% more memory space efficient and consumes up to 25% less cache partitions while maintaining cache predictability. Our scheme also yields a significant utilization benefit that increases with the number of tasks.", "title": "" }, { "docid": "51ac4581fa82be87a28f7c080e026ae6", "text": "III", "title": "" }, { "docid": "b4b55c02185c93e49e48944c64094e27", "text": "This paper focuses on causal structure estimation from time series data in which measurements are obtained at a coarser timescale than the causal timescale of the underlying system. Previous work has shown that such subsampling can lead to significant errors about the system's causal structure if not properly taken into account. In this paper, we first consider the search for the system timescale causal structures that correspond to a given measurement timescale structure. We provide a constraint satisfaction procedure whose computational performance is several orders of magnitude better than previous approaches. We then consider finite-sample data as input, and propose the first constraint optimization approach for recovering the system timescale causal structure. This algorithm optimally recovers from possible conflicts due to statistical errors. More generally, these advances allow for a robust and non-parametric estimation of system timescale causal structures from subsampled time series data.", "title": "" }, { "docid": "37a574d4d969fc681c93508bd14cc904", "text": "A new low offset dynamic comparator for high resolution high speed analog-to-digital application has been designed. Inputs are reconfigured from the typical differential pair comparator such that near equal current distribution in the input transistors can be achieved for a meta-stable point of the comparator. Restricted signal swing clock for the tail current is also used to ensure constant currents in the differential pairs. Simulation based sensitivity analysis is performed to demonstrate the robustness of the new comparator with respect to stray capacitances, common mode voltage errors and timing errors in a TSMC 0.18mu process. Less than 10mV offset can be easily achieved with the proposed structure making it favorable for flash and pipeline data conversion applications", "title": "" }, { "docid": "4d502d1fbcdc5ea30bf54b43daa33352", "text": "This paper investigates linearity enhancements in GaN based Doherty power amplifiers (DPA) with the implementation of forward gate current blocking. Using a simple p-n diode to limit gate current, both open loop and digitally pre-distorted (DPD) linearity for wideband, high peak to average ratio modulated signals, such as LTE, are improved. Forward gate current blocking (FCB) is compatible with normally-on III-V HEMT technology where positive gate current is observed which results in nonlinear operation of RF transistor. By blocking positive gate current, waveform clipping is mitigated at the device gate node. 
Consequently, through dynamic biasing, the effective gate bias at the transistor input is adjusted limiting the RF input signal peaks entering the non-linear regime of the gate Schottky diode inherent to GaN devices. The proposed technique demonstrates more than a 3 dBc improvement in DPD corrected linearity in adjacent channels when four 20 MHz LTE carriers are applied.", "title": "" }, { "docid": "2549ed70fd2e06c749bf00193dad1f4d", "text": "Phenylketonuria (PKU) is an inborn error of metabolism caused by deficiency of the hepatic enzyme phenylalanine hydroxylase (PAH) which leads to high blood phenylalanine (Phe) levels and consequent damage of the developing brain with severe mental retardation if left untreated in early infancy. The current dietary Phe restriction treatment has certain clinical limitations. To explore a long-term nondietary restriction treatment, a somatic gene transfer approach in a PKU mouse model (C57Bl/6-Pahenu2) was employed to examine its preclinical feasibility. A recombinant adeno-associated virus (rAAV) vector containing the murine Pah-cDNA was generated, pseudotyped with capsids from AAV serotype 8, and delivered into the liver of PKU mice via single intraportal or tail vein injections. The blood Phe concentrations decreased to normal levels (⩽100 μM or 1.7 mg/dl) 2 weeks after vector application, independent of the sex of the PKU animals and the route of application. In particular, the therapeutic long-term correction in females was also dramatic, which had previously been shown to be difficult to achieve. Therapeutic ranges of Phe were accompanied by the phenotypic reversion from brown to black hair. In treated mice, PAH enzyme activity in whole liver extracts reversed to normal and neither hepatic toxicity nor immunogenicity was observed. In contrast, a lentiviral vector expressing the murine Pah-cDNA, delivered via intraportal vein injection into PKU mice, did not result in therapeutic levels of blood Phe. This study demonstrates the complete correction of hyperphenylalaninemia in both males and females with a rAAV serotype 8 vector. More importantly, the feasibility of a single intravenous injection may pave the way to develop a clinical gene therapy procedure for PKU patients.", "title": "" }, { "docid": "dd9fa480ec5fb7241a161608c20896aa", "text": "A new image appearance model, designated iCAM06, was developed for High-Dynamic-Range (HDR) image rendering. The model, based on the iCAM framework, incorporates the spatial processing models in the human visual system for contrast enhancement, photoreceptor light adaptation functions that enhance local details in highlights and shadows, and functions that predict a wide range of color appearance phenomena. Evaluation of the model proved iCAM06 to have consistently good HDR rendering performance in both preference and accuracy making iCAM06 a good candidate for a general-purpose tone-mapping operator with further potential applications to a wide-range of image appearance research and practice. 2007 Elsevier Inc. All rights reserved.", "title": "" } ]
scidocsrr
61f46ae2b50e769f990ea12eefb376a3
Darknet as a Source of Cyber Intelligence: Survey, Taxonomy, and Characterization
[ { "docid": "b46885c79ece056211faeaa23cbb5c20", "text": "We have been developing the Network Incident analysis Center for Tactical Emergency Response (nicter), whose objective is to detect and identify propagating malwares. The nicter mainly monitors darknet, a set of unused IP addresses, to observe global trends of network threats, while it captures and analyzes malware executables. By correlating the network threats with analysis results of malware, the nicter identifies the root causes (malwares) of the detected network threats. Through a long-term operation of the nicter for more than five years, we have achieved some key findings that would help us to understand the intentions of attackers and the comprehensive threat landscape of the Internet. With a focus on a well-knwon malware, i. e., W32.Downadup, this paper provides some practical case studies with considerations and consequently we could obtain a threat landscape that more than 60% of attacking hosts observed in our dark-net could be infected by W32.Downadup. As an evaluation, we confirmed that the result of the correlation analysis was correct in a rate of 86.18%.", "title": "" }, { "docid": "7c7bec32e3949f3a6c0e1109cacd80f5", "text": "Attackers can render distributed denial-of-service attacks more difficult to defend against by bouncing their flooding traffic off of reflectors; that is, by spoofing requests from the victim to a large set of Internet servers that will in turn send their combined replies to the victim. The resulting dilution of locality in the flooding stream complicates the victim's abilities both to isolate the attack traffic in order to block it, and to use traceback techniques for locating the source of streams of packets with spoofed source addresses, such as ITRACE [Be00a], probabilistic packet marking [SWKA00], [SP01], and SPIE [S+01]. We discuss a number of possible defenses against reflector attacks, finding that most prove impractical, and then assess the degree to which different forms of reflector traffic will have characteristic signatures that the victim can use to identify and filter out the attack traffic. Our analysis indicates that three types of reflectors pose particularly significant threats: DNS and Gnutella servers, and TCP-based servers (particularly Web servers) running on TCP implementations that suffer from predictable initial sequence numbers. We argue in conclusion in support of \"reverse ITRACE\" [Ba00] and for the utility of packet traceback techniques that work even for low volume flows, such as SPIE.", "title": "" } ]
[ { "docid": "e9e7cb42ed686ace9e9785fafd3c72f8", "text": "We present a fully automated multimodal medical image matching technique. Our method extends the concepts used in the computer vision SIFT technique for extracting and matching distinctive scale invariant features in 2D scalar images to scalar images of arbitrary dimensionality. This extension involves using hyperspherical coordinates for gradients and multidimensional histograms to create the feature vectors. These features were successfully applied to determine accurate feature point correspondence between pairs of medical images (3D) and dynamic volumetric data (3D+time).", "title": "" }, { "docid": "e603d2a71580691cf6a61f0e892127cc", "text": "Advances in tourism economics have enabled us to collect massive amounts of travel tour data. If properly analyzed, this data can be a source of rich intelligence for providing real-time decision making and for the provision of travel tour recommendations. However, tour recommendation is quite different from traditional recommendations, because the tourist's choice is directly affected by the travel cost, which includes the financial cost and the time. To that end, in this paper, we provide a focused study of cost-aware tour recommendation. Along this line, we develop two cost-aware latent factor models to recommend travel packages by considering both the travel cost and the tourist's interests. Specifically, we first design a cPMF model, which models the tourist's cost with a 2-dimensional vector. Also, in this cPMF model, the tourist's interests and the travel cost are learnt by exploring travel tour data. Furthermore, in order to model the uncertainty in the travel cost, we further introduce a Gaussian prior into the cPMF model and develop the GcPMF model, where the Gaussian prior is used to express the uncertainty of the travel cost. Finally, experiments on real-world travel tour data show that the cost-aware recommendation models outperform state-of-the-art latent factor models with a significant margin. Also, the GcPMF model with the Gaussian prior can better capture the impact of the uncertainty of the travel cost, and thus performs better than the cPMF model.", "title": "" }, { "docid": "3466b63bb3c1fc1f8735ee94e2a644a0", "text": "Sentence simplification reduces semantic complexity to benefit people with language impairments. Previous simplification studies on the sentence level and word level have achieved promising results but also meet great challenges. For sentencelevel studies, sentences after simplification are fluent but sometimes are not really simplified. For word-level studies, words are simplified but also have potential grammar errors due to different usages of words before and after simplification. In this paper, we propose a two-step simplification framework by combining both the word-level and the sentence-level simplifications, making use of their corresponding advantages. Based on the twostep framework, we implement a novel constrained neural generation model to simplify sentences given simplified words. 
The final results on Wikipedia and Simple Wikipedia aligned datasets indicate that our method yields better performance than various baselines.", "title": "" }, { "docid": "6aa9c8e07665bfdbc75cf34e203c7dae", "text": "Article history: Received 24 September 2009 Received in revised form 1 October 2009 Accepted 1 October 2009 Available online 17 November 2009 The advent of social computing on the Web has led to a new generation of Web applications that are powerful and world-changing. However, we argue that we are just at the beginning of this age of “social machines” and that their continued evolution and growth requires the cooperation of Web and AI researchers. In this paper, we show how the growing Semantic Web provides necessary support for these technologies, outline the challenges we see in bringing the technology to the next level, and propose some starting places for the research. © 2009 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "58cd097f54320125f728dcc5f3ce9099", "text": "Modern manufacturing systems are facing a globally competitive market to an extent not experienced before. This competitive pressure forces manufacturers to produce more products with shorter life span and better quality, yet at a lower cost. To succeed in this environment, manufacturing firms need to have an accurate estimate of product design and development costs. This is especially important since the shorter life span of products accentuates design and development", "title": "" }, { "docid": "09062173db6b5f5190ab7c8f7f6ce6fd", "text": "This paper presents component techniques essential for converting executables to a high-level intermediate representation (IR) of an existing compiler. The compiler IR is then employed for three distinct applications: binary rewriting using the compiler's binary back-end, vulnerability detection using source-level symbolic execution, and source-code recovery using the compiler's C backend. Our techniques enable complex high-level transformations not possible in existing binary systems, address a major challenge of input-derived memory addresses in symbolic execution and are the first to enable recovery of a fully functional source-code.\n We present techniques to segment the flat address space in an executable containing undifferentiated blocks of memory. We demonstrate the inadequacy of existing variable identification methods for their promotion to symbols and present our methods for symbol promotion. We also present methods to convert the physically addressed stack in an executable (with a stack pointer) to an abstract stack (without a stack pointer). Our methods do not use symbolic, relocation, or debug information since these are usually absent in deployed executables.\n We have integrated our techniques with a prototype x86 binary framework called SecondWrite that uses LLVM as IR. The robustness of the framework is demonstrated by handling executables totaling more than a million lines of source-code, produced by two different compilers (gcc and Microsoft Visual Studio compiler), three languages (C, C++, and Fortran), two operating systems (Windows and Linux) and a real world program (Apache server).", "title": "" }, { "docid": "1c94dec13517bedf7a8140e207e0a6d9", "text": "Art and anatomy were particularly closely intertwined during the Renaissance period and numerous painters and sculptors expressed themselves in both fields. 
Among them was Michelangelo Buonarroti (1475-1564), who is renowned for having produced some of the most famous of all works of art, the frescoes on the ceiling and on the wall behind the altar of the Sistine Chapel in Rome. Recently, a unique association was discovered between one of Michelangelo's most celebrated works (The Creation of Adam fresco) and the Divine Proportion/Golden Ratio (GR) (1.6). The GR can be found not only in natural phenomena but also in a variety of human-made objects and works of art. Here, using Image-Pro Plus 6.0 software, we present mathematical evidence that Michelangelo also used the GR when he painted Saint Bartholomew in the fresco of The Last Judgment, which is on the wall behind the altar. This discovery will add a new dimension to understanding the great works of Michelangelo Buonarroti.", "title": "" }, { "docid": "df10984391cfb52e8ece9ae3766754c1", "text": "A major challenge that arises in Weakly Supervised Object Detection (WSOD) is that only image-level labels are available, whereas WSOD trains instance-level object detectors. A typical approach to WSOD is to 1) generate a series of region proposals for each image and assign the image-level label to all the proposals in that image; 2) train a classifier using all the proposals; and 3) use the classifier to select proposals with high confidence scores as the positive instances for another round of training. In this way, the image-level labels are iteratively transferred to instance-level labels.\n We aim to resolve the following two fundamental problems within this paradigm. First, existing proposal generation algorithms are not yet robust, thus the object proposals are often inaccurate. Second, the selected positive instances are sometimes noisy and unreliable, which hinders the training at subsequent iterations. We adopt two separate neural networks, one to focus on each problem, to better utilize the specific characteristic of region proposal refinement and positive instance selection. Further, to leverage the mutual benefits of the two tasks, the two neural networks are jointly trained and reinforced iteratively in a progressive manner, starting with easy and reliable instances and then gradually incorporating difficult ones at a later stage when the selection classifier is more robust. Extensive experiments on the PASCAL VOC dataset show that our method achieves state-of-the-art performance.", "title": "" }, { "docid": "b7dde96d6afeff55f35e655bbee8dfa3", "text": "With increasing age, dogs develop a form of neurodegenerative disease which has many similarities to age related cognitive impairment and Alzheimer's disease in humans. A decline in learning and memory can be demonstrated in dogs beginning as young as 7 years of age using a variety of neuropsychological tests. However, clinical cases of cognitive dysfunction syndrome are seldom identified until the age of 11 years or older. This is likely due to the fact that the owners are relying on clinical observations such as house-soiling, sleep-wake cycles and disorientation, rather than tests of learning and memory. On the other hand, dogs that are trained to more exacting tasks such as guide dogs for the visually impaired, or bomb detection and agility trained dogs might be noticed to have a decline in performance at a much earlier age. 
Through the use of standardized neuropsychological testing protocols, a number of drugs, natural products and supplement formulations have been developed for use in dogs with cognitive dysfunction and, in some cases clinical trials have validated their efficacy. Furthermore, the testing of products currently licensed and in the pipeline for the treatment of cognitive decline and Alzheimer's in humans, may provide additional therapeutic agents for the treatment of senior dogs, as well as provide insight as to the potential for the efficacy of these compounds in humans. This review will examine those products that are now marketed along with some that might be considered for use in senior dogs with cognitive dysfunction as well as the research that has been used to validate the efficacy (or lack thereof) of these compounds.", "title": "" }, { "docid": "8e92ade2f4096cbfabd51e018138c2f6", "text": "Recent results by Martin et al. (2014) showed in 3D SPH simulations that tilted discs in binary systems can be unstable to the development of global, damped Kozai–Lidov (KL) oscillations in which the discs exchange tilt for eccentricity. We investigate the linear stability of KL modes for tilted inviscid discs under the approximations that the disc eccentricity is small and the disc remains flat. By using 1D equations, we are able to probe regimes of large ratios of outer to inner disc edge radii that are realistic for binary systems of hundreds of AU separations and are not easily probed by multidimensional simulations. For order unity binary mass ratios, KL instability is possible for a window of disc aspect ratios H/r in the outer parts of a disc that roughly scale as (nb/n) 2 < ∼ H/r< ∼ nb/n, for binary orbital frequency nb and orbital frequency n at the disc outer edge. We present a framework for understanding the zones of instability based on the determination of branches of marginally unstable modes. In general, multiple growing eccentric KL modes can be present in a disc. Coplanar apsidal-nodal precession resonances delineate instability branches. We determine the range of tilt angles for unstable modes as a function of disc aspect ratio. Unlike the KL instability for free particles that involves a critical (minimum) tilt angle, disc instability is possible for any nonzero tilt angle depending on the disc aspect ratio.", "title": "" }, { "docid": "6e678ccfefa93d1d27a36b28ac5737c4", "text": "BACKGROUND\nBiofilm formation is a major virulence factor in different bacteria. Biofilms allow bacteria to resist treatment with antibacterial agents. The biofilm formation on glass and steel surfaces, which are extremely useful surfaces in food industries and medical devices, has always had an important role in the distribution and transmission of infectious diseases.\n\n\nOBJECTIVES\nIn this study, the effect of coating glass and steel surfaces by copper nanoparticles (CuNPs) in inhibiting the biofilm formation by Listeria monocytogenes and Pseudomonas aeruginosa was examined.\n\n\nMATERIALS AND METHODS\nThe minimal inhibitory concentrations (MICs) of synthesized CuNPs were measured against L. monocytogenes and P. aeruginosa by using the broth-dilution method. The cell-surface hydrophobicity of the selected bacteria was assessed using the bacterial adhesion to hydrocarbon (BATH) method. 
Also, the effect of the CuNP-coated surfaces on the biofilm formation of the selected bacteria was calculated via the surface assay.\n\n\nRESULTS\nThe MICs for the CuNPs according to the broth-dilution method were ≤ 16 mg/L for L. monocytogenes and ≤ 32 mg/L for P. aeruginosa. The hydrophobicity of P. aeruginosa and L. monocytogenes was calculated as 74% and 67%, respectively. The results for the surface assay showed a significant decrease in bacterial attachment and colonization on the CuNP-covered surfaces.\n\n\nCONCLUSIONS\nOur data demonstrated that the CuNPs inhibited bacterial growth and that the CuNP-coated surfaces decreased the microbial count and the microbial biofilm formation. Such CuNP-coated surfaces can be used in medical devices and food industries, although further studies in order to measure their level of toxicity would be necessary.", "title": "" }, { "docid": "bcee490d287e146ff1c4fe7f1dee2cbf", "text": "Biometrics is a growing technology, which has been widely used in forensics, secured access and prison security. A biometric system is fundamentally a pattern recognition system that recognizes a person by determining the authentication by using his different biological features i.e. Fingerprint, retina-scan, iris scan, hand geometry, and face recognition are leading physiological biometrics and behavioral characteristic are Voice recognition, keystroke-scan, and signature-scan. In this paper different biometrics techniques such as Iris scan, retina scan and face recognition techniques are discussed. Keyword: Biometric, Biometric techniques, Eigenface, Face recognition.", "title": "" }, { "docid": "3534e4321560c826057e02c52d4915dd", "text": "While hexahedral mesh elements are preferred by a variety of simulation techniques, constructing quality all-hex meshes of general shapes remains a challenge. An attractive hex-meshing approach, often referred to as submapping, uses a low distortion mapping between the input model and a PolyCube (a solid formed from a union of cubes), to transfer a regular hex grid from the PolyCube to the input model. Unfortunately, the construction of suitable PolyCubes and corresponding volumetric maps for arbitrary shapes remains an open problem. Our work introduces a new method for computing low-distortion volumetric PolyCube deformations of general shapes and for subsequent all-hex remeshing. For a given input model, our method simultaneously generates an appropriate PolyCube structure and mapping between the input model and the PolyCube. From these we automatically generate good quality all-hex meshes of complex natural and man-made shapes.", "title": "" }, { "docid": "605d2fed747be856d0ae47ddb559d177", "text": "Leukemia is a malignant neoplasm of the blood or bone marrow that affects both children and adults and remains a leading cause of death around the world. Acute lymphoblastic leukemia (ALL) is the most common type of leukemia and is more common among children and young adults. ALL diagnosis through microscopic examination of the peripheral blood and bone marrow tissue samples is performed by hematologists and has been an indispensable technique long since. However, such visual examinations of blood samples are often slow and are also limited by subjective interpretations and less accurate diagnosis. The objective of this work is to improve the ALL diagnostic accuracy by analyzing morphological and textural features from the blood image using image processing. 
This paper aims at proposing a quantitative microscopic approach toward the discrimination of lymphoblasts (malignant) from lymphocytes (normal) in stained blood smear and bone marrow samples and to assist in the development of a computer-aided screening of ALL. Automated recognition of lymphoblasts is accomplished using image segmentation, feature extraction, and classification over light microscopic images of stained blood films. Accurate and authentic diagnosis of ALL is obtained with the use of improved segmentation methodology, prominent features, and an ensemble classifier, facilitating rapid screening of patients. Experimental results are obtained and compared over the available image data set. It is observed that an ensemble of classifiers leads to 99 % accuracy in comparison with other standard classifiers, i.e., naive Bayesian (NB), K-nearest neighbor (KNN), multilayer perceptron (MLP), radial basis functional network (RBFN), and support vector machines (SVM).", "title": "" }, { "docid": "0e57965abb5fd33280cdd02c42a88edb", "text": "It is known that Naïve Bayesian classifier (NB) works very well on some domains, and poorly on some. The performance of NB suffers in domains that involve correlated features. C4.5 decision trees, on the other hand, typically perform better than the Naïve Bayesian algorithm on such domains. This paper describes a Selective Bayesian classifier (SBC) that simply uses only those features that C4.5 would use in its decision tree when learning a small example of a training set, a combination of the two different natures of classifiers. Experiments conducted on ten datasets indicate that SBC performs reliably better than NB on all domains, and SBC outperforms C4.5 on many datasets of which C4.5 outperform NB. Augmented Bayesian classifier (ABC) are also tested on the same data, and SBC appears to perform as well as ABC. SBC also can eliminate, on most cases, more than half of the original attributes, which can greatly reduce the size of the training and test data, as well as the running time. Further, the SBC algorithm typically learns faster than both C4.5 and NB, needing fewer training examples to reach high accuracy of classification.", "title": "" }, { "docid": "7af729438f32c198d328a1ebc83d2eeb", "text": "The development of natural language interfaces (NLI's) for databases has been a challenging problem in natural language processing (NLP) since the 1970's. The need for NLI's has become more pronounced due to the widespread access to complex databases now available through the Internet. A challenging problem for empirical NLP is the automated acquisition of NLI's from training examples. We present a method for integrating statistical and relational learning techniques for this task which exploits the strength of both approaches. Experimental results from three different domains suggest that such an approach is more robust than a previous purely logicbased approach. 1 I n t r o d u c t i o n We use the term semantic parsing to refer to the process of mapping a natural language sentence to a structured meaning representation. One interesting application of semantic parsing is building natural language interfaces for online databases. The need for such applications is growing since when information is delivered through the Internet, most users do not know the underlying database access language. An example of such an interface that we have developed is shown in Figure 1. 
Traditional (rationalist) approaches to constructing database interfaces require an expert to hand-craft an appropriate semantic parser (Woods, 1970; Hendrix et al., 1978). However, such hand-crafted parsers are time consuming to develop and suffer from problems with robustness and incompleteness even for domain specific applications. Nevertheless, very little research in empirical NLP has explored the task of automatically acquiring such interfaces from annotated training examples. The only exceptions of which we are aware are a statistical approach to mapping airline-information queries into SQL presented in (Miller et al., 1996), a probabilistic decision-tree method for the same task described in (Kuhn and De Mori, 1995), and an approach using relational learning (a.k.a. inductive logic programming, ILP) to learn a logic-based semantic parser described in (Zelle and Mooney, 1996). The existing empirical systems for this task employ either a purely logical or purely statistical approach. The former uses a deterministic parser, which can suffer from some of the same robustness problems as rationalist methods. The latter constructs a probabilistic grammar, which requires supplying a syntactic parse tree as well as a semantic representation for each training sentence, and requires hand-crafting a small set of contextual features on which to condition the parameters of the model. Combining relational and statistical approaches can overcome the need to supply parse-trees and hand-crafted features while retaining the robustness of statistical parsing. The current work is based on the CHILL logic-based parser-acquisition framework (Zelle and Mooney, 1996), retaining access to the complete parse state for making decisions, but building a probabilistic relational model that allows for statistical parsing. 2 Overview of the Approach This section reviews our overall approach using an interface developed for a U.S. Geography database (Geoquery) as a sample application (Zelle and Mooney, 1996) which is available on the Web (see http://www.cs.utexas.edu/users/ml/geo.html). 2.1 Semantic Representation First-order logic is used as a semantic representation language. CHILL has also been applied to a restaurant database in which the logical form resembles SQL, and is translated", "title": "" }, { "docid": "83e0fdbaa10c01aecdbe9cf853511230", "text": "We use an online travel context to test three aspects of communication", "title": "" }, { "docid": "7b9df4427a6290cf5efda9c41612ad64", "text": "A systematic design of planar MIMO monopole antennas with significantly reduced mutual coupling is presented, based on the concept of metamaterials. The design is performed by means of individual rectangular loop resonators, placed in the space between the antenna elements. The underlying principle is that resonators act like small metamaterial samples, thus providing an effective means of controlling electromagnetic wave propagation. The proposed design achieves considerably high levels of isolation between antenna elements, without essentially affecting the simplicity and planarity of the MIMO antenna.", "title": "" }, { "docid": "4fb62f06132119cb396e7f21a47d8682", "text": "It has long been an important issue in various disciplines to examine massive multidimensional data superimposed by a high level of noises and interferences by extracting the embedded multi-way factors.
With the quick increases of data scales and dimensions in the big data era, research challenges arise in order to (1) reflect the dynamics of large tensors while introducing no significant distortions in the factorization procedure and (2) handle influences of the noises in sophisticated applications. A hierarchical parallel processing framework over a GPU cluster, namely H-PARAFAC, has been developed to enable scalable factorization of large tensors upon a “divide-and-conquer” theory for Parallel Factor Analysis (PARAFAC). The H-PARAFAC framework incorporates a coarse-grained model for coordinating the processing of sub-tensors and a fine-grained parallel model for computing each sub-tensor and fusing sub-factors. Experimental results indicate that (1) the proposed method breaks the limitation on the scale of multidimensional data to be factorized and dramatically outperforms the traditional counterparts in terms of both scalability and efficiency, e.g., the runtime increases in the order of <inline-formula> <tex-math notation=\"LaTeX\">$n^2$</tex-math><alternatives><inline-graphic xlink:href=\"wang-ieq1-2613054.gif\"/> </alternatives></inline-formula> when the data volume increases in the order of <inline-formula> <tex-math notation=\"LaTeX\">$n^3$</tex-math><alternatives><inline-graphic xlink:href=\"wang-ieq2-2613054.gif\"/> </alternatives></inline-formula>, (2) H-PARAFAC has potentials in refraining the influences of significant noises, and (3) H-PARAFAC is far superior to the conventional window-based counterparts in preserving the features of multiple modes of large tensors.", "title": "" } ]
scidocsrr
ba3f876c018365093adc11feddc71ed8
Emotion and decision-making explained: Response to commentators
[ { "docid": "995bca87ad29c6ddd665aa9b73f250d3", "text": "Research on the neural systems underlying emotion in animal models over the past two decades has implicated the amygdala in fear and other emotional processes. This work stimulated interest in pursuing the brain mechanisms of emotion in humans. Here, we review research on the role of the amygdala in emotional processes in both animal models and humans. The review is not exhaustive, but it highlights five major research topics that illustrate parallel roles for the amygdala in humans and other animals, including implicit emotional learning and memory, emotional modulation of memory, emotional influences on attention and perception, emotion and social behavior, and emotion inhibition and regulation.", "title": "" } ]
[ { "docid": "11f2adab1fb7a93e0c9009a702389af1", "text": "OBJECTIVE\nThe authors present clinical outcome data and satisfaction of patients who underwent minimally invasive vertebral body corpectomy and cage placement via a mini-open, extreme lateral, transpsoas approach and posterior short-segment instrumentation for lumbar burst fractures.\n\n\nMETHODS\nPatients with unstable lumbar burst fractures who underwent corpectomy and anterior column reconstruction via a mini-open, extreme lateral, transpsoas approach with short-segment posterior fixation were reviewed retrospectively. Demographic information, operative parameters, perioperative radiographic measurements, and complications were analyzed. Patient-reported outcome instruments (Oswestry Disability Index [ODI], 12-Item Short Form Health Survey [SF-12]) and an anterior scar-specific patient satisfaction questionnaire were recorded at the latest follow-up.\n\n\nRESULTS\nTwelve patients (7 men, 5 women, average age 42 years, range 22-68 years) met the inclusion criteria. Lumbar corpectomies with anterior column support were performed (L-1, n = 8; L-2, n = 2; L-3, n = 2) and supplemented with short-segment posterior instrumentation (4 open, 8 percutaneous). Four patients had preoperative neurological deficits, all of which improved after surgery. No new neurological complications were noted. The anterior incision on average was 6.4 cm (range 5-8 cm) in length, caused mild pain and disability, and was aesthetically acceptable to the large majority of patients. Three patients required chest tube placement for pleural violation, and 1 patient required reoperation for cage subsidence/hardware failure. Average clinical follow-up was 38 months (range 16-68 months), and average radiographic follow-up was 37 months (range 6-68 months). Preoperative lumbar lordosis and focal lordosis were significantly improved/maintained after surgery. Patients were satisfied with their outcomes, had minimal/moderate disability (average ODI score 20, range 0-52), and had good physical (SF-12 physical component score 41.7% ± 10.4%) and mental health outcomes (SF-12 mental component score 50.2% ± 11.6%) after surgery.\n\n\nCONCLUSIONS\nAnterior corpectomy and cage placement via a mini-open, extreme lateral, transpsoas approach supplemented by short-segment posterior instrumentation is a safe, effective alternative to conventional approaches in the treatment of single-level unstable burst fractures and is associated with excellent functional outcomes and patient satisfaction.", "title": "" }, { "docid": "dd741d612ee466aecbb03f5e1be89b90", "text": "To date, many of the methods for information extraction of biological information from scientific articles are restricted to the abstract of the article. However, full text articles in electronic version, which offer larger sources of data, are currently available. Several questions arise as to whether the effort of scanning full text articles is worthy, or whether the information that can be extracted from the different sections of an article can be relevant. In this work we addressed those questions showing that the keyword content of the different sections of a standard scientific article (abstract, introduction, methods, results, and discussion) is very heterogeneous. 
Although the abstract contains the best ratio of keywords per total of words, other sections of the article may be a better source of biologically relevant data.", "title": "" }, { "docid": "a2c9c975788253957e6bbebc94eb5a4b", "text": "The implementation of Substrate Integrated Waveguide (SIW) structures in paper-based inkjet-printed technology is presented in this paper for the first time. SIW interconnects and components have been fabricated and tested on a multilayer paper substrate, which permits to implement low-cost and eco-friendly structures. A broadband and compact ridge substrate integrated slab waveguide covering the entire UWB frequency range is proposed and preliminarily verified. SIW structures appear particularly suitable for implementation on paper, due to the possibility to easily realize multilayered topologies and conformal geometries.", "title": "" }, { "docid": "b75eca4c07d5f04b73c4c8e447cbc878", "text": "For a conventional offline Buck-Boost LED driver, significant low frequency ripple current is produced when a high power factor has been achieved. In this paper, an innovative LED driver technology based on the Buck-Boost topology has been proposed. The featured configuration has greatly reduced the low frequency ripple current without compromising power factor performance. High efficiency and low component cost features have also been retained from conventional Buck-Boost LED driver. A 10W, 50V-0.2A experimental prototype has been constructed to verify the performance of the proposed technology.", "title": "" }, { "docid": "6745e91294ae763f1f7ad7790bc9ccb4", "text": "In this paper we propose an asymmetric semantic similarity among instances within an ontology. We aim to define a measurement of semantic similarity that exploit as much as possible the knowledge stored in the ontology taking into account different hints hidden in the ontology definition. The proposed similarity measurement considers different existing similarities, which we have combined and extended. Moreover, the similarity assessment is explicitly parameterised according to the criteria induced by the context. The parameterisation aims to assist the user in the decision making pertaining to similarity evaluation, as the criteria can be refined according to user needs. Experiments and an evaluation of the similarity assessment are presented showing the efficiency of the method.", "title": "" }, { "docid": "f1bf6f8124e4def3b63fefd73ca5ec54", "text": "Garbage in and garbage out. A Q&A system must receive a well formulated question that matches the user’s intent or she has no chance to receive satisfactory answers. In this paper, we propose a keywords to questions (K2Q) system to assist a user to articulate and refine questions. K2Q generates candidate questions and refinement words from a set of input keywords. After specifying some initial keywords, a user receives a list of candidate questions as well as a list of refinement words. The user can then select a satisfactory question, or select a refinement word to generate a new list of candidate questions and refinement words. We propose a User Inquiry Intent (UII) model to describe the joint generation process of keywords and questions for ranking questions, suggesting refinement words, and generating questions that may not have previously appeared. 
Empirical study shows UII to be useful and effective for the K2Q task.", "title": "" }, { "docid": "7456ceee02f50c9e92a665d362a9a419", "text": "Visualization of dynamically changing networks (graphs) is a significant challenge for researchers. Previous work has experimentally compared animation, small multiples, and other techniques, and found trade-offs between these. One potential way to avoid such trade-offs is to combine previous techniques in a hybrid visualization. We present two taxonomies of visualizations of dynamic graphs: one of non-hybrid techniques, and one of hybrid techniques. We also describe a prototype, called DiffAni, that allows a graph to be visualized as a sequence of three kinds of tiles: diff tiles that show difference maps over some time interval, animation tiles that show the evolution of the graph over some time interval, and small multiple tiles that show the graph state at an individual time slice. This sequence of tiles is ordered by time and covers all time slices in the data. An experimental evaluation of DiffAni shows that our hybrid approach has advantages over non-hybrid techniques in certain cases.", "title": "" }, { "docid": "83355cf4228e84a718bffba06250520a", "text": "Fabric defect detection is now an active area of research for identifying and resolving problems of textile industry, to enhance the performance and also to maintain the quality of fabric. The traditional system of visual inspection by human beings is extremely time consuming, high on costs as well as not reliable since it is highly error prone. Defect detection & classification are the major challenges in defect inspection. Hence in order to overcome these drawbacks, faster and cost effective automatic defect detection is very necessary. Considering these necessities, this paper proposes wavelet filter method. It also explains in detail its various techniques of getting final output like preprocessing, decomposition, thresholding, and noise eliminating.", "title": "" }, { "docid": "f41e19c3568499ae811b9ffce8590530", "text": "In the past few years, with the rapid development of heterogeneous computing systems (HCS), the issue of energy consumption has attracted a great deal of attention. How to reduce energy consumption is currently a critical issue in designing HCS. In response to this challenge, many energy-aware scheduling algorithms have been developed primarily using the dynamic voltage-frequency scaling (DVFS) capability which has been incorporated into recent commodity processors. However, these techniques are unsatisfactory in minimizing both schedule length and energy consumption. Furthermore, most algorithms schedule tasks according to their average-case execution times and do not consider task execution times with probability distributions in the real-world. In realizing this, we study the problem of scheduling a bag-of-tasks (BoT) application, made of a collection of independent stochastic tasks with normal distributions of task execution times, on a heterogeneous platform with deadline and energy consumption budget constraints. We build execution time and energy consumption models for stochastic tasks on a single processor. We derive the expected value and variance of schedule length on HCS by Clark's equations. We formulate our stochastic task scheduling problem as a linear programming problem, in which we maximize the weighted probability of combined schedule length and energy consumption metric under deadline and energy consumption budget constraints. 
We propose a heuristic energy-aware stochastic task scheduling algorithm called ESTS to solve this problem. Our algorithm can achieve high scheduling performance for BoT applications with low time complexity O(n(M + logn)), where n is the number of tasks and M is the total number of processor frequencies. Our extensive simulations for performance evaluation based on randomly generated stochastic applications and real-world applications clearly demonstrate that our proposed heuristic algorithm can improve the weighted probability that both the deadline and the energy consumption budget constraints can be met, and has the capability of balancing between schedule length and energy consumption.", "title": "" }, { "docid": "64094eef703f761aa82509326533c796", "text": "Grammatical error correction, like other machine learning tasks, greatly benefits from large quantities of high quality training data, which is typically expensive to produce. While writing a program to automatically generate realistic grammatical errors would be difficult, one could learn the distribution of naturallyoccurring errors and attempt to introduce them into other datasets. Initial work on inducing errors in this way using statistical machine translation has shown promise; we investigate cheaply constructing synthetic samples, given a small corpus of human-annotated data, using an off-the-rack attentive sequence-to-sequence model and a straight-forward post-processing procedure. Our approach yields error-filled artificial data that helps a vanilla bi-directional LSTM to outperform the previous state of the art at grammatical error detection, and a previously introduced model to gain further improvements of over 5% F0.5 score. When attempting to determine if a given sentence is synthetic, a human annotator at best achieves 39.39 F1 score, indicating that our model generates mostly human-like instances.", "title": "" }, { "docid": "45f9e645fae1f0a131c369164ba4079f", "text": "Gasification is one of the promising technologies to convert biomass to gaseous fuels for distributed power generation. However, the commercial exploitation of biomass energy suffers from a number of logistics and technological challenges. In this review, the barriers in each of the steps from the collection of biomass to electricity generation are highlighted. The effects of parameters in supply chain management, pretreatment and conversion of biomass to gas, and cleaning and utilization of gas for power generation are discussed. Based on the studies, until recently, the gasification of biomass and gas cleaning are the most challenging part. For electricity generation, either using engine or gas turbine requires a stringent specification of gas composition and tar concentration in the product gas. Different types of updraft and downdraft gasifiers have been developed for gasification and a number of physical and catalytic tar separation methods have been investigated. However, the most efficient and popular one is yet to be developed for commercial purpose. In fact, the efficient gasification and gas cleaning methods can produce highly burnable gas with less tar content, so as to reduce the total consumption of biomass for a desired quantity of electricity generation. According to the recent report, an advanced gasification method with efficient tar cleaning can significantly reduce the biomass consumption, and thus the logistics and biomass pretreatment problems can be ultimately reduced. & 2013 Elsevier Ltd. 
All rights reserved.", "title": "" }, { "docid": "df78964a221e583f886a8707d7868827", "text": "Mobile phones have evolved from devices that are just used for voice and text communication to platforms that are able to capture and transmit a range of data types (image, audio, and location). The adoption of these increasingly capable devices by society has enabled a potentially pervasive sensing paradigm participatory sensing. A coordinated participatory sensing system engages individuals carrying mobile phones to explore phenomena of interest using in situ data collection. For participatory sensing to succeed, several technical challenges need to be solved. In this paper, we discuss one particular issue: developing a recruitment framework to enable organizers to identify well-suited participants for data collections based on geographic and temporal availability as well as participation habits. This recruitment system is evaluated through a series of pilot data collections where volunteers explored sustainable processes on a university campus.", "title": "" }, { "docid": "89f157fd5c42ba827b7d613f80770992", "text": "Generating emotional language is a key step towards building empathetic natural language processing agents. However, a major challenge for this line of research is the lack of large-scale labeled training data, and previous studies are limited to only small sets of human annotated sentiment labels. Additionally, explicitly controlling the emotion and sentiment of generated text is also difficult. In this paper, we take a more radical approach: we exploit the idea of leveraging Twitter data that are naturally labeled with emojis. We collect a large corpus of Twitter conversations that include emojis in the response and assume the emojis convey the underlying emotions of the sentence. We investigate several conditional variational autoencoders training on these conversations, which allow us to use emojis to control the emotion of the generated text. Experimentally, we show in our quantitative and qualitative analyses that the proposed models can successfully generate highquality abstractive conversation responses in accordance with designated emotions.", "title": "" }, { "docid": "a1b5821ec18904ad805c57e6b478ef92", "text": "To extract English name mentions, we apply a linear-chain CRFs model trained from ACE 20032005 corpora (Li et al., 2012a). For Chinese and Spanish, we use Stanford name tagger (Finkel et al., 2005). We also encode several regular expression based rules to extract poster name mentions in discussion forum posts. In this year’s task, person nominal mentions extraction is added. There are two major challenges: (1) Only person nominal mentions referring to specific, individual real-world entities need to be extracted. Therefore, a system should be able to distinguish specific and generic person nominal mentions; (2) within-document coreference resolution should be applied to clustering person nominial and name mentions. We apply heuristic rules to try to solve these two challenges: (1) We consider person nominal mentions that appear after indefinite articles (e.g., a/an) or conditional conjunctions (e.g., if ) as generic. The person nomnial mention extraction F1 score of this approach is around 46% for English training data. (2) For coreference resolution, if the closest mention of a person nominal mention is a name, then we consider they are coreferential. 
The accuracy of this approach is 67% using perfect mentions in English training data.", "title": "" }, { "docid": "41e595e0e403cf18c2448c2223d8eef7", "text": "In this paper we present a method for summarizing Hindi Text document by creating rich semantic graph(RSG) of original document and identifying substructures of graph that can extract meaningful sentences for generating a document summary. This paper contributes the idea to summarize Hindi text document using abstractive method. We extract a set of features from each sentence that helps identify its importance in the document. It uses Hindi WordNet to identify appropriate position of word for checking SOV (Subject-Object-Verb) qualification. Therefore to optimize the summary, we find similarity among the sentences and merge the sentence which represented using Rich Semantic Sub graph which in turn produces a summarized text document.", "title": "" }, { "docid": "20710cf5fac30800217c5b9568d3541a", "text": "BACKGROUND\nAcne scarring is treatable by a variety of modalities. Ablative carbon dioxide laser (ACL), while effective, is associated with undesirable side effect profiles. Newer modalities using the principles of fractional photothermolysis (FP) produce modest results than traditional carbon dioxide (CO(2)) lasers but with fewer side effects. A novel ablative CO(2) laser device use a technique called ablative fractional resurfacing (AFR), combines CO(2) ablation with a FP system. This study was conducted to compare the efficacy of Q-switched 1064-nm Nd: YAG laser and that of fractional CO(2) laser in the treatment of patients with moderate to severe acne scarring.\n\n\nMETHODS\nSixty four subjects with moderate to severe facial acne scars were divided randomly into two groups. Group A received Q-Switched 1064-nm Nd: YAG laser and group B received fractional CO(2) laser. Two groups underwent four session treatment with laser at one month intervals. Results were evaluated by patients based on subjective satisfaction and physicians' assessment and photo evaluation by two blinded dermatologists. Assessments were obtained at baseline and at three and six months after final treatment.\n\n\nRESULTS\nPost-treatment side effects were mild and transient in both groups. According to subjective satisfaction (p = 0.01) and physicians' assessment (p < 0.001), fractional CO(2) laser was significantly more effective than Q- Switched 1064- nm Nd: YAG laser.\n\n\nCONCLUSIONS\nFractional CO2 laser has the most significant effect on the improvement of atrophic facial acne scars, compared with Q-Switched 1064-nm Nd: YAG laser.", "title": "" }, { "docid": "4daec6170f18cc8896411e808e53355f", "text": "The goal of this note is to point out that any distributed representation can be turned into a classifier through inversion via Bayes rule. The approach is simple and modular, in that it will work with any language representation whose training can be formulated as optimizing a probability model. In our application to 2 million sentences from Yelp reviews, we also find that it performs as well as or better than complex purpose-built algorithms.", "title": "" }, { "docid": "3d6744ae85a9aa07d8c4cb68c79290c7", "text": "Control over the motional degrees of freedom of atoms, ions, and molecules in a field-free environment enables unrivalled measurement accuracies but has yet to be applied to highly charged ions (HCIs), which are of particular interest to future atomic clock designs and searches for physics beyond the Standard Model. 
Here, we report on the Coulomb crystallization of HCIs (specifically 40Ar13+) produced in an electron beam ion trap and retrapped in a cryogenic linear radiofrequency trap by means of sympathetic motional cooling through Coulomb interaction with a directly laser-cooled ensemble of Be+ ions. We also demonstrate cooling of a single Ar13+ ion by a single Be+ ion—the prerequisite for quantum logic spectroscopy with a potential 10−19 accuracy level. Achieving a seven-orders-of-magnitude decrease in HCI temperature starting at megakelvin down to the millikelvin range removes the major obstacle for HCI investigation with high-precision laser spectroscopy.", "title": "" }, { "docid": "dacf68b5e159211d6e9bb8983ef8bb3c", "text": "Analog-to-Digital converters plays vital role in medical and signal processing applications. Normally low power ADC's were required for long term and battery operated applications. SAR ADC is best suited for low power, medium resolution and moderate speed applications. This paper presents a 10-bit low power SAR ADC which is simulated in 180nm CMOS technology. Based on literature survey, low power consumption is attained by using Capacitive DAC. Capacitive DAC also incorporate Sample-and-Hold circuit in it. Dynamic latch comparator is used to increase in speed of operation and to get lower power consumption.", "title": "" }, { "docid": "095dd4efbb23bc91b72dea1cd1c627ab", "text": "Cell-cell communication is critical across an assortment of physiological and pathological processes. Extracellular vesicles (EVs) represent an integral facet of intercellular communication largely through the transfer of functional cargo such as proteins, messenger RNAs (mRNAs), microRNA (miRNAs), DNAs and lipids. EVs, especially exosomes and shed microvesicles, represent an important delivery medium in the tumour micro-environment through the reciprocal dissemination of signals between cancer and resident stromal cells to facilitate tumorigenesis and metastasis. An important step of the metastatic cascade is the reprogramming of cancer cells from an epithelial to mesenchymal phenotype (epithelial-mesenchymal transition, EMT), which is associated with increased aggressiveness, invasiveness and metastatic potential. There is now increasing evidence demonstrating that EVs released by cells undergoing EMT are reprogrammed (protein and RNA content) during this process. This review summarises current knowledge of EV-mediated functional transfer of proteins and RNA species (mRNA, miRNA, long non-coding RNA) between cells in cancer biology and the EMT process. An in-depth understanding of EVs associated with EMT, with emphasis on molecular composition (proteins and RNA species), will provide fundamental insights into cancer biology.", "title": "" } ]
scidocsrr
2e3dc3d89275323188e4b361e579fcf7
AODV routing protocol implementation design
[ { "docid": "4f49a5cc49f1eeb864b4a6f347263710", "text": "Future wireless applications will take advantage of rapidly deployable, self-configuring multihop ad hoc networks. Because of the difficulty of obtaining IEEE 802.11 feedback about link connectivity in real networks, many multihop ad hoc networks utilize hello messages to determine local connectivity. This paper uses an implementation of the Ad hoc On-demand Distance Vector (AODV) routing protocol to examine the effectiveness of hello messages for monitoring link status. In this study, it is determined that many factors influence the utility of hello messages, including allowed hello message loss settings, discrepancy between data and hello message size and 802.11b packet handling. This paper examines these factors and experimentally evaluates a variety of approaches for improving the accuracy of hello messages as an indicator of local connectivity.", "title": "" } ]
[ { "docid": "3e9845c255b5e816741c04c4f7cf5295", "text": "This paper presents the packaging technology and the integrated antenna design for a miniaturized 122-GHz radar sensor. The package layout and the assembly process are shortly explained. Measurements of the antenna including the flip chip interconnect are presented that have been achieved by replacing the IC with a dummy chip that only contains a through-line. Afterwards, radiation pattern measurements are shown that were recorded using the radar sensor as transmitter. Finally, details of the fully integrated radar sensor are given, together with results of the first Doppler measurements.", "title": "" }, { "docid": "8af28adbd019c07a9d27e6189abecb7a", "text": "We present a method for automatically generating input parsers from English specifications of input file formats. We use a Bayesian generative model to capture relevant natural language phenomena and translate the English specification into a specification tree, which is then translated into a C++ input parser. We model the problem as a joint dependency parsing and semantic role labeling task. Our method is based on two sources of information: (1) the correlation between the text and the specification tree and (2) noisy supervision as determined by the success of the generated C++ parser in reading input examples. Our results show that our approach achieves 80.0% F-Score accuracy compared to an F-Score of 66.7% produced by a state-of-the-art semantic parser on a dataset of input format specifications from the ACM International Collegiate Programming Contest (which were written in English for humans with no intention of providing support for automated processing).1", "title": "" }, { "docid": "cb8b31d00a55f80db7508e5d2cfd34ae", "text": "Reinforcement learning (RL) is a paradigm for learning sequential decision making tasks. However, typically the user must hand-tune exploration parameters for each different domain and/or algorithm that they are using. In this work, we present an algorithm called leo for learning these exploration strategies on-line. This algorithm makes use of bandit-type algorithms to adaptively select exploration strategies based on the rewards received when following them. We show empirically that this method performs well across a set of five domains. In contrast, for a given algorithm, no set of parameters is best across all domains. Our results demonstrate that the leo algorithm successfully learns the best exploration strategies on-line, increasing the received reward over static parameterizations of exploration and reducing the need for hand-tuning exploration parameters.", "title": "" }, { "docid": "47b9d5585a0ca7d10cb0fd9da673dd0f", "text": "A novel deep architecture, the tensor deep stacking network (T-DSN), is presented. The T-DSN consists of multiple, stacked blocks, where each block contains a bilinear mapping from two hidden layers to the output layer, using a weight tensor to incorporate higher order statistics of the hidden binary (([0,1])) features. A learning algorithm for the T-DSN's weight matrices and tensors is developed and described in which the main parameter estimation burden is shifted to a convex subproblem with a closed-form solution. 
Using an efficient and scalable parallel implementation for CPU clusters, we train sets of T-DSNs in three popular tasks in increasing order of the data size: handwritten digit recognition using MNIST (60k), isolated state/phone classification and continuous phone recognition using TIMIT (1.1 m), and isolated phone classification using WSJ0 (5.2 m). Experimental results in all three tasks demonstrate the effectiveness of the T-DSN and the associated learning methods in a consistent manner. In particular, a sufficient depth of the T-DSN, a symmetry in the two hidden layers structure in each T-DSN block, our model parameter learning algorithm, and a softmax layer on top of T-DSN are shown to have all contributed to the low error rates observed in the experiments for all three tasks.", "title": "" }, { "docid": "ed80c1ad22dbf51bfb20351b3d7a2b8b", "text": "Three central problems in the recent literature on visual attention are reviewed. The first concerns the control of attention by top-down (or goal-directed) and bottom-up (or stimulus-driven) processes. The second concerns the representational basis for visual selection, including how much attention can be said to be location- or object-based. Finally, we consider the time course of attention as it is directed to one stimulus after another.", "title": "" }, { "docid": "3e5041c6883ce6ab59234ed2c8c995b7", "text": "Self-amputation of the penis treated immediately: case report and review of the literature. Self-amputation of the penis is rare in urological practice. It occurs more often in a context psychotic disease. It can also be secondary to alcohol or drugs abuse. Treatment and care vary according on the severity of the injury, the delay of consultation and the patient's mental state. The authors report a case of self-amputation of the penis in an alcoholic context. The authors analyze the etiological and urological aspects of this trauma.", "title": "" }, { "docid": "287572e1c394ec6959853f62b7707233", "text": "This paper presents a method for state estimation on a ballbot; i.e., a robot balancing on a single sphere. Within the framework of an extended Kalman filter and by utilizing a complete kinematic model of the robot, sensory information from different sources is combined and fused to obtain accurate estimates of the robot's attitude, velocity, and position. This information is to be used for state feedback control of the dynamically unstable system. Three incremental encoders (attached to the omniwheels that drive the ball of the robot) as well as three rate gyroscopes and accelerometers (attached to the robot's main body) are used as sensors. For the presented method, observability is proven analytically for all essential states in the system, and the algorithm is experimentally evaluated on the Ballbot Rezero.", "title": "" }, { "docid": "12d5480f42ef606a049047ee5f4d2d26", "text": "The authors investigated the development of a disposition toward empathy and its genetic and environmental origins. Young twins' (N = 409 pairs) cognitive (hypothesis testing) and affective (empathic concern) empathy and prosocial behavior in response to simulated pain by mothers and examiners were observed at multiple time points. Children's mean level of empathy and prosociality increased from 14 to 36 months. Positive concurrent and longitudinal correlations indicated that empathy was a relatively stable disposition, generalizing across ages, across its affective and cognitive components, and across mother and examiner. 
Multivariate genetic analyses showed that genetic effects increased, and that shared environmental effects decreased, with age. Genetic effects contributed to both change and continuity in children's empathy, whereas shared environmental effects contributed to stability and nonshared environmental effects contributed to change. Empathy was associated with prosocial behavior, and this relationship was mainly due to environmental effects.", "title": "" }, { "docid": "252f86d5e0725ce0ff8b15b9a147ee61", "text": "In the vision of the Internet of Things (IoT), an increasing number of embedded devices of all sorts (e.g., sensors, mobile phones, cameras, smart meters, smart cars, traffic lights, smart home appliances, etc.) are now capable of communicating and sharing data over the Internet. Although the concept of using embedded systems to control devices, tools and appliances has been proposed for almost decades now, with every new generation, the ever-increasing capabilities of computation and communication pose new opportunities, but also new challenges. As IoT becomes an active research area, different methods from various points of view have been explored to promote the development and popularity of IoT. One trend is viewing IoT as Web of Things (WoT) where the open Web standards are supported for information sharing and device interoperation. By penetrating smart things into existing Web, the conventional web services are enriched with physical world services. This WoT vision enables a new way of narrowing the barrier between virtual and physical worlds. In this paper, we elaborate the architecture and some key enabling technologies of WoT. Some pioneer open platforms and prototypes are also illustrated. The most recent research results are carefully summarized. Furthermore, many systematic comparisons are made to provide the insight in the evolution and future of WoT. Finally, we point out some open challenging issues that shall be faced and tackled by research community.", "title": "" }, { "docid": "0a9047c6dfe8dc7819e4d3772b823117", "text": "An increasing number of wireless applications rely on GPS signals for localization, navigation, and time synchronization. However, civilian GPS signals are known to be susceptible to spoofing attacks which make GPS receivers in range believe that they reside at locations different than their real physical locations. In this paper, we investigate the requirements for successful GPS spoofing attacks on individuals and groups of victims with civilian or military GPS receivers. In particular, we are interested in identifying from which locations and with which precision the attacker needs to generate its signals in order to successfully spoof the receivers. We will show, for example, that any number of receivers can easily be spoofed to one arbitrary location; however, the attacker is restricted to only few transmission locations when spoofing a group of receivers while preserving their constellation. In addition, we investigate the practical aspects of a satellite-lock takeover, in which a victim receives spoofed signals after first being locked on to legitimate GPS signals. Using a civilian GPS signal generator, we perform a set of experiments and find the minimal precision of the attacker's spoofing signals required for covert satellite-lock takeover.", "title": "" }, { "docid": "86177ff4fbc089fde87d1acd8452d322", "text": "Age of acquisition (AoA) effects have been used to support the notion of a critical period for first language acquisition. 
In this study, we examine AoA effects in deaf British Sign Language (BSL) users via a grammaticality judgment task. When English reading performance and nonverbal IQ are factored out, results show that accuracy of grammaticality judgement decreases as AoA increases, until around age 8, thus showing the unique effect of AoA on grammatical judgement in early learners. No such effects were found in those who acquired BSL after age 8. These late learners appear to have first language proficiency in English instead, which may have been used to scaffold learning of BSL as a second language later in life.", "title": "" }, { "docid": "1a3b49298f6217cc8600e00886751f7f", "text": "A person's language use reveals much about the person's social identity, which is based on the social categories a person belongs to including age and gender. We discuss the development of TweetGenie, a computer program that predicts the age of Twitter users based on their language use. We explore age prediction in three different ways: classifying users into age categories, by life stages, and predicting their exact age. An automatic system achieves better performance than humans on these tasks. Both humans and the automatic systems tend to underpredict the age of older people. We find that most linguistic changes occur when people are young, and that after around 30 years the studied variables show little change, making it difficult to predict the ages of older Twitter users.", "title": "" }, { "docid": "8d1b5872ee975ab63d275108998400e7", "text": "In May of 2008, we published online a series of software visualization videos using a method called code_swarm. Shortly thereafter, we made the code open source and its popularity took off. This paper is a study of our code swarm application, comprising its design, results and public response. We share our design methodology, including why we chose the organic information visualization technique, how we designed for both developers and a casual audience, and what lessons we learned from our experiment. We validate the results produced by code_swarm through a qualitative analysis and by gathering online user comments. Furthermore, we successfully released the code as open source, and the software community used it to visualize their own projects and shared their results as well. In the end, we believe code_swarm has positive implications for the future of organic information design and open source information visualization practice.", "title": "" }, { "docid": "59678b6abdc3264bad930cd31f1a0481", "text": "Supervised learning with large scale labeled datasets and deep layered models has made a paradigm shift in diverse areas in learning and recognition. However, this approach still suffers generalization issues under the presence of a domain shift between the training and the test data distribution. In this regard, unsupervised domain adaptation algorithms have been proposed to directly address the domain shift problem. In this paper, we approach the problem from a transductive perspective. We incorporate the domain shift and the transductive target inference into our framework by jointly solving for an asymmetric similarity metric and the optimal transductive target label assignment. We also show that our model can easily be extended for deep feature learning in order to learn features which are discriminative in the target domain. 
Our experiments show that the proposed method significantly outperforms state-of-the-art algorithms in both object recognition and digit classification experiments by a large margin.", "title": "" }, { "docid": "9eae072c6ec02c109ac48fa63e0b6237", "text": "Learning disentangled representations from visual data, where different high-level generative factors are independently encoded, is of importance for many computer vision tasks. Solving this problem, however, typically requires to explicitly label all the factors of interest in training images. To alleviate the annotation cost, we introduce a learning setting which we refer to as reference-based disentangling. Given a pool of unlabelled images, the goal is to learn a representation where a set of target factors are disentangled from others. The only supervision comes from an auxiliary reference set containing images where the factors of interest are constant. In order to address this problem, we propose reference-based variational autoencoders, a novel deep generative model designed to exploit the weak-supervision provided by the reference set. By addressing tasks such as feature learning, conditional image generation or attribute transfer, we validate the ability of the proposed model to learn disentangled representations from this minimal form of supervision.", "title": "" }, { "docid": "d3f717f0e6b121e61740e4e0458e5920", "text": "The anchor mechanism of Faster R-CNN and SSD framework is considered not effective enough to scene text detection, which can be attributed to its IoU based matching criterion between anchors and ground-truth boxes. In order to better enclose scene text instances of various shapes, it requires to design anchors of various scales, aspect ratios and even orientations manually, which makes anchor-based methods sophisticated and inefficient. In this paper, we propose a novel anchor-free region proposal network (AF-RPN) to replace the original anchor-based RPN in the Faster R-CNN framework to address the above problem. Compared with a vanilla RPN and FPN-RPN, AF-RPN can get rid of complicated anchor design and achieve higher recall rate on large-scale COCO-Text dataset. Owing to the high-quality text proposals, our Faster R-CNN based two-stage text detection approach achieves state-of-the-art results on ICDAR-2017 MLT, ICDAR-2015 and ICDAR-2013 text detection benchmarks when using single-scale and single-model (ResNet50) testing only.", "title": "" }, { "docid": "b44f24b54e45974421f799527391a9db", "text": "Dengue fever is a noncontagious infectious disease caused by dengue virus (DENV). DENV belongs to the family Flaviviridae, genus Flavivirus, and is classified into four antigenically distinct serotypes: DENV-1, DENV-2, DENV-3, and DENV-4. The number of nations and people affected has increased steadily and today is considered the most widely spread arbovirus (arthropod-borne viral disease) in the world. The absence of an appropriate animal model for studying the disease has hindered the understanding of dengue pathogenesis. In our study, we have found that immunocompetent C57BL/6 mice infected intraperitoneally with DENV-1 presented some signs of dengue disease such as thrombocytopenia, spleen hemorrhage, liver damage, and increase in production of IFNγ and TNFα cytokines. Moreover, the animals became viremic and the virus was detected in several organs by real-time RT-PCR. 
Thus, this animal model could be used to study the mechanism of dengue virus infection, to test antiviral drugs, as well as to evaluate candidate vaccines.", "title": "" },
{ "docid": "adc06292106114e5e69aa45c5e65cacc", "text": "Surveillance systems have been widely used in automatic teller machines (ATMs), banks, convenience stores, etc. For example, when a customer uses the ATM, the surveillance systems will record his/her face information. The information will help us understand and trace who withdrew money. However, when criminals use the ATM to withdraw illegal money, they usually block their faces with something (in Taiwan, criminals usually use safety helmets or masks to block their faces). That will degrade the purpose of the surveillance system. In previous work, we already proposed a technology for safety helmet detection. In this paper, we propose a mask detection technology based upon automatic face recognition methods. We use the Gabor filters to generate facial features and utilize geometric analysis algorithms for mask detection. The technology can give an early warning to security guards when any \"customer\" or \"intruder\" blocks his/her face information with a mask. Besides, the technology can assist face detection in the automatic face recognition system. Experimental results show the performance and reliability of the proposed technology.", "title": "" },
{ "docid": "74fd65e8298a95b61bc323d9435eaa05", "text": "Next-generation communication systems have to comply with very strict requirements for increased flexibility in heterogeneous environments, high spectral efficiency, and agility of carrier aggregation. This fact motivates research in advanced multicarrier modulation (MCM) schemes, such as filter bank-based multicarrier (FBMC) modulation. This paper focuses on the offset quadrature amplitude modulation (OQAM)-based FBMC variant, known as FBMC/OQAM, which presents outstanding spectral efficiency and confinement in a number of channels and applications. Its special nature, however, generates a number of new signal processing challenges that are not present in other MCM schemes, notably, in orthogonal-frequency-division multiplexing (OFDM). In multiple-input multiple-output (MIMO) architectures, which are expected to play a primary role in future communication systems, these challenges are intensified, creating new interesting research problems and calling for new ideas and methods that are adapted to the particularities of the MIMO-FBMC/OQAM system. The goal of this paper is to focus on these signal processing problems and provide a concise yet comprehensive overview of the recent advances in this area. Open problems and associated directions for future research are also discussed.", "title": "" },
{ "docid": "03c13e81803517d2be66e8bc25b7012c", "text": "Extractors and taggers turn unstructured text into entity-relation (ER) graphs where nodes are entities (email, paper, person, conference, company) and edges are relations (wrote, cited, works-for). Typed proximity search of the form type=person NEAR company~\"IBM\", paper~\"XML\" is an increasingly useful search paradigm in ER graphs. Proximity search implementations either perform a Pagerank-like computation at query time, which is slow, or precompute, store and combine per-word Pageranks, which can be very expensive in terms of preprocessing time and space. We present HubRank, a new system for fast, dynamic, space-efficient proximity searches in ER graphs. 
During preprocessing, HubRank computes and indexes certain \"sketchy\" random walk fingerprints for a small fraction of nodes, carefully chosen using query log statistics. At query time, a small \"active\" subgraph is identified, bordered by nodes with indexed fingerprints. These fingerprints are adaptively loaded to various resolutions to form approximate personalized Pagerank vectors (PPVs). PPVs at remaining active nodes are now computed iteratively. We report on experiments with CiteSeer's ER graph and millions of real CiteSeer queries. Some representative numbers follow. On our testbed, HubRank preprocesses and indexes 52 times faster than whole-vocabulary PPV computation. A text index occupies 56 MB. Whole-vocabulary PPVs would consume 102 GB. If PPVs are truncated to 56 MB, precision compared to true Pagerank drops to 0.55; in contrast, HubRank has precision 0.91 at 63 MB. HubRank's average query time is 200-300 milliseconds; query-time Pagerank computation takes 11 seconds on average.", "title": "" } ]
scidocsrr
22d0aae6557b59e7e9300fdb2bd07f76
Design and feedback control of a biologically-inspired miniature quadruped
[ { "docid": "76d10dbe734c5a1341dd914a4fdcc1af", "text": "This paper describes novel highly mobile small robots called “Mini-Whegs” that can run and jump (see video). They are derived from our larger Whegs series of robots, which benefit from abstracted cockroach locomotion principles. Key to their success are the three spoked appendages, called “whegs,” which combine the speed and simplicity of wheels with the climbing mobility of legs. To be more compact than the larger Whegs vehicles, Mini-Whegs uses four whegs in an alternating diagonal gait. These 9 cm long robots can run at sustained speeds of over 10 body lengths per second and climb obstacles that are taller than their leg length. They can run forward and backward, on either side. Their robust construction allows them to tumble down a flight of stairs with no damage and carry a payload equal to twice their weight. A jumping mechanism has also been developed that enables Mini-Whegs to surmount much larger obstacles, such as stair steps.", "title": "" } ]
[ { "docid": "c0553fb9b00fe7e13efe73efdcdbd11e", "text": "Generative adversarial networks (GANs) have shown impressive results, however, the generator and the discriminator are optimized in finite parameter space which means their performance still need to be improved. In this paper, we propose a novel approach of adversarial training between one generator and an exponential number of critics which are sampled from the original discriminative neural network via dropout. As discrepancy between outputs of different sub-networks of a same sample can measure the consistency of these critics, we encourage the critics to be consistent to real samples and inconsistent to generated samples during training, while the generator is trained to generate consistent samples for different critics. Experimental results demonstrate that our method can obtain state-of-the-art Inception scores of 9.17 and 10.02 on supervised CIFAR-10 and unsupervised STL10 image generation tasks, respectively, as well as achieve competitive semi-supervised classification results on several benchmarks. Importantly, we demonstrate that our method can maintain stability in training and alleviate mode collapse.", "title": "" }, { "docid": "df163d94fbf0414af1dde4a9e7fe7624", "text": "This paper introduces a web image dataset created by NUS's Lab for Media Search. The dataset includes: (1) 269,648 images and the associated tags from Flickr, with a total of 5,018 unique tags; (2) six types of low-level features extracted from these images, including 64-D color histogram, 144-D color correlogram, 73-D edge direction histogram, 128-D wavelet texture, 225-D block-wise color moments extracted over 5x5 fixed grid partitions, and 500-D bag of words based on SIFT descriptions; and (3) ground-truth for 81 concepts that can be used for evaluation. Based on this dataset, we highlight characteristics of Web image collections and identify four research issues on web image annotation and retrieval. We also provide the baseline results for web image annotation by learning from the tags using the traditional k-NN algorithm. The benchmark results indicate that it is possible to learn effective models from sufficiently large image dataset to facilitate general image retrieval.", "title": "" }, { "docid": "718e31eabfd386768353f9b75d9714eb", "text": "The mathematical structure of Sudoku puzzles is akin to hard constraint satisfaction problems lying at the basis of many applications, including protein folding and the ground-state problem of glassy spin systems. Via an exact mapping of Sudoku into a deterministic, continuous-time dynamical system, here we show that the difficulty of Sudoku translates into transient chaotic behavior exhibited by this system. We also show that the escape rate κ, an invariant of transient chaos, provides a scalar measure of the puzzle's hardness that correlates well with human difficulty ratings. Accordingly, η = -log₁₀κ can be used to define a \"Richter\"-type scale for puzzle hardness, with easy puzzles having 0 < η ≤ 1, medium ones 1 < η ≤ 2, hard with 2 < η ≤ 3 and ultra-hard with η > 3. To our best knowledge, there are no known puzzles with η > 4.", "title": "" }, { "docid": "e18f1b9c12538554627b77f76c1608a1", "text": "Deployment of PHEV will initiate an integration of transportation and power systems. Intuitively, the PHEVs will constitute an additional demand to the electricity grid, potentially violating converter or line capacities when recharging. 
Smart management schemes can alleviate possible congestions in power systems, intelligently distributing available energy. As PHEV are inherently independent entities, an agent based approach is expedient. Nonlinear pricing will be adapted to model and manage recharging behavior of large numbers of autonomous PHEV agents connecting in one urban area modelled as an energy hub. The scheme will incorporate price dependability. An aggregation entity, with no private information about its customers, will manage the PHEV agents whose individual parameters will be based on technical constraints and individual objectives. Analysis of the management scheme will give implications for PHEV modelling and integration schemes as well as tentative ideas of possible repercussions on power systems.", "title": "" }, { "docid": "03d5eadaefc71b1da1b26f4e2923a082", "text": "Sleep is characterized by a structured combination of neuronal oscillations. In the hippocampus, slow-wave sleep (SWS) is marked by high-frequency network oscillations (approximately 200 Hz \"ripples\"), whereas neocortical SWS activity is organized into low-frequency delta (1-4 Hz) and spindle (7-14 Hz) oscillations. While these types of hippocampal and cortical oscillations have been studied extensively in isolation, the relationships between them remain unknown. Here, we demonstrate the existence of temporal correlations between hippocampal ripples and cortical spindles that are also reflected in the correlated activity of single neurons within these brain structures. Spindle-ripple episodes may thus constitute an important mechanism of cortico-hippocampal communication during sleep. This coactivation of hippocampal and neocortical pathways may be important for the process of memory consolidation, during which memories are gradually translated from short-term hippocampal to longer-term neocortical stores.", "title": "" }, { "docid": "542d698fbc97e07809c23cbef5bcb799", "text": "Liver fibrosis is a major cause of morbidity and mortality worldwide due to chronic viral hepatitis and, more recently, from fatty liver disease associated with obesity. Hepatic stellate cell activation represents a critical event in fibrosis because these cells become the primary source of extracellular matrix in liver upon injury. Use of cell-culture and animal models has expanded our understanding of the mechanisms underlying stellate cell activation and has shed new light on genetic regulation, the contribution of immune signaling, and the potential reversibility of the disease. As pathways of fibrogenesis are increasingly clarified, the key challenge will be translating new advances into the development of antifibrotic therapies for patients with chronic liver disease.", "title": "" }, { "docid": "f5a8b13fbf2376cf94acd47e5ffe1178", "text": "P is computed by a Seq2Seq model with attention, requires utterance x but not logical form y. ● Active learning score = linear combination of features using weights from binary classifier. ○ Predict if Forward S2S selects utterances. ○ Trained on ATIS dev corpus. ● Binary classifier to predict Forward S2S using ○ RNN LF language model ○ Backward S2S model ● Margins between the best and second best hypotheses ● Source token frequency ● Utterance log loss ● Encoder and decoder last hidden states Backward Classifier", "title": "" }, { "docid": "56d3545ec63503b743a7a80db012d7e5", "text": "Concrete objects used to illustrate mathematical ideas are commonly known as manipulatives. 
Manipulatives are ubiquitous in North American elementary classrooms in the early years, and although they can be beneficial, they do not guarantee learning. In the present study, the authors examined two factors hypothesized to impact second-graders’ learning of place value and regrouping with manipulatives: (a) the sequencing of concrete (base-ten blocks) and abstract (written symbols) representations of the standard addition algorithm; and (b) the level of instructional guidance on the structural relations between the representations. Results from a classroom experiment with second-grade students (N = 87) indicated that place value knowledge increased from pre-test to post-test when the base-ten blocks were presented before the symbols, but only when no instructional guidance was offered. When guidance was given, only students in the symbols-first condition improved their place value knowledge. Students who received instruction increased their understanding of regrouping, irrespective of representational sequence. No effects were found for iterative sequencing of concrete and abstract representations. Practical implications for teaching mathematics with manipulatives are considered.", "title": "" }, { "docid": "9b1769eb8e1991c5e1bb6b58c806d249", "text": "Online reviews play a crucial role in today's electronic commerce. Due to the pervasive spam reviews, customers can be misled to buy low-quality products, while decent stores can be defamed by malicious reviews. We observe that, in reality, a great portion (> 90% in the data we study) of the reviewers write only one review (singleton review). These reviews are so enormous in number that they can almost determine a store's rating and impression. However, existing methods ignore these reviewers. To address this problem, we observe that the normal reviewers' arrival pattern is stable and uncorrelated to their rating pattern temporally. In contrast, spam attacks are usually bursty and either positively or negatively correlated to the rating. Thus, we propose to detect such attacks via unusually correlated temporal patterns. We identify and construct multidimensional time series based on aggregate statistics, in order to depict and mine such correlation. Experimental results show that the proposed method is effective in detecting singleton review attacks. We discover that singleton review is a significant source of spam reviews and largely affects the ratings of online stores.", "title": "" }, { "docid": "9ba3c67136d573c4a10b133a2391d8bc", "text": "Modern text collections often contain large documents that span several subject areas. Such documents are problematic for relevance feedback since inappropriate terms can easi 1y be chosen. This study explores the highly effective approach of feeding back passages of large documents. A less-expensive method that discards long documents is also reviewed and found to be effective if there are enough relevant documents. A hybrid approach that feeds back short documents and passages of long documents may be the best compromise.", "title": "" }, { "docid": "c3112126fa386710fb478dcfe978630e", "text": "In recent years, distributed intelligent microelectromechanical systems (DiMEMSs) have appeared as a new form of distributed embedded systems. DiMEMSs contain thousands or millions of removable autonomous devices, which will collaborate with each other to achieve the final target of the whole system. Programming such systems is becoming an extremely difficult problem. 
The difficulty is due not only to their inherent nature of distributed collaboration, mobility, large scale, and limited resources of their devices (e.g., in terms of energy, memory, communication, and computation) but also to the requirements of real-time control and tolerance for uncertainties such as inaccurate actuation and unreliable communications. As a result, existing programming languages for traditional distributed and embedded systems are not suitable for DiMEMSs. In this article, we first introduce the origin and characteristics of DiMEMSs and then survey typical implementations of DiMEMSs and related research hotspots. Finally, we propose a real-time programming framework that can be used to design new real-time programming languages for DiMEMSs. The framework is composed of three layers: a real-time programming model layer, a compilation layer, and a runtime system layer. The design challenges and requirements of these layers are investigated. The framework is then discussed in further detail and suggestions for future research are given.", "title": "" }, { "docid": "d7c84c8282526c46d63e93091861e04d", "text": "We propose a sketch-based two-step neural model for generating structured queries (SQL) based on a user’s request in natural language. The sketch is obtained by using placeholders for specific entities in the SQL query, such as column names, table names, aliases and variables, in a process similar to semantic parsing. The first step is to apply a sequence-to-sequence (SEQ2SEQ) model to determine the most probable SQL sketch based on the request in natural language. Then, a second network designed as a dual-encoder SEQ2SEQ model using both the text query and the previously obtained sketch is employed to generate the final SQL query. Our approach shows improvements over previous approaches on two recent large datasets (WikiSQL and SENLIDB) suitable for data-driven solutions for natural language interfaces for databases.", "title": "" }, { "docid": "65a4197d7f12c320a34fdd7fcac556af", "text": "The article presents an overview of current specialized ontology engineering tools, as well as texts’ annotation tools based on ontologies. The main functions and features of these tools, their advantages and disadvantages are discussed. A systematic comparative analysis of means for engineering ontologies is presented. ACM Classification", "title": "" }, { "docid": "153c225c68871a76db8249ea70284dfd", "text": "A case of carcinoma of the penis in a 55-year-old landlord is described. He presented with a fungating growth on the shaft of his penis with an unusual history. The lesion started as a nodule in the coronal sulcus leading to thinning of urinary stream and ultimately retention of urine, which was diagnosed and treated as a case of urethral stricture. Wedge biopsy of the growth revealed the case of squamous cell carcinoma of penis. Ultrasonography and CT scan of pelvis and abdomen proved the disease to be localized to penis and total penectomy was carried out.", "title": "" }, { "docid": "62e7bac45a035733b539d0853360c2c8", "text": "192 words] Purpose: To develop a computer based method for the automated assessment of image quality in the context of diabetic retinopathy (DR) to guide the photographer. Methods: A deep learning framework was trained to grade the images automatically. 
A large representative set of 7000 color fundus images were used for the experiment which were obtained from the EyePACS (http://www.eyepacs.com/) that were made available by the California Healthcare Foundation. Three retinal image analysis experts were employed to categorize these images into ‘accept’ and ‘reject’ classes based on the precise definition of image quality in the context of DR. A deep learning framework was trained using 3428 images. Results: A total of 3572 images were used for the evaluation of the proposed method. The method shows an accuracy of 100% to successfully categorise ‘accept’ and ‘reject’ images. Conclusion: Image quality is an essential prerequisite for the grading of DR. In this paper we have proposed a deep learning based automated image quality assessment method in the context of DR. The", "title": "" }, { "docid": "d522f9a8b0d2a870a8142e20acff5028", "text": "Node-list and N-list, two novel data structure proposed in recent years, have been proven to be very efficient for mining frequent itemsets. The main problem of these structures is that they both need to encode each node of a PPC-tree with pre-order and post-order code. This causes that they are memory consuming and inconvenient to mine frequent itemsets. In this paper, we propose Nodeset, a more efficient data structure, for mining frequent itemsets. Nodesets require only the pre-order (or post-order code) of each node, which makes it saves half of memory compared with N-lists and Node-lists. Based on Nodesets, we present an efficient algorithm called FIN to mining frequent itemsets. For evaluating the performance of FIN, we have conduct experiments to compare it with PrePost and FP-growth ⁄ , two state-of-the-art algorithms, on a variety of real and synthetic datasets. The experimental results show that FIN is high performance on both running time and memory usage. Frequent itemset mining, first proposed by Agrawal, Imielinski, and Swami (1993), has become a fundamental task in the field of data mining because it has been widely used in many important data mining tasks such as mining associations, correlations, episodes , and etc. Since the first proposal of frequent itemset mining, hundreds of algorithms have been proposed on various kinds of extensions and applications, ranging from scalable data mining methodologies, to handling a wide diversity of data types, various extended mining tasks, and a variety of new applications (Han, Cheng, Xin, & Yan, 2007). In recent years, we present two data structures called Node-list (Deng & Wang, 2010) and N-list (Deng, Wang, & Jiang, 2012) for facilitating the mining process of frequent itemsets. Both structures use nodes with pre-order and post-order to represent an itemset. Based on Node-list and N-list, two algorithms called PPV (Deng & Wang, 2010) and PrePost (Deng et al., 2012) are proposed, respectively for mining frequent itemsets. The high efficiency of PPV and PrePost is achieved by the compressed characteristic of Node-lists and N-lists. However, they are memory-consuming because Node-lists and N-lists need to encode a node with pre-order and post-order. In addition, the nodes' code model of Node-list and N-list is not suitable to join Node-lists or N-lists of two short itemsets to generate the Node-list or N-list of a long itemset. This may affect the efficiency of corresponding algorithms. 
Therefore, how to design an efficient data structure without …", "title": "" }, { "docid": "22ca260d9c6bc4d5ad9435e247be51c6", "text": "This book is a fruitful discussion of the Internet and social media addiction in the digital era. It includes informative and impressive facts about this issue. It clarifies the consequences of the extreme usage of Internet and social media in four educative chapters. The publisher of this book, “ReferencePoint Press”, introduced the “Compact Research Series” to think deeply with focusing on 3 types of information namely objective single author narratives, opinion-based primary source quotations as well as facts and statistics. The addiction series consists of five books examine the risk and challenges of different addictions, from synthetic drugs and heroin to gambling and Internet addiction. Andrea C. Nakaya, the author of this book has a Master’s degree in Communication from San Diego State University. She has been working as a freelance author for almost a decade.", "title": "" }, { "docid": "e6e19f678bfe46d8390e32f28f1d675d", "text": "In this paper, a miniaturized printed dipole antenna with the V-shaped ground is proposed for radio frequency identification (RFID) readers operating at the frequency of 2.45 GHz. The principles of the microstrip balun and the printed dipole are analyzed and design considerations are formulated. Through extending and shaping the ground to reduce the coupling between the balun and the dipole, the antenna’s impedance bandwidth is broadened and the antenna’s radiation pattern is improved. The 3D finite difference time domain (FDTD) Electromagnetic simulations are carried out to evaluate the antenna’s performance. The effects of the extending angle and the position of the ground are investigated to obtain the optimized parameters. The antenna was fabricated and measured in a microwave anechoic chamber. The results show that the proposed antenna achieves a broader impedance bandwidth, a higher forward radiation gain and a stronger suppression to backward radiation compared with the one without such a ground.", "title": "" }, { "docid": "156c62aac106229928ba323cfb9bd53f", "text": "The Internet is becoming increasingly influential, but some observers have noted that heavy Internet users seem alienated from normal social contacts and may even cut these off as the Internet becomes the predominate social factor in their lives. Kraut, Patterson, Lundmark, Kiesler, Mukopadhyay, and Scherlis [American Psychologist 53 (1998) 65] carried out a longitudinal study from which they concluded that Internet use leads to loneliness among its users. However, their study did not take into account that the population of Internet users is not uniform and comprises many different personality types. People use the Internet in a variety of ways in keeping with their own personal preference. Therefore, the results of this interaction between personality and Internet use are likely to vary among different individuals and similarly the impact on user well-being will not be uniform. One of the personality characteristics that has been found to influence Internet use is that of extroversion and neuroticism [Hamburger & Ben-Artzi, Computers in Human Behavior 16 (2000) 441]. For this study, 89 participants completed questionnaires pertaining to their own Internet use and feelings of loneliness and extroversion and neuroticism. The results were compared to two models (a) the Kraut et al. 
(1998) model which argues that Internet use leads to loneliness (b) an alternative model which argues that it is those people who are already lonely who spend time on the Internet. A satisfactory goodness of fit was found for the alternative model. Building on these results, several different directions are suggested for continuing research in this field.", "title": "" } ]
scidocsrr
2f19614f7c6f106109ed13a716534cbe
Active pebbles: parallel programming for data-driven applications
[ { "docid": "032589c39e258890e29196ca013a3e22", "text": "We describe Charm++, an object oriented portable parallel programming language based on Cff. Its design philosophy, implementation, sample applications and their performance on various parallel machines are described. Charm++ is an explicitly parallel language consisting of Cft with a few extensions. It provides a clear separation between sequential and parallel objects. The execution model of Charm++ is message driven, thus helping one write programs that are latencytolerant. The language supports multiple inheritance, dynamic binding, overloading, strong typing, and reuse for parallel objects. Charm++ provides specific modes for sharing information between parallel objects. Extensive dynamic load balancing strategies are provided. It is based on the Charm parallel programming system, and its runtime system implementation reuses most of the runtime system for Charm.", "title": "" } ]
[ { "docid": "72d47983c009c7892155fc3c491c9f52", "text": "To improve the stability accuracy of stable platform of unmanned aerial vehicle (UAV), a line-of-sight stabilized control system is developed by using an inertial and optical-mechanical (fast steering mirror) combined method in a closed loop with visual feedback. The system is based on Peripheral Component Interconnect (PCI), included an image-deviation-obtained system and a combined controller using a PQ method. The method changes the series-wound structure to the shunt-wound structure of dual-input/single-output (DISO), and decouples the actuator range and frequency of inertial stabilization and fast steering mirror stabilization. Test results show the stability accuracy improves from 20μrad of inertial method to 5μrad of inertial and optical-mechanical combined method, and prove the effectiveness of the combined line-of-sight stabilization control system.", "title": "" }, { "docid": "f896ba5c4009f83cccff857af6d9ef0d", "text": "Based on the frameworks of dual-process theories, we examined the interplay between intuitive and controlled cognitive processes related to moral and social judgments. In a virtual reality (VR) setting we performed an experiment investigating the progression from fast, automatic decisions towards more controlled decisions over multiple trials in the context of a sacrificing scenario. We repeatedly exposed participants to a modified ten-to-one version and to three one-to-one versions of the trolley dilemma in VR and varied avatar properties, such as their gender and ethnicity, and their orientation in space. We also investigated the influence of arousing music on decisions. Our experiment replicated the behavioral pattern observed in studies using text versions of the trolley dilemma, thereby validating the use of virtual environments in research on moral judgments. Additionally, we found a general tendency towards sacrificing male individuals which correlated with socially desirable responding. As indicated by differences in response times, the ten-to-one version of the trolley dilemma seems to be faster to decide than decisions requiring comparisons based on specific avatar properties as a result of differing moral content. Building upon research on music-based emotion induction, we used music to induce emotional arousal on a physiological level as measured by pupil diameter. We found a specific temporal signature displaying a peak in arousal around the moment of decision. This signature occurs independently of the overall arousal level. Furthermore, we found context-dependent gaze durations during sacrificing decisions, leading participants to look prolonged at their victim if they had to choose between avatars differing in gender. Our study confirmed that moral decisions can be explained within the framework of dual-process theories and shows that pupillometric measurements are a promising tool for investigating affective responses in dilemma situations.", "title": "" }, { "docid": "807026f9ebe3d3b7fdcfcd388223b811", "text": "A fundamental problem related to graph structured databases is searching for substructures. One issue with respect to optimizing such searches is the ability to estimate the frequency of substructures within a query graph. In this work, we present and evaluate two techniques for estimating the frequency of subgraphs from a summary of the data graph. 
In the first technique, we assume that edge occurrences on edge sequences are position independent and summarize only the most informative dependencies. In the second technique, we prune small subgraphs using a valuation scheme that blends information about their importance and estimation power. In both techniques, we assume conditional independence to estimate the frequencies of larger subgraphs. We validate the effectiveness of our techniques through experiments on real and synthetic datasets.", "title": "" }, { "docid": "6042abbb698a8d8be6ea87690db9fbd2", "text": "Machine learning is used in a number of security related applications such as biometric user authentication, speaker identification etc. A type of causative integrity attack against machine le arning called Poisoning attack works by injecting specially crafted data points in the training data so as to increase the false positive rate of the classifier. In the context of the biometric authentication, this means that more intruders will be classified as valid user, and in case of speaker identification system, user A will be classified user B. In this paper, we examine poisoning attack against SVM and introduce Curie a method to protect the SVM classifier from the poisoning attack. The basic idea of our method is to identify the poisoned data points injected by the adversary and filter them out. Our method is light weight and can be easily integrated into existing systems. Experimental results show that it works very well in filtering out the poisoned data.", "title": "" }, { "docid": "e947cf1b4670c10f2453b9012078c3b5", "text": "BACKGROUND\nDyadic suicide pacts are cases in which two individuals (and very rarely more) agree to die together. These account for fewer than 1% of all completed suicides.\n\n\nOBJECTIVE\nThe authors describe two men in a long-term domestic partnership who entered into a suicide pact and, despite utilizing a high-lethality method (simultaneous arm amputation with a power saw), survived.\n\n\nMETHOD\nThe authors investigated the psychiatric, psychological, and social causes of suicide pacts by delving into the history of these two participants, who displayed a very high degree of suicidal intent. Psychiatric interviews and a family conference call, along with the strong support of one patient's family, were elicited.\n\n\nRESULTS\nThe patients, both HIV-positive, showed high levels of depression and hopelessness, as well as social isolation and financial hardship. With the support of his family, one patient was discharged to their care, while the other partner was hospitalized pending reunion with his partner.\n\n\nDISCUSSION\nThis case illustrates many of the key, defining features of suicide pacts that are carried out and also highlights the nature of the dependency relationship.", "title": "" }, { "docid": "4b854c1c1ed2ece94e88b7300b1395fa", "text": "Spam web pages intend to achieve higher-than-deserved ranking by various techniques. While human experts could easily identify spam web pages, the manual evaluating process of a large number of pages is still time consuming and cost consuming. To assist manual evaluation, we propose an algorithm to assign spam values to web pages and semi-automatically select potential spam web pages. We first manually select a small set of spam pages as seeds. Then, based on the link structure of the web, the initial R-SpamRank values assigned to the seed pages propagate through links and distribute among the whole web page set. 
After sorting the pages according to their R-SpamRank values, the pages with high values are selected. Our experiments and analyses show that the algorithm is highly successful in identifying spam pages, achieving a precision of 99.1% in the top 10,000 web pages with the highest R-SpamRank values.", "title": "" },
{ "docid": "3a3b898ae050456c7bf2b5997f7c12ca", "text": "The Budeanu definitions of reactive and distortion power in circuits with nonsinusoidal waveforms have been widely used for almost 60 years. There have been objections, concerned mainly with the questions of whether these powers should be defined in the frequency domain and whether they can be measured as defined. The main drawbacks of these definitions include the fact that the Budeanu reactive and distortion powers do not possess any attributes which might be related to the power phenomena in the circuit; that their values do not provide any information which would allow the design of compensating circuits; and that the distortion power value does not provide any information about waveform distortion. It is concluded that Budeanu's concept has led the power theory of circuits with nonsinusoidal waveforms into a blind alley.", "title": "" },
{ "docid": "259794d0416876b6c490fba53f2eaf69", "text": "Nowadays, the classification and grading are performed based on observations and through experience. The system utilizes image-processing techniques to classify and grade fruits. The developed system starts the process by capturing the fruit’s image using a regular digital camera. Then, the image is transmitted to the processing level where feature extraction, classification and grading are done using MATLAB. The fruits are classified based on color and graded based on size. Both classification and grading are realized by a Fuzzy Logic approach. The results obtained are very promising.", "title": "" },
{ "docid": "c3566171b68e4025931a72064e74e4ae", "text": "Training a Fully Convolutional Network (FCN) for semantic segmentation requires a large number of pixel-level masks, which involves a large amount of human labour and time for annotation. In contrast, image-level labels are much easier to obtain. In this work, we propose a novel method for weakly supervised semantic segmentation with only image-level labels. The method relies on a large scale co-segmentation framework that can produce object masks for a group of images containing objects belonging to the same semantic class. We first retrieve images from search engines, e.g. Flickr and Google, using semantic class names as queries, e.g. class names in PASCAL VOC 2012. We then use high quality masks produced by co-segmentation on the retrieved images as well as the target dataset images with image level labels to train segmentation networks. We obtain IoU 56.9 on test set of PASCAL VOC 2012, which reaches state of the art performance.", "title": "" },
{ "docid": "7e5c3e774572e59180637da0d3b2d71a", "text": "Relationship marketing—establishing, developing, and maintaining successful relational exchanges—constitutes a major shift in marketing theory and practice. 
After conceptualizing relationship marketing and discussing its ten forms, the authors (1) theorize that successful relationship marketing requires relationship commitment and trust, (2) model relationship commitment and trust as key mediating variables, (3) test this key mediating variable model using data from automobile tire retailers, and (4) compare their model with a rival that does not allow relationship commitment and trust to function as mediating variables. Given the favorable test results for the key mediating variable model, suggestions for further explicating and testing it are offered.", "title": "" },
{ "docid": "49329aef5ac732cc87b3cc78520c7ff5", "text": "This paper surveys the previous and ongoing research on surface electromyogram (sEMG) signal processing implementation through various hardware platforms. The development of a system that incorporates sEMG analysis capability is essential in rehabilitation devices, prosthesis arm/limb and pervasive healthcare in general. Most advanced EMG signal processing algorithms rely heavily on the computational resource of a PC, which negates the elements of portability, size and power dissipation of a pervasive healthcare system. Signal processing techniques applicable to sEMG are discussed with an aim for proper execution on platforms other than a full-fledged PC. Performance and design parameter issues in some hardware implementations are also pointed out. The paper also outlines the trends and alternative solutions in developing portable and efficient EMG signal processing hardware.", "title": "" },
{ "docid": "d1e43c347f708547aefa07b3c83ee428", "text": "Studies using Nomura et al.’s “Negative Attitude toward Robots Scale” (NARS) [1] as an attitudinal measure have featured robots that were perceived to be autonomous, independent agents. State of the art telepresence robots require an explicit human-in-the-loop to drive the robot around. In this paper, we investigate if NARS can be used with telepresence robots. To this end, we conducted three studies in which people watched videos of telepresence robots (n=70), operated telepresence robots (n=38), and interacted with telepresence robots (n=12). Overall, the results from our three studies indicated that NARS may be applied to telepresence robots, and culture, gender, and prior robot experience can be influential factors on the NARS score.", "title": "" },
{ "docid": "8216a6da70affe452ec3c5998e3c77ba", "text": "In this paper, a rectangular microstrip patch antenna fed by a microstrip line is designed to operate for ultra-wide band applications. It consists of a rectangular patch with U-shaped slot on one side of the substrate and a finite ground plane on the other side. The U-shaped slot and the finite ground plane are used to achieve an excellent impedance matching to increase the bandwidth. The proposed antenna is designed and optimized based on extensive 3D EM simulation studies. The proposed antenna is designed to operate over a frequency range from 3.6 to 15 GHz.", "title": "" },
{ "docid": "adabd3971fa0abe5c60fcf7a8bb3f80c", "text": "The present paper describes the development of a query-focused multi-document automatic summarization system. A graph is constructed, where the nodes are sentences of the documents and edge scores reflect the correlation measure between the nodes. The system clusters similar texts having related topical features from the graph using edge scores. 
Next, query dependent weights for each sentence are added to the edge score of the sentence and accumulated with the corresponding cluster score. Top ranked sentence of each cluster is identified and compressed using a dependency parser. The compressed sentences are included in the output summary. The inter-document cluster is revisited in order until the length of the summary is less than the maximum limit. The summarizer has been tested on the standard TAC 2008 test data sets of the Update Summarization Track. Evaluation of the summarizer yielded accuracy scores of 0.10317 (ROUGE-2) and 0.13998 (ROUGE–SU-4).", "title": "" }, { "docid": "2b438492cbcb9e77b93c7a23b74f02e6", "text": "The invention of the automobile has transformed how people live, work, and interact in society. Today, with an ever-increasing number of in-vehicle options/activities, as well as the increasing demands being placed on the driver, vehicle platform, and transportation infrastructure, more is being asked of engineers, designers, scientists, and transportation specialists. Signal processing is playing an increasingly substantial role in this domain, including such general topics as monitoring driver distraction, vehicle lane/control detection/tracking, driver assistance through autonomous platforms, and vehicle infrastructure support and planning/monitoring. The diversity of these problems requires a more collaborative effort from engineers and scientists in a diverse set of specialties. The impact to society is massive, including such broad aspects as 1) safety, 2) commerce (i.e., sales and support/maintenance of vehicles), 3) energy costs (i.e., fossil fuel consumption, etc.), and 4) population mobility for effective traffic management. How will signal processing advance today’s vehicles into “smart” cars that are able to think and contribute to the task of operating a vehicle? What safety concerns are there in moving from a 100% driver-controlled vehicle, to driver assistive technologies (e.g., cruise control, assistive braking, lane-departure monitoring, etc.), to full autonomous driving? Many new and emerging challenges arise and need to be addressed in collaborative ways. This special issue provides a venue for summarizing, educating, and sharing the state of the art in signal processing applied to the domain of automotive systems. Due to the significance of this topic from both an engineering/technology as well as a global society perspective, this special issue of IEEE Signal Processing Ma ­ gazine will appear in two parts (part 1 is the current issue, and part 2 is scheduled to be published in the spring of 2017). 
Highlighted below is the scope of topics addressed in varying degrees by the articles that are explored in both parts: ■ digital signal processing technologies in adaptive automobiles, diagnosis, and maintenance ■ speech, hands-free, and in-car communication algorithms and evaluation ■ in-vehicle dialog systems and human-machine interfaces ■ driver-status monitoring and distraction/stress detection ■ computer vision methods for vehicle recognition and assisted driving ■ multisensor fusion for driver identification and robust driver monitoring ■ signal processing for position and velocity estimation and control ■ signal processing for green vehiclerelated energy management ■ vehicle-to-vehicle and vehicle-toinfrastructure communications and networking ■ autonomous, semiautonomous, and networked vehicular control ■ human factors and cognitive science in enhancing vehicle and driver safety ■ machine learning and data analytics associated with automotive systems ■ issues regarding security and privacy aspects for smart vehicle systems. In planning this special issue, we worked extensively to ensure a wide representation of the field. A large number of white papers were received, and the authors of a select set of white papers were invited to submit full papers that were then peer reviewed. Six articles appearing in the current issue span a broad range of signal processing for vehicle systems. The first group contains three articles that address driver behavior and monitoring: “Driver-Behavior Modeling Using OnRoad Driving Data,” by Miyajima and Takeda, “Driver Status Monitoring Systems for Smart Vehicles Using Physiological Sensors” by Choi et al., and “Smart Driver Monitoring: When Signal Processing Meets Human Factors” by Aghaei at al. Next, Weng et al.’s article, “Conversational In-Vehicle Dialog Signal Processing for Smart Vehicle Technologies", "title": "" }, { "docid": "8ac205b5b2344b64e926a5e18e43322f", "text": "In 2015, Google's Deepmind announced an advancement in creating an autonomous agent based on deep reinforcement learning (DRL) that could beat a professional player in a series of 49 Atari games. However, the current manifestation of DRL is still immature, and has significant drawbacks. One of DRL's imperfections is its lack of \"exploration\" during the training process, especially when working with high-dimensional problems. In this paper, we propose a mixed strategy approach that mimics behaviors of human when interacting with environment, and create a \"thinking\" agent that allows for more efficient exploration in the DRL training process. The simulation results based on the Breakout game show that our scheme achieves a higher probability of obtaining a maximum score than does the baseline DRL algorithm, i.e., the asynchronous advantage actor-critic method. The proposed scheme therefore can be applied effectively to solving a complicated task in a real-world application.", "title": "" }, { "docid": "92da117d31574246744173b339b0d055", "text": "We present a method for gesture detection and localization based on multi-scale and multi-modal deep learning. Each visual modality captures spatial information at a particular spatial scale (such as motion of the upper body or a hand), and the whole system operates at two temporal scales. Key to our technique is a training strategy which exploits i) careful initialization of individual modalities; and ii) gradual fusion of modalities from strongest to weakest cross-modality structure. 
We present experiments on the ChaLearn 2014 Looking at People Challenge gesture recognition track, in which we placed first out of 17 teams.", "title": "" },
{ "docid": "5e95aaa54f8acf073ccc11c08c148fe0", "text": "Billions of dollars of loss are caused every year due to fraudulent credit card transactions. The design of efficient fraud detection algorithms is key for reducing these losses, and more and more algorithms rely on advanced machine learning techniques to assist fraud investigators. The design of fraud detection algorithms is however particularly challenging due to the non-stationary distribution of the data, highly imbalanced class distributions and continuous streams of transactions. At the same time public data are scarcely available for confidentiality issues, leaving unanswered many questions about which is the best strategy to deal with them. In this paper we provide some answers from the practitioner’s perspective by focusing on three crucial issues: unbalancedness, non-stationarity and assessment. The analysis is made possible by a real credit card dataset provided by our industrial partner.", "title": "" },
{ "docid": "c380f89ac91ce532b9f0250ce487fe5e", "text": "Starting in the seventies, face recognition has become one of the most researched topics in computer vision and biometrics. Traditional methods based on hand-crafted features and traditional machine learning techniques have recently been superseded by deep neural networks trained with very large datasets. In this paper we provide a comprehensive and up-to-date literature review of popular face recognition methods including both traditional (geometry-based, holistic, feature-based and hybrid methods) and deep learning methods.", "title": "" },
{ "docid": "df114f9765d4c0bba7371c243bad8608", "text": "CAPTCHAs are automated tests to tell computers and humans apart. They are designed to be easily solvable by humans, but unsolvable by machines. With Convolutional Neural Networks these tests can also be solved automatically. However, the strength of CNNs relies on the training data that the classifier is learnt on and especially on the size of the training set. Hence, it is intractable to solve the problem with CNNs in case of insufficient training data. We propose an Active Deep Learning strategy that makes use of the ability to gain new training data for free without any human intervention, which is possible in the special case of CAPTCHAs. We discuss how to choose the new samples to re-train the network and present results on an auto-generated CAPTCHA dataset. Our approach dramatically improves the performance of the network if we initially have only a few labeled training data.", "title": "" } ]
scidocsrr
6383d9343f894767bcf1fd0f5d0a0c0d
A Simple Machine Learning Method for Commonsense Reasoning? A Short Commentary on Trinh & Le (2018)
[ { "docid": "485cda7203863d2ff0b2070ca61b1126", "text": "Interestingly, understanding natural language that you really wait for now is coming. It's significant to wait for the representative and beneficial books to read. Every book that is provided in better way and utterance will be expected by many peoples. Even you are a good reader or not, feeling to read this book will always appear when you find it. But, when you feel hard to find it as yours, what to do? Borrow to your friends and don't know when to give back it to her or him.", "title": "" } ]
[ { "docid": "9bdd5424d73375a44c3461ffe456a844", "text": "A new suspended plate antenna is presented for the enhancement of impedance bandwidth. The probe-fed plate antenna is suspended above a ground plane and its center portion is concaved to form a \"V\" shape. The experiment and simulation show that without increase in size the proposed antenna is capable of providing an impedance bandwidth of up to 60% for |S/sub 11/|<-10 dB with an acceptable gain of 8 dBi.", "title": "" }, { "docid": "6033682cf01008f027877e3fda4511f8", "text": "The HER-2/neu oncogene is a member of the erbB-like oncogene family, and is related to, but distinct from, the epidermal growth factor receptor. This gene has been shown to be amplified in human breast cancer cell lines. In the current study, alterations of the gene in 189 primary human breast cancers were investigated. HER-2/neu was found to be amplified from 2- to greater than 20-fold in 30% of the tumors. Correlation of gene amplification with several disease parameters was evaluated. Amplification of the HER-2/neu gene was a significant predictor of both overall survival and time to relapse in patients with breast cancer. It retained its significance even when adjustments were made for other known prognostic factors. Moreover, HER-2/neu amplification had greater prognostic value than most currently used prognostic factors, including hormonal-receptor status, in lymph node-positive disease. These data indicate that this gene may play a role in the biologic behavior and/or pathogenesis of human breast cancer.", "title": "" }, { "docid": "a33348ee1396be9be333eb3be8dadb39", "text": "In the multi-MHz low voltage, high current applications, Synchronous Rectification (SR) is strongly needed due to the forward recovery and the high conduction loss of the rectifier diodes. This paper applies the SR technique to a 10-MHz isolated class-Φ2 resonant converter and proposes a self-driven level-shifted Resonant Gate Driver (RGD) for the SR FET. The proposed RGD can reduce the average on-state resistance and the associated conduction loss of the MOSFET. It also provides precise switching timing for the SR so that the body diode conduction time of the SR FET can be minimized. A 10-MHz prototype with 18 V input, 5 V/2 A output was built to verify the advantage of the SR with the proposed RGD. At full load of 2 A, the SR with the proposed RGD improves the converter efficiency from 80.2% using the SR with the conventional RGD to 82% (an improvement of 1.8%). Compared to the efficiency of 77.3% using the diode rectification, the efficiency improvement is 4.7%.", "title": "" }, { "docid": "d277a7e6a819af474b31c7a35b9c840f", "text": "Blending face geometry in different expressions is a popular approach for facial animation in films and games. The quality of the animation relies on the set of blend shape expressions, and creating sufficient blend shapes takes a large amount of time and effort. This paper presents a complete pipeline to create a set of blend shapes in different expressions for a face mesh having only a neutral expression. A template blend shapes model having sufficient expressions is provided and the neutral expression of the template mesh model is registered into the target face mesh using a non-rigid ICP (iterative closest point) algorithm. Deformation gradients between the template and target neutral mesh are then transferred to each expression to form a new set of blend shapes for the target face. 
We solve an optimization problem to consistently map the deformation of the source blend shapes to the target face model. The result is a new set of blend shapes for a target mesh having triangle-wise correspondences between the source face and target faces. After creating blend shapes, the blend shape animation of the source face is retargeted to the target mesh automatically.", "title": "" }, { "docid": "38863f217a610af5378c42e03cd3fe3c", "text": "In human movement learning, it is most common to teach constituent elements of complex movements in isolation, before chaining them into complex movements. Segmentation and recognition of observed movement could thus proceed out of this existing knowledge, which is directly compatible with movement generation. In this paper, we address exactly this scenario. We assume that a library of movement primitives has already been taught, and we wish to identify elements of the library in a complex motor act, where the individual elements have been smoothed together, and, occasionally, there might be a movement segment that is not in our library yet. We employ a flexible machine learning representation of movement primitives based on learnable nonlinear attractor system. For the purpose of movement segmentation and recognition, it is possible to reformulate this representation as a controlled linear dynamical system. An Expectation-Maximization algorithm can be developed to estimate the open parameters of a movement primitive from the library, using as input an observed trajectory piece. If no matching primitive from the library can be found, a new primitive is created. This process allows a straightforward sequential segmentation of observed movement into known and new primitives, which are suitable for robot imitation learning. We illustrate our approach with synthetic examples and data collected from human movement. Appearing in Proceedings of the 15th International Conference on Artificial Intelligence and Statistics (AISTATS) 2012, La Palma, Canary Islands. Volume XX of JMLR: W&CP XX. Copyright 2012 by the authors.", "title": "" }, { "docid": "2c704a11e212b90520e92adf85696674", "text": "The authors in this study examined the function and public reception of critical tweeting in online campaigns of four nationalist populist politicians during major national election campaigns. Using a mix of qualitative coding and case study inductive methods, we analyzed the tweets of Narendra Modi, Nigel Farage, Donald Trump, and Geert Wilders before the 2014 Indian general elections, the 2016 UK Brexit referendum, the 2016 US presidential election, and the 2017 Dutch general election, respectively. Our data show that Trump is a consistent outlier in terms of using critical language on Twitter when compared to Wilders, Farage, and Modi, but that all four leaders show significant investment in various forms of antagonistic messaging including personal insults, sarcasm, and labeling, and that these are rewarded online by higher retweet rates. Building on the work of Murray Edelman and his notion of a political spectacle, we examined Twitter as a performative space for critical rhetoric within the frame of nationalist politics. We found that cultural and political differences among the four settings also impact how each politician employs these tactics. Our work proposes that studies of social media spaces need to bring normative questions into traditional notions of collaboration.
As we show here, political actors may benefit from in-group coalescence around antagonistic messaging, which while serving as a call to arms for online collaboration for those ideologically aligned, may on a societal level lead to greater polarization.", "title": "" }, { "docid": "821b88f996e98216b83fcc9d6e6a4450", "text": "Rotation invariant multiview face detection (MVFD) aims to detect faces with arbitrary rotation-in-plane (RIP) and rotation-off-plane (ROP) angles in still images or video sequences. MVFD is crucial as the first step in automatic face processing for general applications since face images are seldom upright and frontal unless they are taken cooperatively. In this paper, we propose a series of innovative methods to construct a high-performance rotation invariant multiview face detector, including the width-first-search (WFS) tree detector structure, the vector boosting algorithm for learning vector-output strong classifiers, the domain-partition-based weak learning method, the sparse feature in granular space, and the heuristic search for sparse feature selection. As a result of that, our multiview face detector achieves low computational complexity, broad detection scope, and high detection accuracy on both standard testing sets and real-life images", "title": "" }, { "docid": "755535335da1eb05e4b4a163a8f3d2ac", "text": "Calcium pyrophosphate (CPP) crystal deposition (CPPD) is associated with ageing and osteoarthritis, and with uncommon disorders such as hyperparathyroidism, hypomagnesemia, hemochromatosis and hypophosphatasia. Elevated levels of synovial fluid pyrophosphate promote CPP crystal formation. This extracellular pyrophosphate originates either from the breakdown of nucleotide triphosphates by plasma-cell membrane glycoprotein 1 (PC-1) or from pyrophosphate transport by the transmembrane protein progressive ankylosis protein homolog (ANK). Although the etiology of apparent sporadic CPPD is not well-established, mutations in the ANK human gene (ANKH) have been shown to cause familial CPPD. In this Review, the key regulators of pyrophosphate metabolism and factors that lead to high extracellular pyrophosphate levels are described. Particular emphasis is placed on the mechanisms by which mutations in ANKH cause CPPD and the clinical phenotype of these mutations is discussed. Cartilage factors predisposing to CPPD and CPP-crystal-induced inflammation and current treatment options for the management of CPPD are also described.", "title": "" }, { "docid": "ed4454269322ed7a083e05bb222b28d9", "text": "To address devastating environmental crises and to improve human well-being, China has been implementing a number of national policies on payments for ecosystem services. Two of them, the Natural Forest Conservation Program (NFCP) and the Grain to Green Program (GTGP), are among the biggest programs in the world because of their ambitious goals, massive scales, huge payments, and potentially enormous impacts. The NFCP conserves natural forests through logging bans and afforestation with incentives to forest enterprises, whereas the GTGP converts cropland on steep slopes to forest and grassland by providing farmers with grain and cash subsidies. Overall ecological effects are beneficial, and socioeconomic effects are mostly positive. Whereas there are time lags in ecological effects, socioeconomic effects are more immediate. 
Both the NFCP and the GTGP also have global implications because they increase vegetative cover, enhance carbon sequestration, and reduce dust to other countries by controlling soil erosion. The future impacts of these programs may be even bigger. Extended payments for the GTGP have recently been approved by the central government for up to 8 years. The NFCP is likely to follow suit and receive renewed payments. To make these programs more effective, we recommend systematic planning, diversified funding, effective compensation, integrated research, and comprehensive monitoring. Effective implementation of these programs can also provide important experiences and lessons for other ecosystem service payment programs in China and many other parts of the world.", "title": "" }, { "docid": "216698730aa68b3044f03c64b77e0e62", "text": "Portable biomedical instrumentation has become an important part of diagnostic and treatment instrumentation. Low-voltage and low-power tendencies prevail. A two-electrode biopotential amplifier, designed for low-supply voltage (2.7–5.5 V), is presented. This biomedical amplifier design has high differential and sufficiently low common mode input impedances achieved by means of positive feedback, implemented with an original interface stage. The presented circuit makes use of passive components of popular values and tolerances. The amplifier is intended for use in various two-electrode applications, such as Holter monitors, external defibrillators, ECG monitors and other heart beat sensing biomedical devices.", "title": "" }, { "docid": "cb0803dfd3763199519ff3f4427e1298", "text": "Motion deblurring is a long standing problem in computer vision and image processing. In most previous approaches, the blurred image is modeled as the convolution of a latent intensity image with a blur kernel. However, for images captured by a real camera, the blur convolution should be applied to scene irradiance instead of image intensity and the blurred results need to be mapped back to image intensity via the camera’s response function (CRF). In this paper, we present a comprehensive study to analyze the effects of CRFs on motion deblurring. We prove that the intensity-based model closely approximates the irradiance model at low frequency regions. However, at high frequency regions such as edges, the intensity-based approximation introduces large errors and directly applying deconvolution on the intensity image will produce strong ringing artifacts even if the blur kernel is invertible. Based on the approximation error analysis, we further develop a dualimage based solution that captures a pair of sharp/blurred images for both CRF estimation and motion deblurring. Experiments on synthetic and real images validate our theories and demonstrate the robustness and accuracy of our approach.", "title": "" }, { "docid": "e7c8abf3387ba74ca0a6a2da81a26bc4", "text": "An experiment was conducted to test the relationships between users' perceptions of a computerized system's beauty and usability. The experiment used a computerized application as a surrogate for an Automated Teller Machine (ATM). Perceptions were elicited before and after the participants used the system. Pre-experimental measures indicate strong correlations between system's perceived aesthetics and perceived usability. Post-experimental measures indicated that the strong correlation remained intact. 
A multivariate analysis of covariance revealed that the degree of system's aesthetics affected the post-use perceptions of both aesthetics and usability, whereas the degree of actual usability had no such effect. The results resemble those found by social psychologists regarding the effect of physical attractiveness on the valuation of other personality attributes. The findings stress the importance of studying the aesthetic aspect of human-computer interaction (HCI) design and its relationships to other design dimensions. © 2000 Elsevier Science B.V. All rights reserved.", "title": "" }, { "docid": "b966af7f15e104865944ac44fad23afc", "text": "Five cases are described where minute foci of adenocarcinoma have been demonstrated in the mesorectum several centimetres distal to the apparent lower edge of a rectal cancer. In 2 of these there was no other evidence of lymphatic spread of the tumour. In orthodox anterior resection much of this tissue remains in the pelvis, and it is suggested that these foci might lead to suture-line or pelvic recurrence. Total excision of the mesorectum has, therefore, been carried out as a part of over 100 consecutive anterior resections. Fifty of these, which were classified as 'curative' or 'conceivably curative' operations, have now been followed for over 2 years with no pelvic or staple-line recurrence.", "title": "" }, { "docid": "4daa16553442aa424a1578f02f044c6e", "text": "Cluster structure of gene expression data obtained from DNA microarrays is analyzed and visualized with the Self-Organizing Map (SOM) algorithm. The SOM forms a non-linear mapping of the data to a two-dimensional map grid that can be used as an exploratory data analysis tool for generating hypotheses on the relationships, and ultimately of the function of the genes. Similarity relationships within the data and cluster structures can be visualized and interpreted. The methods are demonstrated by computing a SOM of yeast genes. The relationships of known functional classes of genes are investigated by analyzing their distribution on the SOM, the cluster structure is visualized by the U-matrix method, and the clusters are characterized in terms of the properties of the expression profiles of the genes. Finally, it is shown that the SOM visualizes the similarity of genes in a more trustworthy way than two alternative methods, multidimensional scaling and hierarchical clustering.", "title": "" }, { "docid": "d01a22301de1274220a16351d14d4d83", "text": "In this paper, we propose a solution to the problems and features encountered in the geometric modeling of the 6 DOF manipulator arm, the Fanuc. Among these is the singularity of the Jacobian matrix obtained from the kinematic model, which has a great influence on the boundaries and accessibility of the manipulator robot's workspace and reduces the number of solutions found. We can decompose this matrix into several sub-matrices of smaller dimensions, ultimately yielding a non-linear equation with two unknowns. We validate our work by building a simulation software platform that allows us to verify the results of manipulation in a virtual reality environment based on VRML and Matlab software, integrated with the CAD model.", "title": "" }, { "docid": "c993d3a77bcd272e8eadc66155ee15e1", "text": "This paper presents animated pose templates (APTs) for detecting short-term, long-term, and contextual actions from cluttered scenes in videos.
Each pose template consists of two components: 1) a shape template with deformable parts represented in an And-node whose appearances are represented by the Histogram of Oriented Gradient (HOG) features, and 2) a motion template specifying the motion of the parts by the Histogram of Optical-Flows (HOF) features. A shape template may have more than one motion template represented by an Or-node. Therefore, each action is defined as a mixture (Or-node) of pose templates in an And-Or tree structure. While this pose template is suitable for detecting short-term action snippets in two to five frames, we extend it in two ways: 1) For long-term actions, we animate the pose templates by adding temporal constraints in a Hidden Markov Model (HMM), and 2) for contextual actions, we treat contextual objects as additional parts of the pose templates and add constraints that encode spatial correlations between parts. To train the model, we manually annotate part locations on several keyframes of each video and cluster them into pose templates using EM. This leaves the unknown parameters for our learning algorithm in two groups: 1) latent variables for the unannotated frames including pose-IDs and part locations, 2) model parameters shared by all training samples such as weights for HOG and HOF features, canonical part locations of each pose, coefficients penalizing pose-transition and part-deformation. To learn these parameters, we introduce a semi-supervised structural SVM algorithm that iterates between two steps: 1) learning (updating) model parameters using labeled data by solving a structural SVM optimization, and 2) imputing missing variables (i.e., detecting actions on unlabeled frames) with parameters learned from the previous step and progressively accepting high-score frames as newly labeled examples. This algorithm belongs to a family of optimization methods known as the Concave-Convex Procedure (CCCP) that converge to a local optimal solution. The inference algorithm consists of two components: 1) Detecting top candidates for the pose templates, and 2) computing the sequence of pose templates. Both are done by dynamic programming or, more precisely, beam search. In experiments, we demonstrate that this method is capable of discovering salient poses of actions as well as interactions with contextual objects. We test our method on several public action data sets and a challenging outdoor contextual action data set collected by ourselves. The results show that our model achieves comparable or better performance compared to state-of-the-art methods.", "title": "" }, { "docid": "45b5b7256dc791d8276bf328b833b09c", "text": "Today, embedded, mobile, and cyberphysical systems are ubiquitous and used in many applications, from industrial control systems, modern vehicles, to critical infrastructure. Current trends and initiatives, such as \"Industrie 4.0\" and Internet of Things (IoT), promise innovative business models and novel user experiences through strong connectivity and effective use of next generation of embedded devices. These systems generate, process, and exchange vast amounts of security-critical and privacy-sensitive data, which makes them attractive targets of attacks. Cyberattacks on IoT systems are very critical since they may cause physical damage and even threaten human lives. 
The complexity of these systems and the potential impact of cyberattacks bring upon new threats.\n This paper gives an introduction to Industrial IoT systems, the related security and privacy challenges, and an outlook on possible solutions towards a holistic security framework for Industrial IoT systems.", "title": "" }, { "docid": "804b320c6f5b07f7f4d7c5be29c572e9", "text": "Softmax is the most commonly used output function for multiclass problems and is widely used in areas such as vision, natural language processing, and recommendation. A softmax model has linear costs in the number of classes which makes it too expensive for many real-world problems. A common approach to speed up training involves sampling only some of the classes at each training step. It is known that this method is biased and that the bias increases the more the sampling distribution deviates from the output distribution. Nevertheless, almost all recent work uses simple sampling distributions that require a large sample size to mitigate the bias. In this work, we propose a new class of kernel based sampling methods and develop an efficient sampling algorithm. Kernel based sampling adapts to the model as it is trained, thus resulting in low bias. It can also be easily applied to many models because it relies only on the model’s last hidden layer. We empirically study the trade-off of bias, sampling distribution and sample size and show that kernel based sampling results in low bias with few samples.", "title": "" }, { "docid": "c432a44e48e777a7a3316c1474f0aa12", "text": "In this paper, we present an algorithm that generates high dynamic range (HDR) images from multi-exposed low dynamic range (LDR) stereo images. The vast majority of cameras in the market only capture a limited dynamic range of a scene. Our algorithm first computes the disparity map between the stereo images. The disparity map is used to compute the camera response function which in turn results in the scene radiance maps. A refinement step for the disparity map is then applied to eliminate edge artifacts in the final HDR image. Existing methods generate HDR images of good quality for still or slow motion scenes, but give defects when the motion is fast. Our algorithm can deal with images taken during fast motion scenes and tolerate saturation and radiometric changes better than other stereo matching algorithms.", "title": "" }, { "docid": "6d1c4530ba67b931729d9773debabb65", "text": "This paper explores the idea that the universe is a virtual reality created by information processing, and relates this strange idea to the findings of modern physics about the physical world. The virtual reality concept is familiar to us from online worlds, but the world as a virtual reality is usually a subject for science fiction rather than science. Yet logically the world could be an information simulation running on a three-dimensional space-time screen. Indeed, that the essence of the universe is information has advantages, e.g. if matter, charge, energy and movement are aspects of information, the many conservation laws could become a single law of information conservation. If the universe were a virtual reality, its creation at the big bang would no longer be paradoxical, as every virtual system must be booted up. It is suggested that whether the world is an objective or a virtual reality is a matter for science to resolve. 
Modern computer science can help suggest a model that derives core physical properties like space, time, light, matter and movement from information processing. Such an approach could reconcile relativity and quantum theories, with the former being how information processing creates space-time, and the latter how it creates energy and matter.", "title": "" } ]
scidocsrr
385b6df1c1c5205c38553107f3fa29e8
Structural basis of long-term potentiation in single dendritic spines
[ { "docid": "c0dbb410ebd6c84bd97b5f5e767186b3", "text": "A new hypothesis about the role of focused attention is proposed. The feature-integration theory of attention suggests that attention must be directed serially to each stimulus in a display whenever conjunctions of more than one separable feature are needed to characterize or distinguish the possible objects presented. A number of predictions were tested in a variety of paradigms including visual search, texture segregation, identification and localization, and using both separable dimensions (shape and color) and local elements or parts of figures (lines, curves, etc. in letters) as the features to be integrated into complex wholes. The results were in general consistent with the hypothesis. They offer a new set of criteria for distinguishing separable from integral features and a new rationale for predicting which tasks will show attention limits and which will not.", "title": "" } ]
[ { "docid": "b1594132df243bbd68c91c84a54382c3", "text": "Several wearable computing or ubiquitous computing research projects have detected and distinguished user motion activities by attaching accelerometers in known positions and orientations on the user’s body. This paper observes that the orientation constraint can probably be relaxed. An estimate of the constant gravity vector can be obtained by averaging accelerometer samples. This gravity vector estimate in turn enables estimation of the vertical component and the magnitude of the horizontal component of the user’s motion, independently of how the three-axis accelerometer system is oriented.", "title": "" }, { "docid": "e45575a4dad73971f6ed24828a76a4bd", "text": "Sleep paralysis (SP) is a dissociative state that occurs mainly during awakening. SP is characterized by altered motor, perceptual, emotional and cognitive functions, such as inability to perform voluntary movements, visual hallucinations, feelings of chest pressure, delusions about a frightening presence and, in some cases, fear of impending death. Most people experience SP rarely, but typically when sleeping in supine position; however, SP is considered a disease (parasomnia) when recurrent and/or associated to emotional burden. Interestingly, throughout human history, different peoples interpreted SP under a supernatural view. For example, Canadian Eskimos attribute SP to spells of shamans, who hinder the ability to move, and provoke hallucinations of a shapeless presence. In the Japanese tradition, SP is due to a vengeful spirit who suffocates his enemies while sleeping. In Nigerian culture, a female demon attacks during dreaming and provokes paralysis. A modern manifestation of SP is the report of \"alien abductions\", experienced as inability to move during awakening associated with visual hallucinations of aliens. In all, SP is a significant example of how a specific biological phenomenon can be interpreted and shaped by different cultural contexts. In order to further explore the ethnopsychology of SP, in this review we present the \"Pisadeira\", a character of Brazilian folklore originated in the country's Southeast, but also found in other regions with variant names. Pisadeira is described as a crone with long fingernails who lurks on roofs at night and tramples on the chest of those who sleep on a full stomach with the belly up. This legend is mentioned in many anthropological accounts; however, we found no comprehensive reference on the Pisadeira from the perspective of sleep science. Here, we aim to fill this gap. We first review the neuropsychological aspects of SP, and then present the folk tale of the Pisadeira. Finally, we summarize the many historical and artistic manifestations of SP in different cultures, emphasizing the similarities and differences with the Pisadeira.", "title": "" }, { "docid": "9c715e50cf36e14312407ed722fe7a7d", "text": "Usual medical care often fails to meet the needs of chronically ill patients, even in managed, integrated delivery systems. The medical literature suggests strategies to improve outcomes in these patients. Effective interventions tend to fall into one of five areas: the use of evidence-based, planned care; reorganization of practice systems and provider roles; improved patient self-management support; increased access to expertise; and greater availability of clinical information. The challenge is to organize these components into an integrated system of chronic illness care. 
Whether this can be done most efficiently and effectively in primary care practice rather than requiring specialized systems of care remains unanswered.", "title": "" }, { "docid": "4b3e6253a2b9e6aeb8ee0f6c2446510c", "text": "Autonomous wireless sensor nodes need low-power low-speed ADCs to digitize the sensed signal. State-of-art SAR ADCs can accomplish this goal with high power-efficiency (<10fJ/conversion-step) [1-4]. The reference voltage design is critical for the ADC performance to obtain good PSRR, low line-sensitivity and a stable supply-independent full-scale range. However, solutions for efficient reference voltage generators (RVGs) are typically ignored in low-power ADC publications. In reality, due to the low power supply (usually sub-1 V) and limited available power (nW-range), the RVG is a challenge within the sensor system. In this work, a 2.4fJ/conversion-step SAR ADC with integrated reference is implemented. The 0.62V CMOS RVG consumes 25nW. To further reduce RVG power, it can be duty-cycled down to 10% with no loss in ADC performance. Additionally, the ADC uses a bidirectional dynamic comparator to improve the power efficiency even more.", "title": "" }, { "docid": "2f08b35bb6f4f9d44d1225e2d26b5395", "text": "An efficient disparity estimation and occlusion detection algorithm for multiocular systems is presented. A dynamic programming algorithm, using a multiview matching cost as well as pure geometrical constraints, is used to estimate disparity and to identify the occluded areas in the extreme left and right views. A significant advantage of the proposed approach is that the exact number of views in which each point appears (is not occluded) can be determined. The disparity and occlusion information obtained may then be used to create virtual images from intermediate viewpoints. Furthermore, techniques are developed for the coding of occlusion and disparity information, which is needed at the receiver for the reproduction of a multiview sequence using the two encoded extreme views. Experimental results illustrate the performance of the proposed techniques.", "title": "" }, { "docid": "a38d0e0d032c3e4074f9ac0f09719737", "text": "A main distinguishing feature of a wireless network compared with a wired network is its broadcast nature, in which the signal transmitted by a node may reach several other nodes, and a node may receive signals from several other nodes simultaneously. Rather than a blessing, this feature is treated more as an interference-inducing nuisance in most wireless networks today (e.g., IEEE 802.11). The goal of this paper is to show how the concept of network coding can be applied at the physical layer to turn the broadcast property into a capacity-boosting advantage in wireless ad hoc networks. Specifically, we propose a physical-layer network coding (PNC) scheme to coordinate transmissions among nodes. In contrast to \"straightforward\" network coding which performs coding arithmetic on digital bit streams after they have been received, PNC makes use of the additive nature of simultaneously arriving electromagnetic (EM) waves for equivalent coding operation. PNC can yield higher capacity than straight-forward network coding when applied to wireless networks. We believe this is a first paper that ventures into EM-wave-based network coding at the physical layer and demonstrates its potential for boosting network capacity.
PNC opens up a whole new research area because of its implications and new design requirements for the physical, MAC, and network layers of ad hoc wireless stations. The resolution of the many outstanding but interesting issues in PNC may lead to a revolutionary new paradigm for wireless ad hoc networking.", "title": "" }, { "docid": "8db59f20491739420d9b40311705dbf1", "text": "With object-oriented programming languages, Object Relational Mapping (ORM) frameworks such as Hibernate have gained popularity due to their ease of use and portability to different relational database management systems. Hibernate implements the Java Persistent API, JPA, and frees a developer from authoring software to address the impedance mismatch between objects and relations. In this paper, we evaluate the performance of Hibernate by comparing it with a native JDBC implementation using a benchmark named BG. BG rates the performance of a system for processing interactive social networking actions such as view profile, extend an invitation from one member to another, and other actions. Our key findings are as follows. First, an object-oriented Hibernate implementation of each action issues more SQL queries than its JDBC counterpart. This enables the JDBC implementation to provide response times that are significantly faster. Second, one may use the Hibernate Query Language (HQL) to refine the object-oriented Hibernate implementation to provide performance that approximates the JDBC implementation.", "title": "" }, { "docid": "562551c0f767ab8f467fccc8ff5b8244", "text": "In this research paper an attempt has been made to integrate the programmable logic controller (PLC) with elevator for developing its control system. Thus, this paper describes the application of programmable logic controller for elevator control system. The PLC used for this project is GE FANUC with six inputs and four outputs. The programming language used is ladder diagram.", "title": "" }, { "docid": "a872ab9351dc645b5799d576f5f10eb6", "text": "A new framework for advanced manufacturing is being promoted in Germany, and is increasingly being adopted by other countries. The framework represents a coalescing of digital and physical technologies along the product value chain in an attempt to transform the production of goods and services1. It is an approach that focuses on combining technologies such as additive manufacturing, automation, digital services and the Internet of Things, and it is part of a growing movement towards exploiting the convergence between emerging technologies. This technological convergence is increasingly being referred to as the ‘fourth industrial revolution’, and like its predecessors, it promises to transform the ways we live and the environments we live in. (While there is no universal agreement on what constitutes an ‘industrial revolution’, proponents of the fourth industrial revolution suggest that the first involved harnessing steam power to mechanize production; the second, the use of electricity in mass production; and the third, the use of electronics and information technology to automate production.) Yet, without up-front efforts to ensure its beneficial, responsible and responsive development, there is a very real danger that this fourth industrial revolution will not only fail to deliver on its promise, but also ultimately increase the very challenges its advocates set out to solve. 
At its heart, the fourth industrial revolution represents an unprecedented fusion between and across digital, physical and biological technologies, and a resulting anticipated transformation in how products are made and used2. This is already being experienced with the growing Internet of Things, where dynamic information exchanges between networked devices are opening up new possibilities from manufacturing to lifestyle enhancement and risk management. Similarly, a rapid amplification of 3D printing capabilities is now emerging through the convergence of additive manufacturing technologies, online data sharing and processing, advanced materials, and ‘printable’ biological systems. And we are just beginning to see the commercial use of potentially transformative convergence between cloud-based artificial intelligence and open-source hardware and software, to create novel platforms for innovative human–machine interfaces. These and other areas of development only scratch the surface of how convergence is anticipated to massively extend the impacts of the individual technologies it draws on. This is a revolution that comes with the promise of transformative social, economic and environmental advances — from eliminating disease, protecting the environment, and providing plentiful energy, food and water, to reducing inequity and empowering individuals and communities. Yet, the path towards this utopia-esque future is fraught with pitfalls — perhaps more so than with any former industrial revolution. As more people get closer to gaining access to increasingly powerful converging technologies, a complex risk landscape is emerging that lies dangerously far beyond the ken of current regulations and governance frameworks. As a result, we are in danger of creating a global ‘wild west’ of technology innovation, where our good intentions may be among the first casualties. Within this emerging landscape, cyber security is becoming an increasingly important challenge, as global digital networks open up access to manufacturing processes and connected products across the world. The risks of cyber ‘insecurity’ increase by orders of magnitude as manufacturing becomes more distributed and less conventionally securable. Distributed manufacturing is another likely outcome of the fourth industrial revolution. A powerful fusion between online resources, modular and open-source tech, and point-of-source production devices, such as 3D printers, will increasingly enable entrepreneurs to set up shop almost anywhere. While this could be a boon for local economies, it magnifies the ease with which manufacturing can slip the net of conventional regulation, while still having the ability to have a global impact. These and other challenges reflect a blurring of the line between hardware and software systems that is characteristic of the fourth industrial revolution. We are heading rapidly towards a future where hardware manufacturers are able to grow, crash and evolve physical products with the same speed that we have become accustomed to with software products. Yet, manufacturing regulations remain based on product development cycles that span years, not hours. Anticipating this high-speed future, we are already seeing the emergence of hardware capabilities that can be updated at the push of a button. Tesla Motors, for instance, recently released a software update that added hardware-based ‘autopilot’ capabilities to the company’s existing fleet of model S vehicles3. 
This early demonstration of the convergence between hardware and software reflects a growing capacity to rapidly change the behaviour of hardware systems through software modifications that lies far beyond the capacity of current regulations to identify, monitor and control. This in turn increases the potential risks to health, safety and the environment, simply because well-intentioned technologies are at some point going to fall through the holes in an increasingly inadequate regulatory net. There are many other examples where converging technologies are increasing the gap between what we can do and our understanding of how to do it responsibly. The convergence between robotics, nanotechnology and cognitive augmentation, for instance, and that between artificial intelligence, gene editing and maker communities both push us into uncertain territory. Yet despite the vulnerabilities inherent with fast-evolving technological capabilities that are tightly coupled, complex and poorly regulated, we lack even the beginnings of national or international conceptual frameworks to think about responsible decisionmaking and responsive governance. How vulnerable we will be to unintended and unwanted consequences in this convergent technologies future is unclear. What is clear though is that, without new thinking on risk, resilience and governance, and without rapidly emerging abilities to identify early warnings and take corrective action, the chances of systems based around converging technologies failing fast and failing spectacularly will only increase.", "title": "" }, { "docid": "4a3638436c7610b6012293019a646ee1", "text": "We introduce a simple semi-supervised learning approach for images based on in-painting using an adversarial loss. Images with random patches removed are presented to a generator whose task is to fill in the hole, based on the surrounding pixels. The in-painted images are then presented to a discriminator network that judges if they are real (unaltered training images) or not. This task acts as a regularizer for standard supervised training of the discriminator. Using our approach we are able to directly train large VGG-style networks in a semi-supervised fashion. We evaluate on STL-10 and PASCAL datasets, where our approach obtains performance comparable or superior to existing methods.", "title": "" }, { "docid": "90ee6bedafe6a0ad7d6fd2c07bab5af9", "text": "With over 40 years of history, image understanding, in particular, scene classification and recognition remains central to machine vision. With an abundance of image and video databases, it is necessary to be able to sort and retrieve the images and videos in a way that is both efficient and effective. This is possible only if the categories of images and/or their context are known to a user. Hence, the ability to classify and recognize scenes accurately is of utmost importance. This paper presents a brief survey of the advances in scene recognition and classification algorithms. Depending on its goal, image understanding(IU) can be defined in many different ways. However, in general, IU means describing the image content, the objects in it, location and relations between objects, and most recently, describing the events in an image. In (Ralescu 1995) IU is equated with producing a verbal description of the image content. 
Scene analysis (as part of IU) and categorization is a highly useful ability of humans, who are able to categorize complex natural scenes containing animals or vehicles very quickly (Thorpe, Fize, and Marlot 1996), with little or no attention (Li et al. 2003). When a scene is presented to humans, they are able to quickly identify the scene, i.e., within a short period of exposure (< 100 ms). How do humans perform all of these tasks the way they do, is yet to be fully understood. To date, the classic text by Marr (Marr 1982) remains one of the sources of understanding the human vision systems. Many researchers have tried to imbibe this incredible capability of the human vision system into their algorithms for image processing, scene understanding and recognition. In the presence of a wealth of literature on this and related subjects, surveys of the field, even a limited one, as the present one necessarily is (due to space constraints) are bound to be very useful, by reviewing the methods for scene recognition and classification. Perhaps, the first issue to consider is the concept of scene as a technical concept to capture the natural concept. According to Xiao et al. (Xiao et al. 2010) a scene is a place in which a human can act within, or a place to which a human being could navigate. Therefore, scene recognition and scene classification algorithms must delve into understanding the semantic context of the scene. According to how a scene is recognized in an image, scene recognition algorithms can be broadly divided into two categories. • Scene recognition based on object detection. • Scene recognition using low-level image features Scene recognition using object recognition (SR-OR) Using object recognition for scene classification is a straight-forward and intuitive approach to scene classification and it can assist in distinguishing very complex scenes which might otherwise prove difficult to do using standard low level features. In the paper by Li-Jia Li et al. (Li et al. 2010) the authors argue that although ”robust low-level image features have been proven to be effective representations for scene classification; but pixels, or even local image patches, carry little semantic meanings. For high level visual tasks, such low-level image representations are potentially not enough. ” To combat this drawback of local features, they propose a high-level image representation, called the Object Bank(OB), where an image is represented by integrating the response of the image to various object detectors. These object detectors or filters are blind to the testing dataset or visual task. Using OB representation, superior performances on high level visual recognition tasks can be achieved with simple regularized logistic regression. Their algorithm uses the current state-ofthe-art object detectors of Felzenszwalb et al. (Felzenszwalb et al. 2010), as well as the geometric context classifiers (stuff detectors) of Hoeim et al. (Hoiem, Efros, and Hebert 2005) for pre-training the object detectors. OB offers a rich set of object features, while presenting a challenge – curse of dimensionality due to the presence of multiple class of objects within a single image, which then yields feature vectors of very high dimension. The performance of the system plateaus at a point when the number of object detection filters is too high. According to the authors, the system performance is best, when the number of object filters is moderate. Vineeta Singh et al. MAICS 2017 pp. 
85–91", "title": "" }, { "docid": "e1ecae98985cf87523492605bcfb468c", "text": "This four-part series of articles provides an overview of the neurological examination of the elderly patient, particularly as it applies to patients with cognitive impairment, dementia or cerebrovascular disease.The focus is on the method and interpretation of the bedside physical examination; the mental state and cognitive examinations are not covered in this review.Part 1 (featured in the September issue) began with an approach to the neurological examination in normal aging and in disease, and reviewed components of the general physical,head and neck,neurovascular and cranial nerve examinations relevant to aging and dementia.Part 2 (featured in the October issue) covered the motor examination with an emphasis on upper motor neuron signs and movement disorders. Part 3(featured in the November issue) reviewed the assessment of coordination,balance and gait,and Part 4, featured here, discusses the muscle stretch reflexes, pathological and primitive reflexes, and sensory examination, and offers concluding remarks.Throughout this series, special emphasis is placed on the evaluation and interpretation of neurological signs in light of findings considered normal in the elderly.", "title": "" }, { "docid": "123a21d9913767e1a8d1d043f6feab01", "text": "Permanent magnet synchronous machines generate parasitic torque pulsations owing to distortion of the stator flux linkage distribution, variable magnetic reluctance at the stator slots, and secondary phenomena. The consequences are speed oscillations which, although small in magnitude, deteriorate the performance of the drive in demanding applications. The parasitic effects are analysed and modelled using the complex state-variable approach. A fast current control system is employed to produce highfrequency electromagnetic torque components for compensation. A self-commissioning scheme is described which identifies the machine parameters, particularly the torque ripple functions which depend on the angular position of the rotor. Variations of permanent magnet flux density with temperature are compensated by on-line adaptation. The algorithms for adaptation and control are implemented in a standard microcontroller system without additional hardware. The effectiveness of the adaptive torque ripple compensation is demonstrated by experiments.", "title": "" }, { "docid": "e2b3c449dcb2b37f96a91987734ace40", "text": "PURPOSE\n(1) To determine the radiographic correction/healing rate, patient-reported outcomes, reoperation rate, and complication rate after distal femoral osteotomy (DFO) for the valgus knee with lateral compartment pathology. (2) To summarize the reported results of medial closing wedge and lateral opening wedge DFO.\n\n\nMETHODS\nWe conducted a systematic review of PubMed, MEDLINE, and CINAHL to identify studies reporting outcomes of DFOs for the valgus knee. Keywords included \"distal femoral osteotomy,\" \"chondral,\" \"cartilage,\" \"valgus,\" \"joint restoration,\" \"joint preservation,\" \"arthritis,\" and \"gonarthrosis.\" Two authors first reviewed the articles; our study exclusion criteria were then applied, and the articles were included on the basis relevance defined by the aforementioned criteria. The Methodological Index for Nonrandomized Studies scale judged the quality of the literature. 
Sixteen studies were relevant to the research questions out of 191 studies identified by the original search.\n\n\nRESULTS\nSixteen studies were identified reporting on 372 osteotomies with mean follow-up of 45 to 180 months. All studies reported mean radiographic correction to a near neutral mechanical axis, with 3.2% nonunion and 3.8% delayed union rates. There was a 9% complication rate and a 34% reoperation rate, of which 15% were converted to arthroplasty. There were similar results reported for medial closing wedge and lateral opening wedge techniques, with a higher conversion to arthroplasty in the medial closing wedge that was confounded by longer mean follow-up in this group (mean follow-up 100 v 58 months).\n\n\nCONCLUSIONS\nDFOs for the valgus knee with lateral compartment disease provide improvements in patient-reported knee health-related quality of life at midterm follow-up but have high rates of reoperation. No evidence exists proving better results of either the lateral opening wedge or medial closing wedge techniques.\n\n\nLEVEL OF EVIDENCE\nLevel IV, systematic review of Level IV studies.", "title": "" }, { "docid": "861f76c061b9eb52ed5033bdeb9a3ce5", "text": "2007S. Robson Walton Chair in Accounting, University of Arkansas 2007-2014; 2015-2016 Accounting Department Chair, University of Arkansas 2014Distinguished Professor, University of Arkansas 2005-2014 Professor, University of Arkansas 2005-2008 Ralph L. McQueen Chair in Accounting, University of Arkansas 2002-2005 Associate Professor, University of Kansas 1997-2002 Assistant Professor, University of Kansas", "title": "" }, { "docid": "7bef0f8e1df99d525f3d2356bd129e45", "text": "The term 'participation' is traditionally used in HCI to describe the involvement of users and stakeholders in design processes, with a pretext of distributing control to participants to shape their technological future. In this paper we ask whether these values can hold up in practice, particularly as participation takes on new meanings and incorporates new perspectives. We argue that much HCI research leans towards configuring participation. In exploring this claim we explore three questions that we consider important for understanding how HCI configures participation; Who initiates, directs and benefits from user participation in design? In what forms does user participation occur? How is control shared with users in design? In answering these questions we consider the conceptual, ethical and pragmatic problems this raises for current participatory HCI research. Finally, we offer directions for future work explicitly dealing with the configuration of participation.", "title": "" }, { "docid": "22a5aa4b9cbafa3cf63b6cf4aff60ba3", "text": "characteristics, burnout, and (other-ratings of) performance (N 146). We hypothesized that job demands (e.g., work pressure and emotional demands) would be the most important antecedents of the exhaustion component of burnout, which, in turn, would predict in-role performance (hypothesis 1). In contrast, job resources (e.g., autonomy and social support) were hypothesized to be the most important predictors of extra-role performance, through their relationship with the disengagement component of burnout (hypothesis 2). In addition, we predicted that job resources would buffer the relationship between job demands and exhaustion (hypothesis 3), and that exhaustion would be positively related to disengagement (hypothesis 4). 
The results of structural equation modeling analyses provided strong support for hypotheses 1, 2, and 4, but rejected hypothesis 3. These findings support the JD-R model’s claim that job demands and job resources initiate two psychological processes, which eventually affect organizational outcomes. © 2004 Wiley Periodicals, Inc.", "title": "" }, { "docid": "8d7ece4b518223bc8156b173875d06e3", "text": "This paper presents two robot devices for use in the rehabilitation of upper limb movements and reports the quantitative parameters obtained to characterize the rate of improvement, thus allowing a precise monitoring of patient's recovery. A one degree of freedom (DoF) wrist manipulator and a two-DoF elbow-shoulder manipulator were designed using an admittance control strategy; if the patient could not move the handle, the devices completed the motor task. Two groups of chronic post-stroke patients (G1 n=7, and G2 n=9) were enrolled in a three week rehabilitation program including standard physical therapy (45 min daily) plus treatment by means of robot devices, respectively, for wrist and elbow-shoulder movements (40 min, twice daily). Both groups were evaluated by means of standard clinical assessment scales and a new robot measured evaluation metrics that included an active movement index quantifying the patient's ability to execute the assigned motor task without robot assistance, the mean velocity, and a movement accuracy index measuring the distance of the executed path from the theoretic one. After treatment, both groups improved their motor deficit and disability. In G1, there was a significant change in the clinical scale values (p<0.05) and range of motion wrist extension (p<0.02). G2 showed a significant change in clinical scales (p<0.01), in strength (p<0.05) and in the robot measured parameters (p<0.01). The relationship between robot measured parameters and the clinical assessment scales showed a moderate and significant correlation (r>0.53 p<0.03). Our findings suggest that robot-aided neurorehabilitation may improve the motor outcome and disability of chronic post-stroke patients. The new robot measured parameters may provide useful information about the course of treatment and its effectiveness at discharge.", "title": "" }, { "docid": "b48d9053c70f51aa766a3f4706912654", "text": "Social tags are free text labels that are applied to items such as artists, albums and songs. Captured in these tags is a great deal of information that is highly relevant to Music Information Retrieval (MIR) researchers including information about genre, mood, instrumentation, and quality. Unfortunately there is also a great deal of irrelevant information and noise in the tags. Imperfect as they may be, social tags are a source of human-generated contextual knowledge about music that may become an essential part of the solution to many MIR problems. In this article, we describe the state of the art in commercial and research social tagging systems for music. We describe how tags are collected and used in current systems. We explore some of the issues that are encountered when using tags, and we suggest possible areas of exploration for future research.", "title": "" }, { "docid": "3e01af44d4819d8c78615e66f56e5983", "text": "The amount of dynamic content on the web has been steadily increasing. Scripting languages such as JavaScript and browser extensions such as Adobe's Flash have been instrumental in creating web-based interfaces that are similar to those of traditional applications. 
Dynamic content has also become popular in advertising, where Flash is used to create rich, interactive ads that are displayed on hundreds of millions of computers per day. Unfortunately, the success of Flash-based advertisements and applications attracted the attention of malware authors, who started to leverage Flash to deliver attacks through advertising networks. This paper presents a novel approach whose goal is to automate the analysis of Flash content to identify malicious behavior. We designed and implemented a tool based on the approach, and we tested it on a large corpus of real-world Flash advertisements. The results show that our tool is able to reliably detect malicious Flash ads with limited false positives. We made our tool available publicly and it is routinely used by thousands of users.", "title": "" } ]
scidocsrr